In [None]:
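# The original cell contents were lost in the export; a minimal sketch,
# assuming the standard Colab Drive helper that produces the output below:
from google.colab import drive
drive.mount('/content/drive')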
Mounted at /content/drive
Checking the attached graphics card and CUDA version
In [None]:
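# Original cell lost in export; presumably the usual shell command for
# inspecting the attached GPU, driver version, and CUDA version:
!nvidia-smi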
Wed Apr  7 11:23:37 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   69C    P8    34W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
In [None]:
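# Original cell lost in export; presumably the CUDA compiler version check:
!nvcc --version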
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
Checking the torch, torchvision, and torchtext versions
In [None]:
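# Original cell lost in export; a sketch that reproduces the output below:
import torch
import torchvision
import torchtext

print(f"torch version: {torch.__version__}")
print(f"torchvision version: {torchvision.__version__}")
print(f"torchtext version: {torchtext.__version__}")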
torch version: 1.8.1+cu101
torchvision version: 0.9.1+cu101
torchtext version: 0.9.1
Basic 1. Tensor
Tensor vs. Ndarray
In [None]:
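# This cell is empty in the export; presumably the imports used
# throughout the rest of this section:
import numpy as np
import torch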
In [None]:
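# Original cell lost in export; a sketch that generates a comparable
# 8x10 grid of random integers (exact values will differ on each run):
import random

base_data = [[random.randint(0, 100) for _ in range(10)] for _ in range(8)]

print("random base data")
for row in base_data:
    print(*row)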
random base data
46 13 8 58 43 78 52 97 29 10
15 51 70 14 3 16 88 29 71 58
86 63 96 83 2 80 56 18 63 70
48 88 52 84 79 70 72 50 48 97
75 67 59 41 89 18 67 85 46 12
39 31 16 33 91 0 14 46 41 27
65 18 73 77 63 69 42 42 90 49
90 49 8 19 47 60 77 64 29 93
In [None]:
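# Original cell lost in export; a sketch of casting the same nested list
# to a NumPy ndarray and to a PyTorch tensor. The explicit int32 dtype is
# an assumption, chosen to match the dtype shown in the output below:
as_ndarray = np.array(base_data)
as_tensor = torch.tensor(base_data, dtype=torch.int32)

print("casted to ndarray:")
print(as_ndarray)
print("=" * 50)
print("casted to tensor:")
print(as_tensor)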
casted to ndarray:
[[46 13  8 58 43 78 52 97 29 10]
 [15 51 70 14  3 16 88 29 71 58]
 [86 63 96 83  2 80 56 18 63 70]
 [48 88 52 84 79 70 72 50 48 97]
 [75 67 59 41 89 18 67 85 46 12]
 [39 31 16 33 91  0 14 46 41 27]
 [65 18 73 77 63 69 42 42 90 49]
 [90 49  8 19 47 60 77 64 29 93]]
==================================================
casted to tensor:
tensor([[46, 13,  8, 58, 43, 78, 52, 97, 29, 10],
        [15, 51, 70, 14,  3, 16, 88, 29, 71, 58],
        [86, 63, 96, 83,  2, 80, 56, 18, 63, 70],
        [48, 88, 52, 84, 79, 70, 72, 50, 48, 97],
        [75, 67, 59, 41, 89, 18, 67, 85, 46, 12],
        [39, 31, 16, 33, 91,  0, 14, 46, 41, 27],
        [65, 18, 73, 77, 63, 69, 42, 42, 90, 49],
        [90, 49,  8, 19, 47, 60, 77, 64, 29, 93]], dtype=torch.int32)
Dimension, Shape
np.ndim == torch.dim(), np.shape == torch.shape
In [None]:
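# Original cell lost in export; a sketch comparing .ndim/.dim() and .shape
# on the ndarray and tensor created above:
print(f"dimension of as_ndarray, {as_ndarray.ndim}")
print(f"dimension of as_tensor, {as_tensor.dim()}")
print(f"shape of as_ndarray, {as_ndarray.shape}")
print(f"shape of as_tensor, {as_tensor.shape}")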
dimension of as_ndarray, 2
dimension of as_tensor, 2
shape of as_ndarray, (8, 10)
shape of as_tensor, torch.Size([8, 10])
Broadcasting ndarray == Broadcasting Tensor
In [None]:
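# Original cell lost in export; a sketch that reproduces the output below,
# broadcasting a (2,) vector across a (2, 2) matrix in both libraries.
# The operand values are an assumption inferred from the printed results:
nd_a = np.array([[1, 2], [3, 4]])
nd_b = np.array([5, 6])
print("Broadcasting in ndarray")
print(nd_a + nd_b)
print("=" * 50)

t_a = torch.tensor([[1, 2], [3, 4]])
t_b = torch.tensor([5, 6])
print("Broadcasting in tensor")
print(t_a + t_b)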
Broadcasting in ndarray
[[ 6  8]
 [ 8 10]]
==================================================
Broadcasting in tensor
tensor([[ 6,  8],
        [ 8, 10]])
NumPy axis parameter == Torch dim parameter
In [None]:
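# Original cell lost in export; a plausible reconstruction that reproduces
# the output below. The (2, 3) float tensor and the choice of mean/sum/max
# are assumptions inferred from the printed results:
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])

print("dim=1 applied")  # reducing over dim=1 collapses the columns
print(t.mean(dim=1))
print("dim=0 applied")  # reducing over dim=0 collapses the rows
print(t.sum(dim=0))
print("dim=-1 applied (you can think of this as the last dimension being removed.)")
print(t.max(dim=-1))    # max over a dim returns a (values, indices) pair
print(f"max values: {t.max(dim=-1).values}")
print(f"max indexes: {t.max(dim=-1).indices}")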
dim=1 applied
tensor([2., 5.])
dim=0 applied
tensor([5., 7., 9.])
dim=-1 applied (you can think of this as the last dimension being removed.)
torch.return_types.max(
values=tensor([3., 6.]),
indices=tensor([2, 2]))
max values: tensor([3., 6.])
max indexes: tensor([2, 2])