In [None]:
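# (reconstruction -- the original cell source was lost in export; this is the
# standard Colab call that produces the output below)
from google.colab import drive
drive.mount('/content/drive')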
Mounted at /content/drive
Checking the connected graphics card and CUDA version
In [None]:
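# (reconstruction) query the attached GPU and driver/CUDA versions
!nvidia-smi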
Sun Apr 11 07:45:10 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   68C    P8    11W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
In [None]:
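# (reconstruction) check the CUDA compiler version
!nvcc --version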
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
Checking the torch, torchvision, and torchtext versions
In [28]:
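# (reconstruction) print the installed library versions
import torch
import torchvision
import torchtext
print('torch version:', torch.__version__)
print('torchvision version:', torchvision.__version__)
print('torchtext version:', torchtext.__version__)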
torch version: 1.8.1+cu101
torchvision version: 0.9.1+cu101
torchtext version: 0.9.1
Basic 2. Tensor
Tensor vs. Ndarray
In [29]:
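# (assumed) imports used by the cells below; this cell produced no output
import numpy as np
import torch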
In [30]:
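# (reconstruction) build a 10x9 grid of random ints as the base data;
# the name `data` is assumed, and the values below are one random draw
import random
data = [[random.randint(0, 100) for _ in range(9)] for _ in range(10)]
print('random base data')
for row in data:
    print(' '.join(str(v) for v in row))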
random base data
60 66 29 88 12 84 1 54 45
63 31 25 40 61 40 65 27 68
58 33 31 62 89 46 28 49 67
48 39 25 57 62 81 7 98 4
68 15 51 47 64 1 19 11 14
44 41 51 72 44 73 59 79 54
82 5 50 71 60 25 88 1 14
79 74 71 78 78 40 71 44 47
2 84 37 28 33 8 56 54 80
18 31 2 69 59 66 34 51 14
In [31]:
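# (reconstruction) cast the same nested list to an ndarray and to a tensor;
# np.int32 is assumed so the tensor prints dtype=torch.int32 as shown below
ndarray = np.array(data, dtype=np.int32)
print('casted to ndarray:')
print(ndarray)
print('=' * 50)
as_tensor = torch.as_tensor(ndarray)
print('casted to tensor:')
print(as_tensor)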
casted to ndarray:
[[60 66 29 88 12 84 1 54 45]
[63 31 25 40 61 40 65 27 68]
[58 33 31 62 89 46 28 49 67]
[48 39 25 57 62 81 7 98 4]
[68 15 51 47 64 1 19 11 14]
[44 41 51 72 44 73 59 79 54]
[82 5 50 71 60 25 88 1 14]
[79 74 71 78 78 40 71 44 47]
[ 2 84 37 28 33 8 56 54 80]
[18 31 2 69 59 66 34 51 14]]
==================================================
casted to tensor:
tensor([[60, 66, 29, 88, 12, 84, 1, 54, 45],
[63, 31, 25, 40, 61, 40, 65, 27, 68],
[58, 33, 31, 62, 89, 46, 28, 49, 67],
[48, 39, 25, 57, 62, 81, 7, 98, 4],
[68, 15, 51, 47, 64, 1, 19, 11, 14],
[44, 41, 51, 72, 44, 73, 59, 79, 54],
[82, 5, 50, 71, 60, 25, 88, 1, 14],
[79, 74, 71, 78, 78, 40, 71, 44, 47],
[ 2, 84, 37, 28, 33, 8, 56, 54, 80],
[18, 31, 2, 69, 59, 66, 34, 51, 14]], dtype=torch.int32)
Dimension, Shape
np.reshape == torch.view
In [32]:
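# (reconstruction) torch.view is the counterpart of np.reshape;
# 3 * 2 * 15 == 10 * 9, so the 90 elements are just re-grouped
print(f'shape of as_tensor, {as_tensor.shape}')
for row in as_tensor:
    print(row)
reshaped_tensor = as_tensor.view(3, 2, 15)
print(f'shape of reshaped_tensor, {reshaped_tensor.shape}')
for mat in reshaped_tensor:
    print(mat)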
shape of as_tensor, torch.Size([10, 9])
tensor([60, 66, 29, 88, 12, 84, 1, 54, 45], dtype=torch.int32)
tensor([63, 31, 25, 40, 61, 40, 65, 27, 68], dtype=torch.int32)
tensor([58, 33, 31, 62, 89, 46, 28, 49, 67], dtype=torch.int32)
tensor([48, 39, 25, 57, 62, 81, 7, 98, 4], dtype=torch.int32)
tensor([68, 15, 51, 47, 64, 1, 19, 11, 14], dtype=torch.int32)
tensor([44, 41, 51, 72, 44, 73, 59, 79, 54], dtype=torch.int32)
tensor([82, 5, 50, 71, 60, 25, 88, 1, 14], dtype=torch.int32)
tensor([79, 74, 71, 78, 78, 40, 71, 44, 47], dtype=torch.int32)
tensor([ 2, 84, 37, 28, 33, 8, 56, 54, 80], dtype=torch.int32)
tensor([18, 31, 2, 69, 59, 66, 34, 51, 14], dtype=torch.int32)
shape of reshaped_tensor, torch.Size([3, 2, 15])
tensor([[60, 66, 29, 88, 12, 84, 1, 54, 45, 63, 31, 25, 40, 61, 40],
[65, 27, 68, 58, 33, 31, 62, 89, 46, 28, 49, 67, 48, 39, 25]],
dtype=torch.int32)
tensor([[57, 62, 81, 7, 98, 4, 68, 15, 51, 47, 64, 1, 19, 11, 14],
[44, 41, 51, 72, 44, 73, 59, 79, 54, 82, 5, 50, 71, 60, 25]],
dtype=torch.int32)
tensor([[88, 1, 14, 79, 74, 71, 78, 78, 40, 71, 44, 47, 2, 84, 37],
[28, 33, 8, 56, 54, 80, 18, 31, 2, 69, 59, 66, 34, 51, 14]],
dtype=torch.int32)
np.newaxis == torch.unsqueeze
In [33]:
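# (reconstruction) unsqueeze(1) inserts a size-1 axis, like arr[:, np.newaxis, :]
print(f'shape of as_tensor, {as_tensor.shape}')
for row in as_tensor:
    print(row)
reshaped_tensor = as_tensor.unsqueeze(1)
print(f'shape of reshaped_tensor, {reshaped_tensor.shape}')
for row in reshaped_tensor:
    print(row)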
shape of as_tensor, torch.Size([10, 9])
tensor([60, 66, 29, 88, 12, 84, 1, 54, 45], dtype=torch.int32)
tensor([63, 31, 25, 40, 61, 40, 65, 27, 68], dtype=torch.int32)
tensor([58, 33, 31, 62, 89, 46, 28, 49, 67], dtype=torch.int32)
tensor([48, 39, 25, 57, 62, 81, 7, 98, 4], dtype=torch.int32)
tensor([68, 15, 51, 47, 64, 1, 19, 11, 14], dtype=torch.int32)
tensor([44, 41, 51, 72, 44, 73, 59, 79, 54], dtype=torch.int32)
tensor([82, 5, 50, 71, 60, 25, 88, 1, 14], dtype=torch.int32)
tensor([79, 74, 71, 78, 78, 40, 71, 44, 47], dtype=torch.int32)
tensor([ 2, 84, 37, 28, 33, 8, 56, 54, 80], dtype=torch.int32)
tensor([18, 31, 2, 69, 59, 66, 34, 51, 14], dtype=torch.int32)
shape of reshaped_tensor, torch.Size([10, 1, 9])
tensor([[60, 66, 29, 88, 12, 84, 1, 54, 45]], dtype=torch.int32)
tensor([[63, 31, 25, 40, 61, 40, 65, 27, 68]], dtype=torch.int32)
tensor([[58, 33, 31, 62, 89, 46, 28, 49, 67]], dtype=torch.int32)
tensor([[48, 39, 25, 57, 62, 81, 7, 98, 4]], dtype=torch.int32)
tensor([[68, 15, 51, 47, 64, 1, 19, 11, 14]], dtype=torch.int32)
tensor([[44, 41, 51, 72, 44, 73, 59, 79, 54]], dtype=torch.int32)
tensor([[82, 5, 50, 71, 60, 25, 88, 1, 14]], dtype=torch.int32)
tensor([[79, 74, 71, 78, 78, 40, 71, 44, 47]], dtype=torch.int32)
tensor([[ 2, 84, 37, 28, 33, 8, 56, 54, 80]], dtype=torch.int32)
tensor([[18, 31, 2, 69, 59, 66, 34, 51, 14]], dtype=torch.int32)
Concatenation, Stacking
np.concatenate([], axis=) == torch.cat([], dim=)
In [34]:
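# (reconstruction) concatenate two 2x2 tensors along each axis
x = torch.FloatTensor([[1, 2], [3, 4]])
y = torch.FloatTensor([[5, 6], [7, 8]])
print('concat with dim=0')
for row in torch.cat([x, y], dim=0):   # stacked vertically -> (4, 2)
    print(row)
print('concat with dim=1')
for row in torch.cat([x, y], dim=1):   # joined side by side -> (2, 4)
    print(row)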
concat with dim=0
tensor([1., 2.])
tensor([3., 4.])
tensor([5., 6.])
tensor([7., 8.])
concat with dim=1
tensor([1., 2., 5., 6.])
tensor([3., 4., 7., 8.])
torch.stack(): performs the unsqueeze step automatically.
In [None]:
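# (reconstruction) stack two 1-D tensors along a new axis;
# equivalent to torch.cat([a.unsqueeze(1), b.unsqueeze(1)], dim=1)
a = torch.FloatTensor([1, 2, 3])
b = torch.FloatTensor([4, 5, 6])
stacked = torch.stack([a, b], dim=1)
print(stacked)
print(stacked.shape)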
tensor([[1., 4.],
[2., 5.],
[3., 6.]])
torch.Size([3, 2])
ones_like, zeros_like
Functions that create a tensor initialized to 0 or 1 with the same shape as a given matrix, much like their NumPy counterparts.
In [None]:
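# (reconstruction) create all-ones / all-zeros tensors with x's shape and dtype
x = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
print(torch.ones_like(x))
print(torch.zeros_like(x))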
tensor([[1., 1., 1.],
[1., 1., 1.]])
tensor([[0., 0., 0.],
[0., 0., 0.]])
In-place Operation (important! ★)
These perform overwriting operations, like C's ++ or Python's +=, while keeping a record of the operation. (This structure is what later makes it possible to debug backpropagation when building a neural network.)
In [None]:
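# (reconstruction) mul() returns a new tensor; mul_() overwrites x in place
x = torch.FloatTensor([[1, 2], [3, 4]])
print(x.mul(2.))   # out-of-place: x itself is untouched
print(x)
print(x.mul_(2.))  # trailing underscore = in-place
print(x)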
tensor([[2., 4.],
[6., 8.]])
tensor([[1., 2.],
[3., 4.]])
tensor([[2., 4.],
[6., 8.]])
tensor([[2., 4.],
[6., 8.]])