In [None]:
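The code of this cell was not captured in this export; a minimal sketch of the Colab call that mounts Google Drive and prints the line below:

from google.colab import drive
drive.mount('/content/drive')   # prints "Mounted at /content/drive" once authorized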
Mounted at /content/drive
Check the connected graphics card and CUDA version
In [None]:
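The cell body is missing here; checking the GPU attached to the runtime is usually just the shell call below (a minimal sketch — the original cell may have contained more):

!nvidia-smi   # lists the attached GPU, driver version, and highest supported CUDA version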
Thu Apr 22 07:05:12 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 34C P8 27W / 149W | 0MiB / 11441MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
In [None]:
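Again the cell body is not preserved; a sketch that queries the installed CUDA compiler:

!nvcc --version   # prints the CUDA toolkit (compiler) version installed in the runtime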
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
Check torch, torchvision, and torchtext versions
In [None]:
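The missing cell presumably just printed the installed versions, e.g.:

import torch
import torchvision
import torchtext

print('torch version:', torch.__version__)
print('torchvision version:', torchvision.__version__)
print('torchtext version:', torchtext.__version__)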
torch version: 1.8.1+cu101
torchvision version: 0.9.1+cu101
torchtext version: 0.9.1
Basic 5. Three ways to build a model from nn.Module
- Sequential API (easy, high-level)
- Functional API (general way)
- Subclassing API (PyTorch standard)
- Sequential API approach. Best suited for designing simple models.
In [None]:
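The original cell is not preserved in this export. A sketch that would build and print the two models shown below — first with default numeric layer names, then with names supplied through an OrderedDict — might look like this (the printed strings and the width of the '=' separator are guesses at the original):

import torch.nn as nn
from collections import OrderedDict

# Layers passed positionally get numeric names: 0, 1, 2, ...
model1 = nn.Sequential(
    nn.Linear(20, 30),
    nn.ReLU(),
    nn.Linear(30, 10),
    nn.Softmax(),
)
print('Structure check (default layer names):')
print(list(model1.modules()))

print('=' * 65)

# An OrderedDict lets you choose the layer names yourself.
model2 = nn.Sequential(OrderedDict([
    ('hidden1', nn.Linear(20, 30)),
    ('activation1', nn.ReLU()),
    ('hidden2', nn.Linear(30, 10)),
    ('activation2', nn.Softmax()),
]))
print('Structure check (defined layer names):')
print(list(model2.modules()))

print('Layers are also accessible by name, model2.hidden1')
print(model2.hidden1)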
Structure check (default layer names):
[Sequential(
(0): Linear(in_features=20, out_features=30, bias=True)
(1): ReLU()
(2): Linear(in_features=30, out_features=10, bias=True)
(3): Softmax(dim=None)
), Linear(in_features=20, out_features=30, bias=True), ReLU(), Linear(in_features=30, out_features=10, bias=True), Softmax(dim=None)]
=================================================================
Structure check (defined layer names):
[Sequential(
(hidden1): Linear(in_features=20, out_features=30, bias=True)
(activation1): ReLU()
(hidden2): Linear(in_features=30, out_features=10, bias=True)
(activation2): Softmax(dim=None)
), Linear(in_features=20, out_features=30, bias=True), ReLU(), Linear(in_features=30, out_features=10, bias=True), Softmax(dim=None)]
Layers are also accessible by name, model2.hidden1
Linear(in_features=20, out_features=30, bias=True)
- Functional API approach. Used when a model is awkward to express with the Sequential approach; the most general method.
- Keras supports this functional API style. In PyTorch, the next approach, the subclassing API, is used most often.
In [None]:
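The cell body is missing, but the summary below fully determines the layer graph: two input branches (28- and 64-dimensional) each pass through Dense layers, are concatenated into a 16-dimensional vector, and are reduced to a single output. A Keras sketch that reproduces those shapes and parameter counts follows; the activations, variable names, and the auto-generated layer numbering (input_3, dense_7, ...) are assumptions and depend on the session:

from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, concatenate

# First branch: 28-dimensional input.
input_a = Input(shape=(28,))
x = Dense(16, activation='relu')(input_a)
x = Dense(8, activation='relu')(x)

# Second branch: 64-dimensional input.
input_b = Input(shape=(64,))
y = Dense(64, activation='relu')(input_b)
y = Dense(32, activation='relu')(y)
y = Dense(8, activation='relu')(y)

# Merge both branches (8 + 8 = 16 features), then shrink to a single output.
merged = concatenate([x, y])
z = Dense(2, activation='relu')(merged)
output = Dense(1, activation='sigmoid')(z)

model = Model(inputs=[input_a, input_b], outputs=output)
print(model.summary())   # summary() prints the table and returns None, hence the trailing "None"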
Model: "model_4"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 64)] 0
__________________________________________________________________________________________________
input_3 (InputLayer) [(None, 28)] 0
__________________________________________________________________________________________________
dense_9 (Dense) (None, 64) 4160 input_4[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 16) 464 input_3[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 32) 2080 dense_9[0][0]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 8) 136 dense_7[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 8) 264 dense_10[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 16) 0 dense_8[0][0]
dense_11[0][0]
__________________________________________________________________________________________________
dense_12 (Dense) (None, 2) 34 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_13 (Dense) (None, 1) 3 dense_12[0][0]
==================================================================================================
Total params: 7,141
Trainable params: 7,141
Non-trainable params: 0
__________________________________________________________________________________________________
None
- Subclassing API approach. The standard way in PyTorch, implemented in an object-oriented style.
In [21]:
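The cell defining the subclassed model is missing; a sketch that reproduces the printed module below, assuming 3-channel 28×28 inputs (so the flattened feature size is 64 * 7 * 7 = 3136), could be:

import torch.nn as nn
from collections import OrderedDict

class my_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two conv blocks, each halving the spatial size with max pooling.
        self.conv1 = nn.Sequential(OrderedDict([
            ('conv_1', nn.Conv2d(3, 32, kernel_size=3, padding=1)),
            ('relu_1', nn.ReLU()),
            ('maxpool_1', nn.MaxPool2d(2)),
        ]))
        self.conv2 = nn.Sequential(OrderedDict([
            ('conv_1', nn.Conv2d(32, 64, kernel_size=3, padding=1)),
            ('relu_1', nn.ReLU()),
            ('maxpool_1', nn.MaxPool2d(2)),
        ]))
        self.fc1 = nn.Linear(64 * 7 * 7, 10)   # 3136 features, assuming 28x28 inputs
        self.classifier = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)   # flatten before the fully connected layer
        x = self.fc1(x)
        return self.classifier(x)

model = my_CNN()
print(model)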
my_CNN(
(conv1): Sequential(
(conv_1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_1): ReLU()
(maxpool_1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv2): Sequential(
(conv_1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_1): ReLU()
(maxpool_1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(fc1): Linear(in_features=3136, out_features=10, bias=True)
(classifier): Softmax(dim=1)
)