In [None]:
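The code of this cell was stripped in the export; judging by the output below, it mounted Google Drive. A minimal sketch, assuming the standard Colab mount call (the `google.colab` module exists only inside the Colab runtime, so the sketch degrades gracefully elsewhere):

```python
def mount_drive(path="/content/drive"):
    """Mount Google Drive when running inside Colab; no-op elsewhere."""
    try:
        from google.colab import drive  # importable only in the Colab runtime
    except ImportError:
        return False  # not running in Colab
    drive.mount(path)  # prints "Drive already mounted ..." on a second call
    return True

mounted = mount_drive()
```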
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Check the attached graphics card and CUDA version
In [None]:
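This stripped cell most likely ran the Colab shell magic `!nvidia-smi`, which produced the table below. A portable sketch that invokes the same tool only when an NVIDIA driver is present:

```python
import shutil
import subprocess

def gpu_status():
    """Return nvidia-smi's report, or a note when no NVIDIA driver exists."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (no NVIDIA driver on this machine)"
    return subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout

print(gpu_status())
```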
Mon Apr 19 14:37:53 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 38C P0 32W / 250W | 899MiB / 16280MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
In [None]:
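This cell most likely ran `!nvcc --version` to show the installed CUDA toolkit (release 11.0 here, while the driver above reports CUDA 11.2 — the two versions need not match). A portable sketch:

```python
import shutil
import subprocess

def nvcc_version():
    """Return nvcc's version banner, or a note when the toolkit is absent."""
    if shutil.which("nvcc") is None:
        return "nvcc not found (CUDA toolkit not installed)"
    return subprocess.run(["nvcc", "--version"],
                          capture_output=True, text=True).stdout

print(nvcc_version())
```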
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
Check the torch, torchvision, and torchtext versions
In [None]:
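Judging by the output format, the stripped cell printed each package's `__version__`. A sketch that also tolerates a missing package:

```python
import importlib

# Print the installed version of each package, skipping any that is absent.
for name in ("torch", "torchvision", "torchtext"):
    try:
        mod = importlib.import_module(name)
        print(f"{name} version:", mod.__version__)
    except ImportError:
        print(f"{name} is not installed")
```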
torch version: 1.8.1+cu101
torchvision version: 0.9.1+cu101
torchtext version: 0.9.1
Basic 3. Autograd
Tensor, Torch's basic data type, records every operation applied to it as it is computed, so gradients can be obtained by automatic differentiation.
In [None]:
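The stripped cell can be reconstructed from its output: with x = 1.5 and y = 3.5, a value of z = 13.75 with gradients dz/dx = 1 and dz/dy = 7 = 2y is consistent with z = x + y². A sketch under that assumption:

```python
import torch

# z = x + y**2, so dz/dx = 1 and dz/dy = 2*y
x = torch.tensor(1.5, requires_grad=True)
y = torch.tensor(3.5, requires_grad=True)
z = x + y ** 2

print("x:", x)
print("y:", y)
print("z:", z)
z.backward()  # accumulate gradients into x.grad and y.grad
print("Gradient computed on x:", x.grad.item())
print("Gradient computed on y:", y.grad.item())
```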
x: tensor(1.5000, requires_grad=True)
y: tensor(3.5000, requires_grad=True)
z: tensor(13.7500, grad_fn=<AddBackward0>)
Gradient computed on x: 1.0
Gradient computed on y: 7.0
Note that a Tensor's .grad attribute is populated only for leaf tensors that were created with requires_grad=True; it cannot be read from intermediate tensors produced during the computation.
In [None]:
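The output below is consistent with y being a non-leaf tensor such as y = 2x + 3 (y = 6 at x = 1.5) and z = y², giving dz/dx = 2y · 2 = 24; the exact expression for y is an assumption. A sketch showing that .grad stays None on the intermediate tensor:

```python
import torch

x = torch.tensor(1.5, requires_grad=True)
y = 2 * x + 3   # non-leaf: produced by operations on x (assumed expression)
z = y ** 2

print("x:", x)
print("y:", y)
print("z:", z)
z.backward()
print("Gradient computed on x:", x.grad.item())
# Accessing .grad on a non-leaf tensor returns None and emits a UserWarning.
print("Gradient computed on y:", y.grad)
```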
x: tensor(1.5000, requires_grad=True)
y: tensor(6., grad_fn=<AddBackward0>)
z: tensor(36., grad_fn=<PowBackward0>)
Gradient computed on x: 24.0
Gradient computed on y: None
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:15: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
from ipykernel import kernelapp as app
Using this autograd mechanism, let's build a linear regression model.
In [None]:
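Only the training log survived, so this is a sketch of the cell: manual gradient-descent linear regression with autograd, targeting the parameters shown in the output (w1 = (2,2,2), b1 = 1). The training data, learning rate, and device handling are assumptions:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed synthetic data: targets follow y = 2*x1 + 2*x2 + 2*x3 + 1
torch.manual_seed(0)
X = torch.rand(100, 3, device=device)
y = X @ torch.tensor([2.0, 2.0, 2.0], device=device) + 1.0

w1 = torch.zeros(3, device=device, requires_grad=True)
b1 = torch.zeros(1, device=device, requires_grad=True)
lr = 0.1  # assumed learning rate

for step in range(10000):
    pred = X @ w1 + b1
    cost = torch.mean((pred - y) ** 2)  # mean squared error
    cost.backward()
    with torch.no_grad():  # manual SGD update, then reset the gradients
        w1 -= lr * w1.grad
        b1 -= lr * b1.grad
        w1.grad.zero_()
        b1.grad.zero_()
    if step % 1000 == 0:
        print(f"{step + 1}/10000 : Cost={cost.item()}")

print("=" * 65)
print("Target: w1=(2,2,2) , b1=1")
print(f"Learned: w1={w1.tolist()} , b1={b1.item()}")
```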
1/10000 : Cost=14.351768493652344
1001/10000 : Cost=0.17560093104839325
2001/10000 : Cost=0.002341908635571599
3001/10000 : Cost=3.5410659620538354e-05
4001/10000 : Cost=6.201415203577199e-07
5001/10000 : Cost=1.2345105382394195e-08
6001/10000 : Cost=1.9163828302026786e-09
7001/10000 : Cost=1.9163828302026786e-09
8001/10000 : Cost=1.9163828302026786e-09
9001/10000 : Cost=1.9163828302026786e-09
=================================================================
Target: w1=(2,2,2) , b1=1
Learned: w1=[2.0, 1.9999690055847168, 1.999973177909851] , b1=0.9999855756759644
Once the model has trained reasonably well, let's test it.
In [None]:
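The original test input was not preserved; `[1.0, 2.0, 4.0]` is a hypothetical input whose true value under y = 2·x1 + 2·x2 + 2·x3 + 1 is 15.0, matching the expected answer below. The trained parameters are stood in for here so the sketch is self-contained:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-ins for the parameters learned in the previous cell
w1 = torch.tensor([2.0, 2.0, 2.0], device=device)
b1 = torch.tensor([1.0], device=device)

# Hypothetical test input: 2*1 + 2*2 + 2*4 + 1 = 15
x_test = torch.tensor([[1.0, 2.0, 4.0]], device=device)
print("Expected answer:", [15.0])
print("Predicted value from the regression:", x_test @ w1 + b1)
```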
Expected answer: [15.0]
Predicted value from the regression: tensor([[14.9998]], device='cuda:0')