
CSC321 Study Progress (Univ. of Toronto , Roger Grosse)

Course Study with CSC321

[2021.01.09]

Topic : Quick neural-network review, up to the gradient vanishing & exploding problem

Notes :

  • [https://drive.google.com/file/d/15-n3O7gaMyRTqq5-WC9hfms1Xtea2Jp/view?usp=sharing](https://drive.google.com/file/d/15-n3O7gaMyRTqq5-WC9hfms1Xtea2Jp/view?usp=sharing)

Links :

Next : 2021.01.16 9:30 PM KST


[2021.01.16]

Topic : Lecture 5: Multilayer Perceptrons (박희원) , Lecture 6: Backpropagation (이동재)

Notes :

Links :

Multilayer Perceptrons

Backpropagation

Next : 2021.01.23 9:00 PM KST


[2021.01.23]

Topic : Lecture 7: Optimization 1st part (이찬주)

Notes :

Links :

Next : 2021.01.30 9:00 PM KST


[2021.01.30]

Topic : Lecture 7: Optimization (2nd part , 민채정)

Notes : to be uploaded

Links :

Next : 2021.02.06 9:00 PM KST


[2021.02.06]

Topic : Lecture 9: Generalization (이동재)

Notes :

Links :

Next : 2021.02.20 9:00 PM KST


[2021.02.20]

Topic : Lecture 11: Convolutional Networks (박희원)

Notes :

Links :

Next : 2021.02.27 9:00 PM KST

1. Brief introduction to the two datasets (MNIST , Caltech101)
2. Brief introduction to the ImageNet competition
3. How to compute ConvNet layer sizes (and the notation used)
4. LeNet , AlexNet , GoogLeNet (no need to go deep on LeNet and AlexNet; for GoogLeNet, clarify what "fully convolutional" means)
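The size rule in item 3 can be sketched as a small helper. This is a minimal illustration; the function name and the AlexNet example numbers are my own additions, not from the lectures:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# AlexNet's first layer: 227x227 input, 11x11 kernel, stride 4, no padding -> 55x55
print(conv_out(227, kernel=11, stride=4))
# "Same" padding keeps the size: 32x32 input, 3x3 kernel, pad 1 -> 32x32
print(conv_out(32, kernel=3, pad=1))
```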

[2021.02.27]

Topic : Lecture 12: Image Classification (이찬주) , CNN misc part (이동재)

Notes :

Links :

Next : 2021.03.06 9:00 PM KST

GoogLeNet (이동재)

  • 1X1 Convolution
  • Inception module (different kernel size , naive & advanced)
  • global average pooling
  • auxiliary classifier

VGG Net , ResNet (민채정)

  • important features of VGG Net (what differs between VGG-16 & VGG-19?)
  • concept of skip connections from ResNet
  • R-CNN , Fast R-CNN , Faster R-CNN are not covered in this part

[2021.03.06]

Topic : Lecture 12.3: GoogLeNet (이동재)

Notes :

Links :

Next : 2021.03.13 9:00 PM KST

Will Cover

VGG Net : using only 3x3 kernels? (what is factorizing a convolution filter?)

VGG Net : how to deal with the gradient vanishing/exploding problem (pre-trained kernel initialization)

VGG Net : techniques for training/testing datasets (scale jittering) <- a difficult concept, so cover it only briefly

ResNet : what is residual learning? (shortcut connection? identity mapping?)

ResNet : what features the ResNet team took from VGG? (common vs. diff)

ResNet : bottleneck layer (only for models with >50 layers)

ResNet : further experiments on the CIFAR dataset (going up to 1000+ layers)
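The point of the bottleneck layer is parameter savings: squeezing channels with a 1x1 conv before the 3x3 and expanding afterwards is far cheaper than stacking plain 3x3 convs at full width. A quick arithmetic sketch, using the 256-channel / 64-channel bottleneck figures commonly quoted for deep ResNets (helper name is mine; biases ignored):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution, biases ignored."""
    return c_in * c_out * k * k

# Plain block: two 3x3 convs at 256 channels.
plain = 2 * conv_params(256, 256, 3)
# Bottleneck: 1x1 down to 64, 3x3 at 64, 1x1 back up to 256.
bottleneck = (conv_params(256, 64, 1)
              + conv_params(64, 64, 3)
              + conv_params(64, 256, 1))
print(plain, bottleneck)  # the bottleneck uses a small fraction of the parameters
```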


[2021.03.13]

Topic : Lecture 12.4: VGG , ResNet (민채정,이동재)

Notes :

Links :

Next : 2021.03.20 9:00 PM KST

Will Cover

  • Object Classification VS. Object Detection ?
    • Bounding Box , mAP , IOU
  • What is R-CNN ?
    • SIFT , HOG
    • Selective search
  • How Berkeley team applied R-CNN ?
    • PASCAL VOC
    • fine-tuning
  • Limits of R-CNN & How SPPNet came up
    • dealing with fixed input size
    • how many crops/warps
  • Presentation order change : 이동재 - 민채정 - 박희원 - 이찬주 - 김진원
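Bounding-box overlap (IoU) from the list above fits in a few lines; a minimal sketch with boxes as `(x1, y1, x2, y2)` corners (function name mine):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # overlap 2, union 6 -> 1/3
```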

[2021.03.20]

Topic : Lecture 13: Object Detection, R-CNN & SPPNET (Hayden)

Notes :

Links :

Next : 2021.03.27 9:00 PM KST

Will Cover

  • R-CNN : Background
    • Computer Vision , selective search , SIFT , HOG , DPM
  • R-CNN : Architecture
    • 3-modules
  • R-CNN : How to test ? (detect , forward)
    • NMS
  • R-CNN : How to evaluate?
    • mAP , different metrics
  • R-CNN : How to train?
    • different IOU threshold
    • Bbox regressor understanding
  • R-CNN : Limits

[2021.03.27]

Topic : Lecture 13.2: Object Detection : R-CNN details (Chanju, James)

Notes :

Links :

Next : 2021.04.03 9:00 PM KST

Will Cover

  • SPP Net :
    • What’s improved from R-CNN ? (idea, keywords)
    • SPP Net flow (rough) (compared with R-CNN)
    • SPP layer details (bin, BoW, how to calculate output)
    • Practical training (Single-size, Multi-size training)
    • Performance in fields (Classification, Detection)
    • SPP Net Limits
  • Fast R-CNN :
    • what’s improved from SPP Net ? (idea, keywords)
    • Fast R-CNN flow (rough) (compared with SPP Net)
    • training Fast R-CNN (multi-task loss function, Hierarchical Sampling)
    • test methods (truncated SVD)
    • Fast R-CNN Limits
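The key property of the SPP layer in the list above is that its output length depends only on the channel count and the bin pyramid, never on the input image size. A toy calculation of that bookkeeping (the {1x1, 2x2, 4x4} pyramid here is illustrative; the paper defines its own level configurations):

```python
def spp_output_length(channels, levels=(1, 2, 4)):
    """Length of the fixed vector an SPP layer emits: one pooled value per
    channel for every bin, with an n x n grid of bins at each pyramid level."""
    return channels * sum(n * n for n in levels)

# 256 feature maps pooled over {1x1, 2x2, 4x4} grids -> 256 * 21 values,
# regardless of the input image size.
print(spp_output_length(256))
```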

Future models of object detection : SPPNet , Fast R-CNN , Faster R-CNN , YOLO v1


[2021.04.03]

Topic : Lecture 13.3: Object Detection : SPPNET overview & details (Chanju, James)

Notes :

Links :

SPP Net

Fast R-CNN

Next : 2021.04.10 9:00 PM KST

Will Cover

  • Object Detection : Fast R-CNN overview, details (Jaden , James)
  • Fast R-CNN fine-tuning practice


[2021.04.10]

Topic : Lecture 13.4: Object Detection : Fast R-CNN details, ResNet fine-tuning practice (James , Jaden)

Notes :

Links :

Fast R-CNN

Res Net fine-tuning code

Next : 2021.04.24 9:00 PM KST

  • Object Detection : Faster R-CNN overview,details (James)

Will Cover

  • frequently used subpackages and objects
  • three ways to write a model
  • training & testing workflow and configuration

[2021.04.24]

Topic : Special Course 1.DNN_practice & keras overview from Colab (James)

Notes :

Links :

Next : 2021.05.01 9:00 PM KST

  • Inception module implementation from keras in Colab (Chloe)
  • how to build an Inception module in Keras using the Functional API method?
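The heart of an Inception block is running parallel branches (different kernel sizes) on the same input and concatenating their outputs along the channel axis. To keep this sketch dependency-light it shows only that channel bookkeeping in NumPy, with placeholder branch outputs and made-up channel counts; the real Keras version would replace each placeholder with a `Conv2D` branch:

```python
import numpy as np

def inception_concat(x, branch_channels=(64, 128, 32, 32)):
    """Stand-in for an Inception block: every branch preserves the spatial
    size and changes only the channel count; results are concatenated
    channel-wise, exactly as the Functional API's Concatenate layer does."""
    h, w, _ = x.shape
    branches = [np.zeros((h, w, c)) for c in branch_channels]  # placeholder branch outputs
    return np.concatenate(branches, axis=-1)

out = inception_concat(np.zeros((28, 28, 192)))
print(out.shape)  # spatial size preserved; channels = 64 + 128 + 32 + 32
```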

[2021.05.01]

Topic : Special Course 2. Keras overview & implementation of Inception block on Colab (James)

Notes :

Links :

Next : 2021.05.08 9:00 PM KST

  • Object Detection: Faster R-CNN (Chloe, James)
  1. What’s improved? (or suggested?)
    • concepts only, keyword by keyword; the details come in their own sections below
    • RPN , region proposal networks (a kind of FCN?)
    • Pyramids of images VS. Pyramids of filters VS. Pyramids of Anchors
  2. Model architecture & Forward-pass (brief check)
    • likewise, only briefly
    • how a single image passes through the model
  3. All about RPN
    • in detail
    • Inputs & Outputs
    • Anchor Box
      • what is an Anchor box & what does translation-invariant mean
      • how anchor boxes feed into the regression
    • Loss
      • what loss function is defined on the RPN?
    • Train
      • how to train the RPN?
  4. How do the RPN and the Detector share feature maps?
    • alternating training?
  5. Implementation details
    • only as far as feasible
    • used scales, anchor types
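The "scales and anchor types" item boils down to a small generator: at every feature-map position the RPN looks at 3 aspect ratios x 3 scales = 9 reference boxes. A sketch of that enumeration (the base stride of 16 and the scale/ratio values follow the commonly cited setup; the function name is mine):

```python
def make_anchors(base=16, ratios=(0.5, 1, 2), scales=(8, 16, 32)):
    """Generate the reference anchors used at each feature-map position:
    one (w, h) pair per ratio/scale combination, in input-image pixels."""
    anchors = []
    for r in ratios:
        for s in scales:
            area = (base * s) ** 2          # scale fixes the anchor's area
            w = (area / r) ** 0.5           # ratio r = h / w reshapes it
            h = w * r
            anchors.append((round(w), round(h)))
    return anchors

print(len(make_anchors()))  # 9 anchor shapes per position
```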

[2021.05.08]

Topic : Lecture 13.5: Object Detection Faster R-CNN part1 (Chloe, James)

Notes :

Links :

Next : 2021.05.15 9:00 PM KST

  • Object Detection: Faster R-CNN part2 (Hayden, James)
  • All about RPN
  • Train (how to train RPN?)
  • How RPN and Detector share feature maps?
    • 4-step alternating training
  • Implementation details
    • only as far as feasible
    • used scales, anchor types

+ multibox approach (pyramids of filters)

+ understanding regression loss of RPN
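The RPN's box-regression term is a smooth L1 (Huber-style) loss: quadratic near zero so small errors get gentle gradients, linear beyond so outlier boxes do not dominate. A minimal sketch:

```python
def smooth_l1(x):
    """Smooth L1 loss for a single regression residual:
    0.5 * x^2 when |x| < 1, |x| - 0.5 otherwise."""
    x = abs(x)
    return 0.5 * x * x if x < 1 else x - 0.5

print(smooth_l1(0.5), smooth_l1(2.0))  # quadratic region vs. linear region
```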


Study RULE update

  • Mon/Tue : a member who needs to swap presentation order should ask another member before Tuesday evening.
  • Wed : that week's presenter shares their progress, and any parts needing extra preparation, with James.

[2021.05.15]

Topic : Lecture 13.5: Object Detection Faster R-CNN part2 (Hayden, James)

Notes :

Links :

Next : 2021.05.22 9:00 PM KST

  • Object Detection: YOLO v1 (Chanju, James)

[2021.05.23]

Topic : Programmers ML Dev-matching participation (all members)

Notes :

Links :

  • None

Next : 2021.05.29 9:00 PM KST

  • Object Detection: YOLO v1 (Chloe, Chanju, James)
  • What’s improved? (or suggested?)
    • object detection as a single regression problem
    • three benefits over traditional models
  • Architecture & Computation flow
    • network design
    • how a raw image passes through the model (checking the in/out of every layer)
  • Train & Inference
    • understanding each term of the sum-squared error
    • using the λcoord , λnoobj parameters
  • Limits & Comparison to other previous models
    • limits : spatial constraint, small-object problem, coarse features, loss balance
    • comparison : DPM , Deep MultiBox , OverFeat (MultiGrasp excluded)
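The role of λcoord and λnoobj in the sum-squared error can be shown with a toy weighting function. This is not YOLO's full loss (no grid cells, no per-box indicator terms); it only illustrates the balance the λ values buy, using the paper's λcoord = 5 and λnoobj = 0.5:

```python
def yolo_weighted_sse(coord_err, obj_conf_err, noobj_conf_err, class_err,
                      lam_coord=5.0, lam_noobj=0.5):
    """Toy version of YOLO v1's loss balance: coordinate errors are
    up-weighted (lam_coord) and confidence errors in cells with no object
    are down-weighted (lam_noobj), so the many empty cells don't swamp
    the localization signal during training."""
    return (lam_coord * coord_err
            + obj_conf_err
            + lam_noobj * noobj_conf_err
            + class_err)

print(yolo_weighted_sse(1.0, 1.0, 1.0, 1.0))  # 5 + 1 + 0.5 + 1
```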

[2021.05.29]

Topic : Object Detection: YOLO v1 (Chloe, Chanju, James)

Notes :

Links :

Next : 2021.06.05 9:00 PM KST

  • Recurrent Neural Network (Hayden, James)
  • https://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/readings/L14%20Recurrent%20Neural%20Nets.pdf

  • 1.Introduction
    • Tasks predicting ‘sequences’
    • Neural Language Model to RNN
  • 2.Recurrent Neural Nets
    • unrolling network to understand like FFNN
    • 3 examples of how parameter settings result in different RNN behaviors
  • 3.Backprop Through Time
    • View as MLP backprop with unrolled computation-graph
    • Comparing with MLP backprop
  • 4.Sequence Modeling (what tasks can RNN be applied)
    • Language Modeling
    • Neural Machine Translation
    • Learning to Execute Programs
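Unrolling the network, as in section 2 above, makes an RNN look like a feed-forward net with one layer per time step and shared weights. A NumPy sketch of that forward pass (the dimensions and weight initialization are arbitrary, for illustration only):

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Unrolled vanilla-RNN forward pass: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
    The same W_xh / W_hh are reused at every step, which is what ties the
    unrolled 'layers' together."""
    h = np.zeros(W_hh.shape[0])
    hidden_states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hidden_states.append(h)
    return hidden_states

rng = np.random.default_rng(0)
hs = rnn_forward(rng.normal(size=(5, 3)),   # 5 time steps of 3-dim input
                 rng.normal(size=(4, 3)),   # input-to-hidden weights
                 rng.normal(size=(4, 4)),   # hidden-to-hidden weights (shared)
                 np.zeros(4))
print(len(hs), hs[0].shape)  # one 4-dim hidden state per time step
```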


[2021.06.05]

Topic : Recurrent Neural Networks (Hayden, James)

Notes :

Links :

Next : 2021.06.19 9:00 PM KST

  • Long Short Term Memory Networks (Jaden, James)
  • 1.Introduction
    • Long-Term Dependency (gradient vanishing/exploding)
    • introduction to 3 gates
  • 2.LSTM forward computation flow
    • what is calculated at each gate
    • summarized behavior table
  • 3.LSTM BPTT flow
    • what to update?
    • how the cell state is protected from gradient vanishing/exploding ?
  • 4.Quick LSTM example (Tensorflow)
    • Tensorflow Time-Series Tutorial
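The "what is calculated at each gate" item can be condensed into one NumPy step. This sketch stacks the four gate weight blocks into a single matrix over the concatenated [x, h] vector (sizes and random weights are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W stacks the four gate weight blocks
    (forget, input, candidate, output) over the concatenated [x, h]."""
    z = W @ np.concatenate([x, h]) + b
    n = h.size
    f = sigmoid(z[:n])          # forget gate: what to keep of the old cell state
    i = sigmoid(z[n:2 * n])     # input gate: how much new content to write
    g = np.tanh(z[2 * n:3 * n]) # candidate cell update
    o = sigmoid(z[3 * n:])      # output gate: what to expose as the hidden state
    c_new = f * c + i * g       # additive cell-state update (the gradient highway)
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
x, h, c = rng.normal(size=3), np.zeros(4), np.zeros(4)
h, c = lstm_step(x, h, c, rng.normal(size=(16, 7)), np.zeros(16))
print(h.shape, c.shape)
```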


[2021.06.19]

Topic : Long Short Term Memory (Jaden, James)

Notes :

Links :

Next : 2021.06.26 9:00 PM KST

  • GRU part 1 (James)
  • LSTM review & QnA
  • 1.Introduction
    • background - complex structure of LSTM
    • introduction to 2 gates - Reset gate, Update gate
  • 2.GRU forward computation flow
    • what is calculated at each gate (how is it diff from LSTM?)
    • understanding the flow in plain language
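The GRU computation in section 2 above, side by side with the LSTM: two gates instead of three, and no separate cell state. A NumPy sketch with biases omitted and arbitrary illustrative sizes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, W_z, W_r, W_h):
    """One GRU step over the concatenated [x, h] vector (biases omitted)."""
    z = sigmoid(W_z @ np.concatenate([x, h]))           # update gate
    r = sigmoid(W_r @ np.concatenate([x, h]))           # reset gate
    h_cand = np.tanh(W_h @ np.concatenate([x, r * h]))  # candidate, with reset applied
    return (1 - z) * h + z * h_cand                     # interpolate old and new state

rng = np.random.default_rng(0)
h = gru_step(rng.normal(size=3), np.zeros(4),
             rng.normal(size=(4, 7)), rng.normal(size=(4, 7)), rng.normal(size=(4, 7)))
print(h.shape)
```

Unlike the LSTM, the update gate z alone decides how much of the old state survives, which is why the GRU needs fewer parameters for a similar gating effect.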


[2021.06.26]

Topic : Gated Recurrent Units (James)

Notes :

Links :

Next : 2021.07.03 9:00 PM KST

  • GRU part 2 (Chloe, James)


This post is licensed under CC BY 4.0 by the author.