## Before 2010s
- Data hungry
    - Needed lots of training data
- Processing power
- Niche
    - Few people knew about or cared about CNNs
## After
- ImageNet
    - ~15M images in total; the ILSVRC benchmark uses 1,000 classes
- GPUs
    - General-purpose GPU computing (GPGPU)
    - CUDA
- NIPS/ECCV 2012
    - Double-digit percentage-point reduction in ImageNet error (AlexNet)
# Fully Connected
[[MLP|Dense]]
- Move from [[Convolutional Layer|convolutional]] operations towards a vector output
- Stochastic dropout
    - Sub-sample the channels and only connect some to the [[MLP|dense]] layers (see the sketch below)
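
A minimal sketch of a dense head on top of conv feature maps, assuming PyTorch; the layer sizes (256×6×6 in, 4,096 hidden, 1,000 classes) are illustrative, not from the note:

```python
import torch
import torch.nn as nn

# Dense (fully connected) head: conv feature maps -> class scores
head = nn.Sequential(
    nn.Flatten(),                  # (N, 256, 6, 6) -> (N, 9216)
    nn.Dropout(p=0.5),             # stochastic dropout on the flattened features
    nn.Linear(256 * 6 * 6, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 1000),         # vector output: one score per class
)

features = torch.randn(8, 256, 6, 6)   # a batch of conv outputs
logits = head(features)                # shape (8, 1000)
```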
# As a Descriptor
- Most powerful as a deeply learned feature extractor
- The [[MLP|dense]] classifier at the end isn't fantastic
    - Instead, take the activations of the penultimate layer and classify them with an SVM (see the sketch below)
![[cnn-descriptor.png]]
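
A minimal sketch of the descriptor-plus-SVM idea, assuming torchvision (≥0.13 weights API) and scikit-learn; the backbone choice and the placeholder data are illustrative:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()     # drop the dense classifier, keep 512-d features
backbone.eval()

images = torch.randn(16, 3, 224, 224)        # placeholder image batch
labels = torch.randint(0, 2, (16,)).numpy()  # placeholder labels

with torch.no_grad():
    descriptors = backbone(images).numpy()   # penultimate-layer features

svm = SVC(kernel="linear")                   # SVM replaces the dense classifier
svm.fit(descriptors, labels)
```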
# Finetuning
- Observations
    - Most CNNs have similar weights in [[Convolutional Layer|conv1]]
    - Most useful CNNs have several [[Convolutional Layer|conv layers]]
        - Many weights
        - Need lots of training data
    - Training data is hard to get
        - Labelling is expensive
- Reuse the weights from another network (see the sketch below)
    - Freeze the weights in the first 3-5 [[Convolutional Layer|conv layers]]
        - Equivalent to setting their learning rate to 0
    - Randomly initialise the remaining layers, or continue from the existing weights
![[fine-tuning-freezing.png]]
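
A minimal finetuning sketch, assuming PyTorch/torchvision; AlexNet and the layer indices are illustrative stand-ins for "freeze the first few conv layers":

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

# Freeze the first three conv layers (their ReLU/pool modules have no weights)
for layer in list(model.features)[:8]:
    for p in layer.parameters():
        p.requires_grad = False      # equivalent to learning rate = 0

# Randomly re-initialise the final dense layer for the new task
model.classifier[6] = nn.Linear(4096, 10)   # 10 = hypothetical class count

# The optimiser only updates the unfrozen parameters
optimizer = optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```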
# Training
- Validation & training [[Deep Learning#Loss Function|loss]]
    - Early in training
        - Under-fitting
        - Training set not yet representative
    - Later in training
        - Overfitting
- Validation [[Deep Learning#Loss Function|loss]] can help adjust the learning rate
    - Or indicate when to stop training (see the sketch below)
![[under-over-fitting.png]]
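
A minimal sketch of validation-loss-driven learning-rate scheduling and early stopping, assuming PyTorch; the model and `eval_epoch` are placeholders standing in for a real training loop:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)                 # stand-in for a real CNN
optimizer = optim.SGD(model.parameters(), lr=1e-2)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3)

def eval_epoch():
    # placeholder: would return the mean loss over the validation set
    return torch.rand(1).item()

best_val, bad_epochs, patience = float("inf"), 0, 10
for epoch in range(100):
    # ... one epoch of training would go here ...
    val_loss = eval_epoch()
    scheduler.step(val_loss)             # lower LR when val loss plateaus
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:       # rising val loss suggests overfitting
            break                        # early stopping
```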