Before the 2010s

  • Data hungry
    • Need lots of labelled training data
  • Processing power
    • Training was impractically slow on the CPUs of the time
  • Niche
    • Few people knew about or worked on CNNs

After

  • ImageNet
    • ~14m images; the ILSVRC benchmark uses a 1000-class subset
  • GPUs
    • General-purpose GPU computing (GPGPU)
    • CUDA
  • NIPS/ECCV 2012
    • AlexNet won ILSVRC with a double-digit percentage-point gain (top-5 error ~15% vs ~26% for the runner-up)

Fully Connected

MLP

  • Move from Convolutional Layer operations to a flattened vector fed into MLP layers (see the sketch below)
  • Stochastic drop-out
    • Randomly drop a subset of units/channels during training, so only some connect to the MLP layers; this discourages co-adaptation
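
A minimal sketch of this head, assuming PyTorch; the architecture and layer sizes are illustrative, not taken from any particular network:

```python
import torch
import torch.nn as nn

# Convolutional features are flattened into a vector and passed through
# an MLP head with dropout. All sizes here are illustrative.
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),      # feature maps -> vector of length 32*8*8
            nn.Dropout(p=0.5), # randomly zero activations during training
            nn.Linear(32 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))  # dropout active in train mode
```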

As a Descriptor

  • Most powerful as a deeply learned feature extractor
  • The MLP classifier at the end isn't fantastic
    • Instead, take the activations at the penultimate layer as a descriptor and train an SVM on them (sketched below)

!cnn-descriptor.png
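
A minimal sketch of the descriptor workflow, assuming PyTorch/torchvision and scikit-learn; `images` (shape N, 3, 224, 224) and `labels` are placeholders for an already-preprocessed dataset:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

# Use a pretrained CNN as a fixed descriptor: strip the final classifier,
# extract penultimate-layer features, and fit an SVM on them.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # drop the classifier; output is the 512-d descriptor
backbone.eval()

with torch.no_grad():
    feats = backbone(images).numpy()  # (N, 512); `images` is a placeholder tensor

svm = LinearSVC()
svm.fit(feats, labels)  # `labels` is a placeholder array of class ids
```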

Finetuning

  • Observation: early Convolutional Layer weights learn generic features (edges, textures), so they transfer between tasks
  • Reuse weights from a network trained on another, larger dataset
  • Freeze the weights of the first 3-5 Convolutional Layers
    • Equivalent to a learning rate of 0 for those layers
  • For the remaining layers, either
    • Randomly initialise them, or
    • Continue training from the existing weights
  • See the sketch after the figure below

!fine-tuning-freezing.png
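
A minimal fine-tuning sketch, assuming PyTorch/torchvision; ResNet-18's first two residual stages stand in for the "first 3-5 Convolutional Layers", and the 10-class output is illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse pretrained weights, freeze the early convolutional stages,
# and retrain the rest on the new task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freezing is equivalent to a learning rate of 0 for these parameters
for module in (model.conv1, model.bn1, model.layer1, model.layer2):
    for p in module.parameters():
        p.requires_grad = False

# Randomly initialised final layer for the new task (10 classes, illustrative)
model.fc = nn.Linear(model.fc.in_features, 10)

# Hand only the un-frozen parameters to the optimiser
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
```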

Training

!under-over-fitting.png