Before 2010s
- Data hungry
    - Need lots of training data
- Processing power
- Niche
    - No-one cared/knew about CNNs
After
- Datasets#ImageNet
    - ~14m images; 1000 classes in the ILSVRC subset
- GPUs
    - General-purpose GPU computing
    - CUDA
- NIPS/ECCV 2012
    - Double-digit percentage-point gain in Datasets#ImageNet accuracy
Fully Connected
- Move from Convolutional Layer operations towards a vector output
- Stochastic dropout
- Sub-sample the channels and only connect some of them to the MLP layers (see the sketch below)
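A minimal sketch of this transition, assuming PyTorch; the layer sizes and the keep-every-other-channel sub-sampling rule are hypothetical, not from these notes. It shows convolutional feature maps being flattened into a vector and classified by MLP layers, with stochastic dropout in the fully connected head.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),        # fixed-size feature maps
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                        # feature maps -> vector
            nn.Linear(16 * 4 * 4, 128), nn.ReLU(),
            nn.Dropout(p=0.5),                   # stochastic dropout
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x[:, ::2]                            # crude channel sub-sampling: keep every other channel
        return self.classifier(x)

logits = SmallCNN()(torch.randn(2, 3, 32, 32))   # -> shape (2, 10)
```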
As a Descriptor
- Most powerful as a deeply learned feature extractor
- The MLP classifier at the end isn't fantastic
    - Instead, take features at the penultimate layer and classify them with an SVM (see the sketch below)
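A minimal sketch of the descriptor idea, assuming PyTorch, torchvision and scikit-learn; the ResNet-18 backbone and the random placeholder data are purely illustrative. The network's final classifier is removed so the penultimate-layer features feed an SVM instead of the built-in MLP head.

```python
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()    # drop the MLP head, keep the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def describe(images: torch.Tensor):
    """images: (N, 3, 224, 224) normalised batch -> (N, 512) numpy descriptors."""
    return backbone(images).numpy()

train_images = torch.randn(8, 3, 224, 224)       # placeholder for real data
train_labels = [0, 1, 0, 1, 0, 1, 0, 1]

svm = LinearSVC().fit(describe(train_images), train_labels)
predictions = svm.predict(describe(train_images))
```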
Finetuning
- Observations
    - Most CNNs learn similar weights in their early Convolutional Layers
    - Most useful CNNs have several Convolutional Layers
        - Many weights
        - Lots of training data
    - Training data is hard to get
        - Labelling
- Reuse weights from another network
- Freeze the weights in the first 3-5 Convolutional Layers
    - Learning rate = 0
- Randomly initialise the remaining layers, or
- Continue with the existing weights (see the sketch below)
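A minimal finetuning sketch, assuming PyTorch/torchvision; the ResNet-18 backbone, the choice of frozen stages and the 20-class head are hypothetical. Pretrained weights are reused, the early convolutional stages are frozen so their effective learning rate is 0, and only the new head is randomly initialised while the later layers continue from their existing weights.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the stem and the first residual stages (roughly the "first 3-5
# convolutional layers"): their effective learning rate becomes 0.
for module in (model.conv1, model.bn1, model.layer1, model.layer2):
    for p in module.parameters():
        p.requires_grad = False

# Randomly initialised head for the new task; the unfrozen layers continue
# training from their existing (pretrained) weights.
model.fc = nn.Linear(model.fc.in_features, 20)   # 20 is a hypothetical class count

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```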
Training
- Monitor the validation & training Deep Learning#Loss Function
- Early in training
    - Under-fitting
    - Training not yet representative
- Later in training
    - Overfitting
- The validation Deep Learning#Loss Function can help adjust the learning rate
    - Or indicate when to stop training (see the sketch below)
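A minimal sketch of monitoring the validation loss, assuming PyTorch; `train_epoch` and `validate` are hypothetical helpers that return mean losses. The validation loss drives both the learning-rate schedule and an early-stopping decision.

```python
import torch

def fit(model, optimizer, train_epoch, validate, max_epochs=100, patience=5):
    """train_epoch/validate are hypothetical helpers returning mean losses."""
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)
    best_val, epochs_since_best = float("inf"), 0
    for epoch in range(max_epochs):
        train_loss = train_epoch(model, optimizer)  # early on: under-fitting, both losses high
        val_loss = validate(model)
        scheduler.step(val_loss)                    # lower the learning rate when validation loss plateaus
        if val_loss < best_val:
            best_val, epochs_since_best = val_loss, 0
        else:
            epochs_since_best += 1                  # validation loss no longer improving: likely overfitting
            if epochs_since_best >= patience:
                break                               # early stopping
    return best_val
```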