---
tags:
- ai
---
## Before 2010s

- Data hungry
  - Need lots of training data
- Limited processing power
- Niche
  - No-one cared about/knew of CNNs
## After

- [ImageNet](../CV/Datasets.md#ImageNet)
  - 16m images, 1000 classes
- GPUs
  - General-purpose GPU computing (GPGPU)
  - CUDA
- NIPS/ECCV 2012
  - Double-digit % gain in [ImageNet](../CV/Datasets.md#ImageNet) accuracy
# Fully Connected

[Dense](../MLP/MLP.md)

- Move from [convolutional](Convolutional%20Layer.md) operations towards a vector output
- Stochastic drop-out
  - Sub-sample channels and only connect some to [dense](../MLP/MLP.md) layers (sketch below)
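A minimal sketch of the conv-to-dense handover, assuming PyTorch; the layer sizes and class count are illustrative. The feature maps are flattened into a single vector, with drop-out applied before the dense classifier.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 32x32 -> 16x16 feature maps
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),               # conv maps -> one long vector
            nn.Dropout(p=0.5),          # stochastic drop-out before dense
            nn.Linear(16 * 16 * 16, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(1, 3, 32, 32)    # dummy image batch
print(TinyCNN()(x).shape)        # torch.Size([1, 10])
```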
# As a Descriptor

- Most powerful as a deeply learned feature extractor
- The [dense](../MLP/MLP.md) classifier at the end isn't fantastic
  - Use an SVM to classify the penultimate layer's features instead (sketch below)

![cnn-descriptor](../../../img/cnn-descriptor.png)
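A minimal sketch of the descriptor workflow, assuming PyTorch/torchvision and scikit-learn; the ResNet-18 backbone and random data are stand-ins for a real setup. The dense classifier is dropped so the network emits feature vectors, and a linear SVM does the classification.

```python
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC
from torchvision import models

# Pretrained CNN with its dense classifier replaced by the identity,
# so the forward pass returns the 512-d penultimate features.
cnn = models.resnet18(weights="IMAGENET1K_V1")
cnn.fc = nn.Identity()
cnn.eval()

# Hypothetical data: random tensors stand in for a real image dataset.
images = torch.randn(100, 3, 224, 224)
labels = torch.randint(0, 2, (100,)).numpy()

with torch.no_grad():
    descriptors = cnn(images).numpy()  # shape (100, 512)

# Linear SVM trained on the deeply learned descriptors.
svm = LinearSVC().fit(descriptors, labels)
print(svm.score(descriptors, labels))
```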
# Finetuning

- Observations
  - Most CNNs have similar weights in [conv1](Convolutional%20Layer.md)
  - Most useful CNNs have several [conv layers](Convolutional%20Layer.md)
    - Many weights
    - Lots of training data
- Training data is hard to get
  - Labelling
- Reuse weights from another network (sketch below)
  - Freeze the weights in the first 3-5 [conv layers](Convolutional%20Layer.md)
    - Learning rate = 0
  - Randomly initialise the remaining layers
  - Continue training from the existing weights

![fine-tuning-freezing](../../../img/fine-tuning-freezing.png)
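A minimal sketch of the freezing recipe, assuming PyTorch/torchvision; the ResNet-18 backbone, choice of frozen blocks, and class count are illustrative. Frozen layers get no gradient updates, which is equivalent to giving them a learning rate of 0, while the reinitialised head trains normally.

```python
import torch.nn as nn
from torch.optim import SGD
from torchvision import models

net = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the early conv blocks: no gradients, so their effective
# learning rate is 0 and the pretrained weights stay fixed.
for block in [net.conv1, net.bn1, net.layer1, net.layer2]:
    for p in block.parameters():
        p.requires_grad = False

# Randomly initialise the final layer for the new task
# (10 classes here is illustrative).
net.fc = nn.Linear(net.fc.in_features, 10)

# Continue training, optimising only the unfrozen parameters.
optimiser = SGD((p for p in net.parameters() if p.requires_grad), lr=1e-3)
```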
# Training

- Validation & training [loss](../Deep%20Learning.md#Loss%20Function)
  - Early
    - Under-fitting
    - Training data not representative
  - Later
    - Over-fitting
  - Validation [loss](../Deep%20Learning.md#Loss%20Function) can help adjust the learning rate
    - Or indicate when to stop training (sketch below)

![under-over-fitting](../../../img/under-over-fitting.png)
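A minimal sketch of both uses of the validation loss, assuming PyTorch; the toy linear model and random data are stand-ins for a real network and dataset. `ReduceLROnPlateau` lowers the learning rate when the validation loss stalls, and a patience counter stops training once it keeps worsening.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Toy model and random data stand in for a real setup.
model = nn.Linear(8, 1)
loss_fn = nn.MSELoss()
x_train, y_train = torch.randn(64, 8), torch.randn(64, 1)
x_val, y_val = torch.randn(16, 8), torch.randn(16, 1)

optimiser = torch.optim.SGD(model.parameters(), lr=1e-2)
scheduler = ReduceLROnPlateau(optimiser, factor=0.1, patience=3)

best, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(100):
    optimiser.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimiser.step()

    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()
    scheduler.step(val_loss)        # lower the LR when val loss plateaus
    if val_loss < best:
        best, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop once val loss keeps rising
            break
```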