# LeNet

- 1990s

![[lenet-1989.png]]

- 1989

![[lenet-1998.png]]

- 1998
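
A minimal PyTorch sketch of the 1998 LeNet-5 layout (an illustration, not the original implementation): two convolution/pooling stages feeding three fully connected layers, with the layer sizes from the published LeNet-5, which used average pooling and tanh-style activations.

```python
import torch
import torch.nn as nn

# Sketch of LeNet-5 (1998): two conv/pool stages, then three
# fully connected layers, for 32x32 greyscale digit images.
class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

print(LeNet5()(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```
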
# AlexNet

2012

- [[Activation Functions#ReLu|ReLU]]

- Local Response Normalisation (see the sketch below)
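
As a rough sketch of how the two points above fit together, here is AlexNet's first convolutional stage in PyTorch (an illustration, not the paper's code): ReLU as the non-saturating activation, Local Response Normalisation with the hyperparameters reported in the 2012 paper, then overlapping max pooling.

```python
import torch
import torch.nn as nn

# First stage of AlexNet (2012): large-stride conv, ReLU,
# Local Response Normalisation (LRN), overlapping max pooling.
stage1 = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),
    nn.ReLU(),                                  # non-saturating activation
    nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
    nn.MaxPool2d(kernel_size=3, stride=2),      # overlapping pooling
)

print(stage1(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 96, 27, 27])
```
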
![[alexnet.png]]

# VGG

2015

- 16 weight layers (VGG-16) vs AlexNet's 8

- Addresses the vanishing gradient problem

- Xavier initialisation

- Small, uniform 3×3 kernels throughout

- Gradual filter increase with depth (64 → 128 → 256 → 512), as sketched below
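
A sketch of the VGG design pattern in PyTorch: every convolution uses the same small 3×3 kernel, the filter count doubles from stage to stage, and Xavier initialisation is applied to every conv layer. `vgg_block` is a hypothetical helper name, and the stage widths follow the VGG-16 configuration.

```python
import torch
import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    """One VGG stage: n_convs 3x3 convolutions, then 2x2 max pooling."""
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Filter count grows gradually with depth: 64 -> 128 -> 256 -> 512 -> 512.
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
    vgg_block(256, 512, 3),
    vgg_block(512, 512, 3),
)

# Xavier initialisation keeps activation variance roughly stable
# across layers, easing the vanishing gradient problem at depth 16.
for m in features.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

print(features(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```
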
![[vgg-spec.png]]

![[vgg-arch.png]]

# GoogLeNet

2015

- [[Inception Layer]]s

- Multiple [[Deep Learning#Loss Function|Loss]] Functions

![[googlenet.png]]

## [[Inception Layer]]

![[googlenet-inception.png]]
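
A minimal PyTorch sketch of one inception module (an illustration; branch widths follow the "3a" module from the paper): four parallel branches whose outputs are concatenated along the channel axis, with 1×1 convolutions cutting channel counts before the expensive 3×3 and 5×5 branches.

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 / pool branches, concatenated on the
    channel axis; 1x1 convs reduce channels first to keep cost down."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
            nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU(),
        )
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
            nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU(),
        )
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(),
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# "3a" widths: output has 64 + 128 + 32 + 32 = 256 channels.
block = Inception(192, 64, 96, 128, 16, 32, 32)
print(block(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```
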
## Auxiliary [[Deep Learning#Loss Function|Loss]] Functions

- Two additional SoftMax classifier blocks attached partway through the network

- Help train a very deep network

- Mitigate the vanishing gradient problem (see the loss sketch below)

![[googlenet-auxilliary-loss.png]]
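
A sketch of how the three losses might be combined at training time, assuming a model that returns main logits plus two auxiliary logits; the 0.3 weight on each auxiliary term is the value used in the GoogLeNet paper, and the auxiliary heads are discarded at inference.

```python
import torch
import torch.nn.functional as F

def googlenet_loss(main_logits, aux1_logits, aux2_logits, targets):
    """Main SoftMax loss plus two down-weighted auxiliary losses.
    The auxiliary terms inject gradient into the middle of the
    network, countering the vanishing gradient problem."""
    main = F.cross_entropy(main_logits, targets)
    aux1 = F.cross_entropy(aux1_logits, targets)
    aux2 = F.cross_entropy(aux2_logits, targets)
    return main + 0.3 * (aux1 + aux2)

# Toy check: random logits for a batch of 4 over 1000 classes.
targets = torch.randint(0, 1000, (4,))
loss = googlenet_loss(torch.randn(4, 1000), torch.randn(4, 1000),
                      torch.randn(4, 1000), targets)
print(loss.item())
```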