# LeNet

- 1990s

![lenet-1989](../../../img/lenet-1989.png)

- 1989

![lenet-1998](../../../img/lenet-1998.png)

- 1998
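
A minimal PyTorch sketch of the 1998 LeNet-5 layout shown above: two conv → pool stages followed by fully connected layers. Layer sizes follow the paper; plain Tanh and average pooling stand in for the original's scaled variants.

```python
import torch
from torch import nn

# Rough LeNet-5: two conv+pool stages, then fully connected layers.
# Assumes 1x32x32 inputs and 10 output classes.
lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),   # 1x32x32 -> 6x28x28
    nn.Tanh(),
    nn.AvgPool2d(2),                  # -> 6x14x14
    nn.Conv2d(6, 16, kernel_size=5),  # -> 16x10x10
    nn.Tanh(),
    nn.AvgPool2d(2),                  # -> 16x5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.Tanh(),
    nn.Linear(120, 84),
    nn.Tanh(),
    nn.Linear(84, 10),                # one logit per digit class
)

print(lenet5(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```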

# AlexNet

2012

- [ReLU](../Activation%20Functions.md#ReLu)
- Local response normalisation

![alexnet](../../../img/alexnet.png)
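
A short PyTorch sketch of AlexNet's first stage, covering the two bullets above: ReLU as the activation and local response normalisation after it. Layer parameters follow the 2012 paper; the remaining four convolutional layers and the classifier are omitted.

```python
import torch
from torch import nn

# AlexNet's first stage: large-stride conv, ReLU, local response
# normalisation, then overlapping max pooling (3x3 window, stride 2).
stage1 = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),  # 3x227x227 -> 96x55x55
    nn.ReLU(),                                   # non-saturating activation
    nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
    nn.MaxPool2d(kernel_size=3, stride=2),       # -> 96x27x27
)

print(stage1(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 96, 27, 27])
```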

# VGG

2015

- 16 weight layers over AlexNet's 8
- Greater depth brings the vanishing gradient problem into play
- Xavier initialisation
- Uniform 3×3 kernel size throughout
- Gradual filter increase with depth (64 → 128 → 256 → 512); see the sketch below

![vgg-spec](../../../img/vgg-spec.png)

![vgg-arch](../../../img/vgg-arch.png)
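
A PyTorch sketch of how the bullets above combine in practice: each block stacks 3×3 convolutions at a fixed width, widths double from block to block, and Xavier initialisation helps the gradient survive the extra depth. Block depths and widths follow VGG-16's first three stages.

```python
import torch
from torch import nn

def vgg_block(in_ch: int, out_ch: int, convs: int) -> nn.Sequential:
    layers = []
    for i in range(convs):
        conv = nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                         kernel_size=3, padding=1)  # 3x3 throughout
        nn.init.xavier_uniform_(conv.weight)        # Xavier initialisation
        layers += [conv, nn.ReLU()]
    layers.append(nn.MaxPool2d(2))  # halve spatial size after each block
    return nn.Sequential(*layers)

# First three blocks of VGG-16: gradual filter increase 64 -> 128 -> 256.
features = nn.Sequential(
    vgg_block(3, 64, convs=2),
    vgg_block(64, 128, convs=2),
    vgg_block(128, 256, convs=3),
)
print(features(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 256, 28, 28])
```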

# GoogLeNet

2015

- [Inception Layer](Inception%20Layer.md)s
- Multiple [Loss](../Deep%20Learning.md#Loss%20Function) Functions

![googlenet](../../../img/googlenet.png)

## [Inception Layer](Inception%20Layer.md)

![googlenet-inception](../../../img/googlenet-inception.png)
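
A PyTorch sketch of the module in the figure: four parallel branches (1×1 conv, 1×1 → 3×3, 1×1 → 5×5, and max-pool → 1×1) whose outputs are concatenated along the channel axis. The branch widths below are those of the paper's first inception block (3a).

```python
import torch
from torch import nn

class Inception(nn.Module):
    """Inception layer: four parallel branches concatenated on the channel
    dimension. The 1x1 convs shrink channel count before the more
    expensive 3x3 and 5x5 convolutions."""

    def __init__(self, in_ch, c1, c2_red, c2, c3_red, c3, c4):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, c2_red, 1), nn.ReLU(),
                                nn.Conv2d(c2_red, c2, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                nn.Conv2d(c3_red, c3, 5, padding=2), nn.ReLU())
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, c4, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# Block "3a" from the paper: 64 + 128 + 32 + 32 = 256 output channels.
block = Inception(192, c1=64, c2_red=96, c2=128, c3_red=16, c3=32, c4=32)
print(block(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```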

## Auxiliary [Loss](../Deep%20Learning.md#Loss%20Function) Functions

- Two additional SoftMax classifier blocks attached to intermediate layers
- Help train a really deep network
- Counter the vanishing gradient problem by injecting gradient mid-network

![googlenet-auxilliary-loss](../../../img/googlenet-auxilliary-loss.png)
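
A sketch of how the three classifier outputs might combine into one training loss. The logit tensors are placeholders for the network's final head and its two auxiliary ones; the 0.3 weighting on the auxiliary heads follows the paper.

```python
import torch
import torch.nn.functional as F

# Combine the main loss with the two auxiliary losses during training.
# (cross_entropy applies the SoftMax internally.)
def googlenet_loss(main_logits, aux1_logits, aux2_logits, targets):
    main = F.cross_entropy(main_logits, targets)
    aux = F.cross_entropy(aux1_logits, targets) + F.cross_entropy(aux2_logits, targets)
    return main + 0.3 * aux  # auxiliary heads inject gradient mid-network

# Dummy example: batch of 4, 1000 ImageNet classes.
logits = [torch.randn(4, 1000) for _ in range(3)]
targets = torch.randint(0, 1000, (4,))
print(googlenet_loss(*logits, targets))
```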