# LeNet
- 1990s
![[lenet-1989.png]]
- 1989
![[lenet-1998.png]]
- 1998
# AlexNet
2012
- [[Activation Functions#ReLu|ReLU]]
- Local response normalisation
![[alexnet.png]]
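The normalisation AlexNet introduced is local response normalisation across channels. A minimal sketch, using the constants from the AlexNet paper (k=2, n=5, α=1e-4, β=0.75):

```python
def lrn(activations, k=2.0, alpha=1e-4, beta=0.75, n=5):
    """Local response normalisation across channels (AlexNet-style).

    activations: per-channel values at one spatial position.
    Each channel is divided by a factor built from the squared
    activations of its n neighbouring channels.
    """
    out = []
    N = len(activations)
    for i, a in enumerate(activations):
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        s = sum(activations[j] ** 2 for j in range(lo, hi + 1))
        out.append(a / (k + alpha * s) ** beta)
    return out
```

Channels with strongly active neighbours get damped, mimicking lateral inhibition.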
# VGG
2015
- 16 layers over AlexNet's 8
- Addresses the vanishing gradient problem
- Xavier weight initialisation
- Uniform small (3×3) kernels throughout
- Filter count increases gradually with depth
![[vgg-spec.png]]
![[vgg-arch.png]]
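Why uniform 3×3 kernels: two stacked 3×3 convolutions cover the same 5×5 receptive field as one 5×5 convolution, with fewer weights and an extra non-linearity between them. A quick parameter count (biases ignored, 128 channels assumed for illustration):

```python
def conv_params(k, c_in, c_out):
    # weight count of one k x k convolution layer (no bias)
    return k * k * c_in * c_out

c = 128
p_5x5 = conv_params(5, c, c)       # one 5x5 conv:        25 * c^2
p_3x3 = 2 * conv_params(3, c, c)   # two stacked 3x3:     18 * c^2
assert p_3x3 < p_5x5
```

The saving grows with channel count, which is how VGG affords its depth.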
# GoogLeNet
2015
- [[Inception Layer]]s
- Multiple Loss Functions
![[googlenet.png]]
## [[Inception Layer]]
![[googlenet-inception.png]]
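The channel arithmetic of an inception module, sketched with the branch widths the GoogLeNet paper lists for its first module ("3a"): parallel branches are concatenated along the channel axis, and 1×1 "reduce" convolutions keep the wider kernels cheap.

```python
def conv_params(k, c_in, c_out):
    # weight count of one k x k convolution layer (no bias)
    return k * k * c_in * c_out

# Output depth = sum of branch depths (1x1, 3x3, 5x5, pool projection)
out_channels = 64 + 128 + 32 + 32
assert out_channels == 256

# 1x1 bottleneck before the 5x5 branch (192 input channels assumed,
# reduced to 16, as in module 3a)
direct  = conv_params(5, 192, 32)
reduced = conv_params(1, 192, 16) + conv_params(5, 16, 32)
assert reduced < direct
```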
## Auxiliary Loss Functions
- Two additional SoftMax classifier heads attached to intermediate layers
- Help train a really deep network by injecting gradient signal closer to the early layers
- Mitigates the vanishing gradient problem
![[googlenet-auxilliary-loss.png]]
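During training the auxiliary losses are added to the main loss with a discount weight (0.3 in the GoogLeNet paper); at inference the auxiliary heads are discarded. A minimal sketch:

```python
def googlenet_loss(main_loss, aux1_loss, aux2_loss, aux_weight=0.3):
    # Total training loss: auxiliary classifier losses are weighted down
    # so they guide, rather than dominate, the gradient signal.
    return main_loss + aux_weight * (aux1_loss + aux2_loss)
```

Because the auxiliary heads sit partway through the network, their gradients reach the early layers via a much shorter path than the main head's.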