vault backup: 2023-06-01 08:11:37

Affected files:
.obsidian/graph.json
.obsidian/workspace.json
Money/Assets/Derivative.md
STEM/AI/Neural Networks/CNN/Examples.md
STEM/AI/Neural Networks/Deep Learning.md
STEM/AI/Neural Networks/MLP/Decision Boundary.md
STEM/CS/Languages/dotNet.md
STEM/Semiconductors/Equations.md
Tattoo/Engineering.md
andy 2023-06-01 08:11:37 +01:00
parent 236a5eac06
commit b30da1d29c
5 changed files with 24 additions and 25 deletions

STEM/AI/Neural Networks/CNN/Examples.md

@@ -1,8 +1,8 @@
# LeNet
- 1990s
![[lenet-1989.png]]
![lenet-1989](../../../img/lenet-1989.png)
- 1989
![[lenet-1998.png]]
![lenet-1998](../../../img/lenet-1998.png)
- 1998
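A minimal Keras sketch of the 1998 LeNet-5 layout, assuming `tanh` activations and average pooling as in the original paper (modern re-implementations often swap in ReLU and max pooling):

```python
from tensorflow.keras import layers, models

# LeNet-5-style stack: two conv/pool stages, then a small dense classifier
lenet = models.Sequential([
    layers.Conv2D(6, 5, activation="tanh", input_shape=(32, 32, 1)),
    layers.AveragePooling2D(2),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="softmax"),  # 10 digit classes
])
```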
# AlexNet
@@ -11,7 +11,7 @@
- [[Activation Functions#ReLu|ReLu]]
- Normalisation
![[alexnet.png]]
![alexnet](../../../img/alexnet.png)
# VGG
2015
@@ -22,8 +22,8 @@
- Similar kernel size throughout
- Gradual filter increase
![[vgg-spec.png]]
![[vgg-arch.png]]
![vgg-spec](../../../img/vgg-spec.png)
![vgg-arch](../../../img/vgg-arch.png)
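One way to express the two bullets above in Keras: every conv uses the same 3x3 kernel and the filter count grows stage by stage. The stage widths follow VGG-16, but the helper function is an illustrative sketch, not the published code.

```python
from tensorflow.keras import layers, models

def vgg_stage(model, filters, convs):
    # Uniform 3x3 kernels throughout; resolution halves, depth grows
    for _ in range(convs):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(2))

vgg = models.Sequential([layers.Input(shape=(224, 224, 3))])
for filters, convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
    vgg_stage(vgg, filters, convs)  # gradual filter increase: 64 -> 512
vgg.add(layers.Flatten())
vgg.add(layers.Dense(4096, activation="relu"))
vgg.add(layers.Dense(4096, activation="relu"))
vgg.add(layers.Dense(1000, activation="softmax"))
```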
# GoogLeNet
2015
@@ -31,13 +31,13 @@
- [[Inception Layer]]s
- Multiple [[Deep Learning#Loss Function|Loss]] Functions
![[googlenet.png]]
![googlenet](../../../img/googlenet.png)
## [[Inception Layer]]
![[googlenet-inception.png]]
![googlenet-inception](../../../img/googlenet-inception.png)
## Auxiliary [[Deep Learning#Loss Function|Loss]] Functions
- Two other SoftMax blocks
- Help train really deep networks
- Counteract the vanishing gradient problem (see the sketch below)
![[googlenet-auxilliary-loss.png]]
![googlenet-auxilliary-loss](../../../img/googlenet-auxilliary-loss.png)
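A hedged sketch of the auxiliary-loss idea using the Keras functional API: the extra SoftMax head gets its own down-weighted loss, so useful gradient is injected partway down the network instead of having to survive the full backward pass. The conv layers here are stand-ins, not GoogLeNet's real stages; the 0.3 weight follows the paper.

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 7, strides=2, activation="relu")(inputs)
x = layers.MaxPooling2D(3, strides=2)(x)
mid = layers.Conv2D(192, 3, activation="relu")(x)  # stand-in for the early stages

# Auxiliary SoftMax head attached to an intermediate layer
aux = layers.GlobalAveragePooling2D()(mid)
aux_out = layers.Dense(1000, activation="softmax", name="aux")(aux)

x = layers.Conv2D(256, 3, activation="relu")(mid)  # stand-in for the later stages
x = layers.GlobalAveragePooling2D()(x)
main_out = layers.Dense(1000, activation="softmax", name="main")(x)

model = models.Model(inputs, [main_out, aux_out])
model.compile(optimizer="sgd",
              loss={"main": "categorical_crossentropy",
                    "aux": "categorical_crossentropy"},
              loss_weights={"main": 1.0, "aux": 0.3})  # aux loss down-weighted
```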

STEM/AI/Neural Networks/Deep Learning.md

@@ -1,17 +1,17 @@
![[deep-digit-classification.png]]
![deep-digit-classification](../../img/deep-digit-classification.png)
# Loss Function
Objective Function
- [[Back-Propagation]]
- [Back-Propagation](MLP/Back-Propagation.md)
- Difference between predicted and target outputs
![[deep-loss-function.png]]
![deep-loss-function](../../img/deep-loss-function.png)
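A concrete NumPy illustration of "difference between predicted and target outputs", assuming mean squared error as the objective (the vectors are made up):

```python
import numpy as np

def mse(y_true, y_pred):
    # Average squared difference between target and prediction
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 0.0, 0.0])  # one-hot target
y_pred = np.array([0.7, 0.2, 0.1])  # network output
print(mse(y_true, y_pred))          # ~0.047; training pushes this toward 0
```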
- Test accuracy worse than train accuracy = overfitting
- [[MLP|Dense]] = [[MLP|fully connected]]
- Automates feature engineering
![[ml-dl.png]]
![ml-dl](../../img/ml-dl.png)
These are the two essential characteristics of how deep learning learns from data: the incremental, layer-by-layer way in which increasingly complex representations are developed, and the fact that these intermediate incremental representations are learned jointly, each layer being updated to follow both the representational needs of the layer above and the needs of the layer below. Together, these two properties have made deep learning vastly more successful than previous approaches to machine learning.
@@ -32,16 +32,16 @@ Predict
Evaluate
# Data Structure
- [[Tensor]] flow = channels last
- [Tensor](../../Maths/Tensor.md) flow = channels last
- (samples, height, width, channels)
- Vector data
- 2D [[tensor]]s of shape (samples, features)
- 2D [tensors](../../Maths/Tensor.md) of shape (samples, features)
- Time series data or sequence data
- 3D [[tensor]]s of shape (samples, timesteps, features)
- 3D [tensors](../../Maths/Tensor.md) of shape (samples, timesteps, features)
- Images
- 4D [[tensor]]s of shape (samples, height, width, channels) or (samples, channels, height, width)
- 4D [tensors](../../Maths/Tensor.md) of shape (samples, height, width, channels) or (samples, channels, height, width)
- Video
- 5D [[tensor]]s of shape (samples, frames, height, width, channels) or (samples, frames, channels, height, width)
- 5D [tensors](../../Maths/Tensor.md) of shape (samples, frames, height, width, channels) or (samples, frames, channels, height, width)
![[photo-tensor.png]]
![[matrix-dot-product.png]]
![photo-tensor](../../img/photo-tensor.png)
![matrix-dot-product](../../img/matrix-dot-product.png)
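A quick NumPy sketch of the shape conventions listed above (channels-last, as in TensorFlow; all sizes are arbitrary):

```python
import numpy as np

vector_data = np.zeros((100, 20))               # (samples, features)
timeseries  = np.zeros((100, 50, 20))           # (samples, timesteps, features)
images      = np.zeros((100, 224, 224, 3))      # (samples, height, width, channels)
video       = np.zeros((100, 30, 128, 128, 3))  # (samples, frames, height, width, channels)

for name, arr in [("vector", vector_data), ("timeseries", timeseries),
                  ("images", images), ("video", video)]:
    print(f"{name}: {arr.ndim}D tensor, shape {arr.shape}")
```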

STEM/AI/Neural Networks/MLP/Decision Boundary.md

@@ -1,4 +1,3 @@
![[hidden-neuron-decision.png]]
![[mlp-xor.png]]
![[mlp-xor-2.png]]
![hidden-neuron-decision](../../../img/hidden-neuron-decision.png)
![mlp-xor](../../../img/mlp-xor.png)
![mlp-xor-2](../../../img/mlp-xor-2.png)

STEM/CS/Languages/dotNet.md

@@ -30,4 +30,4 @@
- Portable executable (PE)
- DLL, EXE
![[cli-infrastructure.png]]
![cli-infrastructure](../../img/cli-infrastructure.png)

STEM/Semiconductors/Equations.md

@@ -11,7 +11,7 @@ $$J=\sigma E$$
$$V_{bi} = \frac{kT}{q}\ln\left(\frac{N_D N_A}{n_i^2}\right)$$
- $V_{bi}$ = Built-in Potential
[[Doping]]
[Doping](Doping.md)
$$J=nev$$
- $n$ = Carrier Density
- $e$ = Charge
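A worked example of the built-in potential formula; the doping levels and silicon's $n_i$ at 300 K below are assumed illustrative values, not from the note:

```python
import math

k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # temperature, K

N_D = 1e16    # donor concentration, cm^-3 (assumed)
N_A = 1e16    # acceptor concentration, cm^-3 (assumed)
n_i = 1.5e10  # intrinsic carrier concentration of Si at 300 K, cm^-3

# V_bi = (kT/q) * ln(N_D * N_A / n_i^2); the concentration units cancel
V_bi = (k * T / q) * math.log(N_D * N_A / n_i**2)
print(f"V_bi = {V_bi:.3f} V")  # ~0.69 V for these dopings
```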