# Activation Functions


- Limits output values
- Squashing function
# Threshold
- For binary functions
- Not differentiable
- Sharp rise
- *Heaviside function*
- Unipolar
- 0 <-> +1
- Bipolar
- -1 <-> +1
![[threshold-activation.png]]
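The two step conventions above can be sketched as follows (a minimal sketch; function names are illustrative, and mapping $v=0$ to $+1$ is an assumed convention):

```python
def unipolar_threshold(v: float) -> int:
    """Heaviside step: 0 for v < 0, +1 otherwise."""
    return 1 if v >= 0 else 0

def bipolar_threshold(v: float) -> int:
    """Signum-style step: -1 for v < 0, +1 otherwise."""
    return 1 if v >= 0 else -1
```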
# Sigmoid
- Logistic function
- Normalises
- Introduces non-linearity
- Alternative is $\tanh$
- -1 <-> +1
- Easy to take derivative
$$\frac d {dx} \sigma(x)=
\frac d {dx} \left[
\frac 1 {1+e^{-x}}
\right]
=\sigma(x)\cdot(1-\sigma(x))$$
![[sigmoid.png]]
### Derivative
$$y_j(n)=\varphi_j(v_j(n))=
\frac 1 {1+e^{-v_j(n)}}$$
$$\frac{\partial y_j(n)}{\partial v_j(n)}=
\varphi_j'(v_j(n))=
\frac{e^{-v_j(n)}}{(1+e^{-v_j(n)})^2}=
y_j(n)(1-y_j(n))$$
- Nice derivative
- Max value of $\varphi_j'(v_j(n))$ is $0.25$, occurring when $y_j(n)=0.5$
- Approaches min value of $0$ as $y_j\rightarrow 0$ or $1$
- Initial [[Weight Init|weights]] chosen so not saturated at 0 or 1
If $y=\frac u v$
Where $u$ and $v$ are differentiable functions
$$\frac{dy}{dx}=\frac d {dx}\left(\frac u v\right)$$
$$\frac{dy}{dx}=
\frac {v \frac d {dx}(u) - u\frac d {dx}(v)} {v^2}$$
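The derivative identity can be verified numerically against a central finite difference (a minimal sketch; names are illustrative):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x: float) -> float:
    """Closed-form derivative: sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# Central finite difference approximation of the derivative at x
x, h = 0.3, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
```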
# ReLU
Rectified Linear Unit
- For deep networks
- $y=\max(0,x)$
- CNNs
- Breaks associativity of successive convolutions (otherwise they would collapse into a single linear operation)
- Critical for learning complex functions
- Sometimes a small scalar slope for negative inputs
- Leaky ReLU
![[relu.png]]
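Both variants can be sketched in a few lines (a minimal sketch; the leaky slope $\alpha=0.01$ is an assumed default, not prescribed above):

```python
def relu(x: float) -> float:
    """Standard ReLU: max(0, x)."""
    return max(0.0, x)

def leaky_relu(x: float, alpha: float = 0.01) -> float:
    """Leaky ReLU: small scalar slope alpha for negative inputs."""
    return x if x > 0 else alpha * x
```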
# SoftMax
- Output is per-class vector of likelihoods
- Should be normalised into probability vector
## AlexNet
$$f(x_i)=\frac{\exp(x_i)}{\sum_{j=1}^{1000}\exp(x_j)}$$
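The formula generalises from AlexNet's 1000 classes to any class count; a minimal sketch, with the usual max-subtraction for numerical stability (standard practice, not part of the formula above):

```python
import math

def softmax(xs: list[float]) -> list[float]:
    """Normalise a vector of scores into a probability vector."""
    m = max(xs)  # subtract max so exp() cannot overflow
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```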