---
tags:
  - maths
---

- Limits output values
- Squashing function

# Threshold

- For binary outputs
- Not differentiable
	- Discontinuous jump at the threshold
- Heaviside function
- Unipolar
	- 0 <-> +1
- Bipolar
	- -1 <-> +1

![threshold-activation](../../img/threshold-activation.png)
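
A minimal sketch of the two variants (the function names are my own):

```python
import numpy as np

# Unipolar: 0 <-> +1 (Heaviside step)
def unipolar_threshold(v):
    return np.where(v < 0, 0.0, 1.0)

# Bipolar: -1 <-> +1
def bipolar_threshold(v):
    return np.where(v < 0, -1.0, 1.0)

v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(unipolar_threshold(v))  # [0. 0. 1. 1. 1.]
print(bipolar_threshold(v))   # [-1. -1.  1.  1.  1.]
```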

# Sigmoid

- Logistic function
- Normalises
	- 0 <-> +1
- Introduces non-linearity
- Alternative is tanh
	- -1 <-> +1
- Easy to take derivative
$$\frac d {dx} \sigma(x)=
\frac d {dx} \left[
\frac 1 {1+e^{-x}}
\right]
=\sigma(x)\cdot(1-\sigma(x))$$

![sigmoid](../../img/sigmoid.png)

## Derivative

$$y_j(n)=\varphi_j(v_j(n))=
\frac 1 {1+e^{-v_j(n)}}$$
$$\frac{\partial y_j(n)}{\partial v_j(n)}=
\varphi_j'(v_j(n))=
\frac{e^{-v_j(n)}}{(1+e^{-v_j(n)})^2}=
y_j(n)(1-y_j(n))$$
- Nice derivative: computed directly from the output $y_j(n)$
- Max value of $\varphi_j'(v_j(n))$ is $0.25$, when $y_j(n)=0.5$
- Min value of 0 when $y_j=0$ or $1$
- Initial [weights](Weight%20Init.md) chosen so the units are not saturated at 0 or 1
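
A quick numerical check of this identity; the test grid and step size `h` are arbitrary choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
h = 1e-6

analytic = sigmoid(x) * (1.0 - sigmoid(x))             # y(1 - y)
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference

print(np.max(np.abs(analytic - numeric)))  # ~1e-10: identity holds
print(np.max(analytic))                    # 0.25, at x = 0 (y = 0.5)
```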

If $y=\frac u v$, where $u$ and $v$ are differentiable functions of $x$, the quotient rule gives

$$\frac{dy}{dx}=\frac d {dx}\left(\frac u v\right)$$

$$\frac{dy}{dx}=
\frac {v \frac d {dx}(u) - u\frac d {dx}(v)} {v^2}$$
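
Applying the rule with $u=1$ and $v=1+e^{-v_j(n)}$ recovers the derivative above (intermediate step spelled out here):

$$\varphi_j'(v_j(n))=
\frac{(1+e^{-v_j(n)})\cdot 0-1\cdot(-e^{-v_j(n)})}{(1+e^{-v_j(n)})^2}=
\frac{e^{-v_j(n)}}{(1+e^{-v_j(n)})^2}$$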

# ReLU
Rectified Linear Unit
- For deep networks
- $y=\max(0,x)$
- CNNs
	- Breaks associativity of successive [convolutions](../../Signal%20Proc/Convolution.md)
		- Without a non-linearity, stacked convolutions collapse into a single convolution
		- Critical for learning complex functions
	- Sometimes a small scalar slope for negative inputs
		- Leaky ReLU (sketched below)

![relu](../../img/relu.png)
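
A minimal sketch of both variants; the leaky slope `alpha = 0.01` is a common but arbitrary choice:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # y = max(0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x < 0, alpha * x, x)  # small slope alpha for x < 0

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))        # [0.  0.  0.  1.5]
print(leaky_relu(x))  # [-0.02  -0.005  0.  1.5]
```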

# SoftMax
- Output is a per-class vector of likelihoods
	- Normalised into a probability vector (non-negative, sums to 1)

## AlexNet
Softmax over AlexNet's 1000 ImageNet output classes:
$$f(x_i)=\frac{\exp(x_i)}{\sum_{j=1}^{1000}\exp(x_j)}$$
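
A minimal sketch of the same normalisation; subtracting $\max(x)$ first is a standard numerical-stability trick, not part of the formula above:

```python
import numpy as np

def softmax(x):
    # Shift by max(x) before exponentiating: same result, avoids overflow
    z = np.exp(x - np.max(x))
    return z / np.sum(z)

scores = np.array([2.0, 1.0, 0.1])
p = softmax(scores)
print(p)        # [0.659 0.242 0.099]
print(p.sum())  # 1.0
```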