Andy Pack
c3ebefdd64
- Feedforward
- A single hidden layer can approximate any continuous function
- Universal approximation theorem
- Each hidden layer can operate as a different feature extraction layer
- Lots of weights to learn
- Back-Propagation is supervised
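The feedforward structure described above can be sketched as a minimal forward pass. The layer sizes (3 inputs, 4 hidden units, 2 outputs), random weights, and sigmoid activation are illustrative assumptions, not values from the note:

```python
import numpy as np

# Assumed sizes for illustration: 3 inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden-layer weight matrix
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))   # output-layer weight matrix
b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)       # hidden layer acts as a feature extractor
    return sigmoid(W2 @ h + b2)    # feedforward: activations flow one way

y = forward(np.array([1.0, 0.5, -0.2]))
print(y.shape)  # (2,)
```

Each hidden layer is just another matrix multiply plus activation, which is why deeper networks accumulate many weights to learn.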
Universal Approximation Theorem
A finite feedforward MLP with one hidden layer can, in theory, approximate any continuous mathematical function
- In practice, such a network is not always trainable with back-propagation
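One common statement of the theorem (given here as a sketch, with $\varphi$ a sigmoid-like activation): for any continuous $f$ on a compact set and any $\varepsilon > 0$, there exist $N$, weights $w_i$, biases $b_i$ and output weights $v_i$ such that the single-hidden-layer network

$$F(x) = \sum_{i=1}^{N} v_i \, \varphi\!\left(w_i^\top x + b_i\right)$$

satisfies $|F(x) - f(x)| < \varepsilon$ everywhere on the set. The theorem guarantees such weights exist, but says nothing about whether back-propagation will find them.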
Weight Matrix
- Use matrix multiplication for layer output
- TLU (threshold logic unit) is a hard limiter
- $o_1$ to $o_4$ must all be 1 to overcome the $-3.5$ bias and force the output to 1
- Can generate a non-linear decision boundary
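The TLU example above can be checked directly: with unit weights on all four inputs and a bias of $-3.5$, the weighted sum only exceeds zero when every input is 1. A minimal sketch (the unit weights are assumed from the note's description):

```python
import numpy as np

weights = np.ones(4)   # one weight per input o_1..o_4
bias = -3.5            # bias from the note's example

def tlu(o):
    """Hard limiter: output 1 if the weighted sum plus bias exceeds 0, else 0."""
    return int(np.dot(weights, o) + bias > 0)

print(tlu([1, 1, 1, 1]))  # 4 - 3.5 = 0.5 > 0  -> 1
print(tlu([1, 1, 1, 0]))  # 3 - 3.5 = -0.5     -> 0
```

With these values the unit behaves as a 4-input AND gate; combining several such units across layers is what lets the network carve out a non-linear decision boundary.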