- Feed-forward (a minimal forward-pass sketch follows this list)
- A single hidden layer can approximate any function
- Universal approximation theorem
- Each hidden layer can operate as a different feature extraction layer
- Lots of weights to learn
- Back-Propagation is supervised
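
A minimal NumPy sketch of the feed-forward pass; the layer sizes, random weights, and names are illustrative assumptions, not from these notes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes (assumption): 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden-layer weights and biases
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # output-layer weights and biases

def forward(x):
    h = sigmoid(W1 @ x + b1)   # hidden layer acts as a feature extractor
    return sigmoid(W2 @ h + b2)

print(forward(np.array([1.0, 0.0, 1.0])))
```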
Universal Approximation Theorem
A finite feed-forward MLP with 1 hidden layer can in theory approximate any continuous function to arbitrary accuracy
- In practice such a network is often not trainable with Back-Propagation
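
For reference, a standard formal statement (the Cybenko/Hornik form, added here as context): for a continuous f on a compact set K ⊂ R^n, a suitable non-linear activation σ, and any ε > 0, there exist a width N and parameters v_i, w_i, b_i such that

$$
\left| f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon \quad \text{for all } x \in K
$$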
Weight Matrix
- Use matrix multiplication for layer output
- TLU (Threshold Logic Unit) is a hard limiter
- Outputs o_1 to o_4 must all be 1 to overcome the -3.5 bias and force the output to 1
- Can generate a non-linear Decision Boundary
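
A small sketch of the layer output as a matrix multiplication followed by a TLU hard limiter, reproducing the example above (unit weights on o_1 to o_4 and a bias of -3.5; the names and shapes are assumptions):

```python
import numpy as np

def tlu(z):
    # Hard limiter: 1 where the weighted sum is positive, else 0
    return (z > 0).astype(float)

W = np.ones((1, 4))    # one output unit with weight 1 on each of o_1..o_4
b = np.array([-3.5])   # bias of -3.5: all four inputs must be 1 to go positive

def layer_output(o):
    return tlu(W @ o + b)  # layer output via matrix multiplication

print(layer_output(np.array([1.0, 1.0, 1.0, 1.0])))  # [1.]  4 - 3.5 > 0
print(layer_output(np.array([1.0, 1.0, 1.0, 0.0])))  # [0.]  3 - 3.5 < 0
```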