- Feed-forward: activations flow one way, input to output (see the sketch after this list)
- A single hidden layer can, in theory, approximate any continuous function
- Universal approximation theorem
- Each hidden layer can operate as a different feature extraction layer
- Lots of weights to learn
- [[Back-Propagation]] is supervised (needs labelled targets)
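A minimal sketch of these points (all names, sizes, and hyperparameters below are assumptions, not from this note): a one-hidden-layer feed-forward network, with every connection a weight to learn, trained by supervised [[Back-Propagation]] on the XOR toy task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: XOR inputs X with labelled targets Y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Lots of weights to learn: a full matrix plus biases per layer.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Feed-forward: activations flow strictly input -> hidden -> output.
    H = np.tanh(X @ W1 + b1)       # hidden layer, non-linear activation
    P = sigmoid(H @ W2 + b2)       # output layer

    # Back-propagation: push the error gradient back through the layers.
    dP = (P - Y) * P * (1 - P)     # delta at the output (squared error)
    dH = (dP @ W2.T) * (1 - H**2)  # delta at the hidden layer (tanh')

    W2 -= lr * (H.T @ dP); b2 -= lr * dP.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(P.round(2).ravel())  # should approach [0, 1, 1, 0]
```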
![[mlp-arch.png]]
# Universal Approximation Theorem
A feed-forward MLP with a single (finite-width) hidden layer can, in theory, approximate any continuous function on a compact domain to arbitrary accuracy.
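In symbols (a paraphrase of the standard statement due to Cybenko and Hornik, not from this note): for any continuous $f$ on a compact set and any $\varepsilon > 0$ there exist a finite width $N$, weights $\mathbf{w}_i, \alpha_i$, and biases $b_i$ such that

$$\left| f(\mathbf{x}) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(\mathbf{w}_i^{\top}\mathbf{x} + b_i\right) \right| < \varepsilon \quad \text{for all } \mathbf{x},$$

where $\sigma$ is a non-constant, bounded, continuous activation function.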
- In practice such a network may not be trainable with [[Back-Propagation|BP]]: the theorem guarantees the weights exist, not that BP can find them, and the required hidden layer can be impractically wide (see the sketch below)
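A quick numerical illustration of both points (the target function, widths, and the random-feature shortcut are all assumptions for this sketch): one hidden layer is enough to fit a 1-D function, and here the hidden weights are left random while the output weights are solved by linear least squares, sidestepping [[Back-Propagation|BP]] entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: a smooth 1-D function sampled on a grid.
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(3 * x)

# Single hidden layer with random, frozen weights (tanh features).
N = 100                                  # hidden width; wider -> better fit
W = rng.normal(0, 2, (1, N))
b = rng.uniform(-np.pi, np.pi, N)
H = np.tanh(x @ W + b)                   # hidden activations, shape (200, N)

# Output weights by linear least squares: no back-propagation needed.
alpha, *_ = np.linalg.lstsq(H, y, rcond=None)

err = np.abs(H @ alpha - y).max()
print(f"max |error| = {err:.4f}")        # shrinks as N grows
```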
![[activation-function.png]]
![[mlp-arch-diagram.png]]