- Attention
    - Weights the significance of parts of the input (see the sketch after this list)
        - Including recursive output
- Similar to RNNs
    - Process sequential data
    - Translation & text summarisation
- Differences
    - Process the input all at once
    - Largely replaced LSTMs and gated recurrent units (GRUs), which had added attention mechanisms
    - No recurrent structure
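
A minimal NumPy sketch of scaled dot-product attention, the weighting operation described above. The function name, shapes, and the self-attention call are illustrative assumptions, not from the note:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value by the relevance of its key to each query."""
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled for stability
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys: each row becomes a weighting over input positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output mixes the values according to those weights
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                  # 5 tokens, 8-dim representations
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)                             # (5, 8)
```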
# Examples

- BERT
    - Bidirectional Encoder Representations from Transformers
- Original GPT

*Transformers Explained Visually (Part 1): Overview of Functionality*

# Architecture

## Input
- Byte-pair encoding tokeniser
- Tokens mapped to vectors via a word embedding
- Positional information added to the embeddings (see the sketch below)
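
A minimal sketch of this input pipeline, assuming a learned embedding table and the sinusoidal positional encoding from the original Transformer paper; vocabulary size, model width, and token ids are hypothetical:

```python
import numpy as np

d_model, vocab_size = 16, 1000                       # illustrative sizes
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, d_model))   # learned in practice

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding (sin on even dims, cos on odd)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

token_ids = np.array([5, 42, 7])   # hypothetical output of a BPE tokeniser
x = embedding[token_ids] + positional_encoding(len(token_ids), d_model)
print(x.shape)                     # (3, 16): one vector per token
```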

## Encoder/Decoder
- Similar to seq2seq models
    - Create an internal representation
- Encoder layers
    - Create encodings that contain information about which parts of the input are relevant to each other
    - Each subsequent encoder layer receives the previous layer's encodings as input
- Decoder layers
    - Take the encodings and do the opposite
    - Use the incorporated contextual information to produce an output
    - Use attention to draw information from the outputs of previous decoder layers before drawing from the encoder outputs
- Both use attention
- Both use MLP layers for additional processing of the outputs (see the encoder-layer sketch below)
    - Contain residual connections & layer norm steps
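
A minimal NumPy sketch of a single encoder layer: an attention sub-layer followed by a position-wise MLP, each wrapped in a residual connection and layer norm. Single-head attention without learned projections is used for brevity; all names and sizes are illustrative assumptions:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalise each position's features to zero mean, unit variance."""
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def self_attention(x):
    """Single-head self-attention with Q = K = V = x (no projections)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ x

def encoder_layer(x, W1, b1, W2, b2):
    # Attention sub-layer with residual connection and layer norm
    x = layer_norm(x + self_attention(x))
    # Position-wise MLP sub-layer (ReLU), again with residual + norm
    mlp = np.maximum(0, x @ W1 + b1) @ W2 + b2
    return layer_norm(x + mlp)

rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 16, 64, 5
x = rng.normal(size=(seq_len, d_model))
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(encoder_layer(x, W1, b1, W2, b2).shape)   # (5, 16)
```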