- [[Attention|Self-attention]]
    - Weighting the significance of parts of the input (see the sketch below)
        - Including recursive output
- Similar to [[RNN]]s
    - Process sequential data
        - Translation & text summarisation
    - Differences
        - Process input all at once
        - Largely replaced [[LSTM]] and gated recurrent units (GRU), which had attention mechanisms
        - No recurrent structure

![[transformer-arch.png]]
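
To make "weighting the significance of parts of the input" concrete, here is a minimal single-head scaled dot-product self-attention sketch in NumPy. The function and matrix names (`self_attention`, `Wq`, `Wk`, `Wv`) are illustrative, and the random projections stand in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of every position to every other
    weights = softmax(scores, axis=-1)       # weighting the significance of parts of the input
    return weights @ V                       # weighted sum over the whole sequence at once

# Toy usage: 5 tokens, model width 8, head width 4
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```

Nothing here iterates over time steps: the whole sequence is handled in a few matrix products, which is the sense in which transformers process the input all at once rather than recurrently.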
## Examples
- BERT
    - Bidirectional Encoder Representations from Transformers
    - Google
- Original GPT

[transformers-explained-visually-part-1-overview-of-functionality](https://towardsdatascience.com/transformers-explained-visually-part-1-overview-of-functionality-95a6dd460452)
# Architecture
## Input
- Byte-pair encoding tokeniser (pipeline sketched below)
- Mapped via word embedding into a vector
- Positional information added
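
A rough sketch of this input stage, under simplifying assumptions: a toy word-level vocabulary stands in for a real byte-pair-encoding tokeniser, the embedding table is random rather than learned, and sinusoidal positional encoding supplies the positional information. All names are illustrative.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding: even dimensions use sin, odd use cos
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Toy stand-in for a byte-pair-encoding tokeniser: a fixed id per word
vocab = {"the": 0, "cat": 1, "sat": 2}
tokens = [vocab[w] for w in "the cat sat".split()]

d_model = 8
rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab), d_model))  # word-embedding lookup table

X = embedding[tokens]                              # map token ids to vectors
X = X + positional_encoding(len(tokens), d_model)  # add positional information
print(X.shape)  # (3, 8)
```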
## Encoder/Decoder
- Similar to seq2seq models
    - Create an internal representation
- Encoder layers
    - Create encodings that contain information about which parts of the input are relevant to each other
    - Each subsequent encoder layer receives the previous encoder layer's output
- Decoder layers
    - Take the encodings and do the opposite
    - Use the incorporated contextual information to produce the output
    - Use attention to draw information from the outputs of previous decoders before drawing from the encoders
- Both use [[attention]]
- Both use dense layers for additional processing of the outputs
- Contain residual connections & layer norm steps (see the sketch below)
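
A minimal sketch of one encoder layer, showing the residual connections and layer-norm steps around the attention and dense sub-layers. The attention step is passed in as a function (an identity placeholder in the toy usage below); `encoder_layer` and the weight names are illustrative, not any particular library's API.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalise each position's vector to zero mean and unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def encoder_layer(X, attention_fn, W1, b1, W2, b2):
    # Self-attention sub-layer, wrapped in a residual connection + layer norm
    X = layer_norm(X + attention_fn(X))
    # Dense (feed-forward) sub-layer with ReLU, also residual + layer norm
    hidden = np.maximum(0.0, X @ W1 + b1)
    return layer_norm(X + hidden @ W2 + b2)

# Toy usage with an identity "attention" stand-in, just to show the shapes
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 5, 8, 16
X = rng.normal(size=(seq_len, d_model))
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(encoder_layer(X, lambda x: x, W1, b1, W2, b2).shape)  # (5, 8)
```

A decoder layer follows the same residual-plus-norm pattern, but adds a second attention sub-layer that draws on the encoder's output.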