Transformers

  • Attention
    • Weights the significance of different parts of the input
      • Including the model's own recursively generated output (see the sketch below)
  • Similar to RNNs
    • Process sequential data
    • Translation & text summarisation
    • Differences
      • Process the input all at once
    • Largely replaced LSTMs and gated recurrent units (GRUs), which had attention mechanisms
  • No recurrent structure

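A minimal sketch of the core attention step (scaled dot-product attention), written in PyTorch; the tensor shapes and the shared query/key/value example are illustrative assumptions, not a full multi-head implementation.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Weight each value by how relevant its key is to the query.

    q, k, v: (batch, seq_len, d_k) tensors.
    Returns the attention output and the weight matrix.
    """
    d_k = q.size(-1)
    # Similarity of every query position with every key position
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    # Normalise into a probability distribution over the input positions
    weights = torch.softmax(scores, dim=-1)
    # Weighted sum of values: the significance-weighted representation
    return weights @ v, weights

# Example: one sequence of 5 tokens with 64-dimensional projections
q = k = v = torch.randn(1, 5, 64)
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, w.shape)  # torch.Size([1, 5, 64]) torch.Size([1, 5, 5])
```
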
![[transformer-arch.png]]

Examples

  • BERT
    • Bidirectional Encoder Representations from Transformers
    • Google
  • Original GPT

transformers-explained-visually-part-1-overview-of-functionality

Architecture

Input

  • Byte-pair encoding tokeniser
  • Tokens mapped to vectors via a word embedding
    • Positional information added (see the sketch below)
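
A sketch of the input pipeline after tokenisation: token IDs are mapped to embedding vectors and sinusoidal positional information is added. The vocabulary size, `d_model` and the token IDs below are illustrative assumptions; the byte-pair encoding step is assumed to have already produced the IDs.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos positional encodings, one row per position."""
    pos = torch.arange(seq_len).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

vocab_size, d_model = 10_000, 512
embedding = nn.Embedding(vocab_size, d_model)

token_ids = torch.tensor([[5, 42, 7, 900]])    # (batch=1, seq_len=4), hypothetical BPE output
x = embedding(token_ids) * math.sqrt(d_model)  # word embedding vectors
x = x + sinusoidal_positional_encoding(token_ids.size(1), d_model)  # add positional information
print(x.shape)  # torch.Size([1, 4, 512])
```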

Encoder/Decoder

  • Similar to seq2seq models
  • Create an internal representation
  • Encoder layers
    • Create encodings that contain information about which parts of the input are relevant to each other
    • Each subsequent encoder layer receives the previous encoder layer's output
  • Decoder layers
    • Take the encodings and do the opposite, generating the output sequence
    • Use the incorporated contextual information to produce the output
    • Each has an attention mechanism that draws on the previously generated output before drawing information from the encodings
  • Both use attention
  • Both use MLP layers for additional processing of the outputs
    • Contain residual connections & layer norm steps (sketched below)
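
A sketch of a single encoder layer under the structure described above: self-attention followed by an MLP (feed-forward) block, each wrapped in a residual connection and layer normalisation. The dimensions and layer count are illustrative assumptions, not tied to any particular published model.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(                 # the MLP block
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Self-attention: each position attends to every other input position
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + attn_out)             # residual connection + layer norm
        # Position-wise MLP for additional processing of the attention output
        x = self.norm2(x + self.ff(x))           # residual connection + layer norm
        return x

# Stacking layers: each subsequent layer receives the previous layer's output
layers = nn.ModuleList([EncoderLayer() for _ in range(6)])
x = torch.randn(1, 10, 512)                      # (batch, seq_len, d_model)
for layer in layers:
    x = layer(x)
print(x.shape)  # torch.Size([1, 10, 512])
```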