Transformers

  • Attention
    • Weighting the significance of parts of the input (see the scaled dot-product sketch after this list)
      • Including the model's own previously generated output
  • Similar to RNNs
    • Process sequential data
    • Translation & text summarisation
    • Differences
      • Process the whole input at once rather than sequentially
    • Largely replaced LSTMs and gated recurrent units (GRUs), which had been augmented with attention mechanisms
  • No recurrent structure
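
A minimal sketch of the attention weighting above, assuming standard scaled dot-product self-attention (PyTorch; the function name and dimensions are illustrative):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Weight each position's value by how relevant every other position is.

    q, k, v: (batch, seq_len, d_k) tensors of queries, keys and values.
    mask:    optional (seq_len, seq_len) boolean mask (True = keep).
    """
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled to keep softmax well-behaved
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)    # (batch, seq, seq)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)                  # significance of each input part
    return weights @ v                                   # weighted sum of values

# Toy usage: 2 sequences, 5 tokens each, 64-dim representations
x = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(x, x, x)              # self-attention
```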

![[transformer-arch.png]]

Examples

  • BERT
    • Bidirectional Encoder Representations from Transformers
    • Google
  • Original GPT

transformers-explained-visually-part-1-overview-of-functionality

Architecture

Input

  • Byte-pair encoding tokeniser
  • Tokens mapped via a word embedding into vectors
    • Positional information added (sketched below)
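
A sketch of this input pipeline, assuming the BPE tokeniser has already produced integer token ids; the sinusoidal positional encoding follows the original Transformer formulation, and all sizes are illustrative (PyTorch):

```python
import math
import torch
import torch.nn as nn

vocab_size, d_model, max_len = 32000, 512, 256   # illustrative sizes

# Map BPE token ids to dense vectors
embedding = nn.Embedding(vocab_size, d_model)

# Fixed sinusoidal positional encodings, added so word-order information survives
pos = torch.arange(max_len).unsqueeze(1)                       # (max_len, 1)
div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
pe = torch.zeros(max_len, d_model)
pe[:, 0::2] = torch.sin(pos * div)
pe[:, 1::2] = torch.cos(pos * div)

token_ids = torch.randint(0, vocab_size, (1, 10))              # stand-in for BPE output
x = embedding(token_ids) + pe[: token_ids.size(1)]             # (1, 10, d_model)
```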

Encoder/Decoder

  • Similar to seq2seq models
  • Create internal representation
  • Encoder layers
    • Create encodings that contain information about which parts of the input are relevant to each other
    • Each subsequent encoder layer receives the previous encoder layer's output
  • Decoder layers
    • Take the encodings and do the opposite
    • Use the incorporated contextual information to generate the output sequence
    • Have an additional attention mechanism that draws information from the outputs of previous decoder layers before drawing from the encodings
  • Both use attention
  • Both use MLP (feed-forward) layers for additional processing of the outputs
    • Contain residual connections & layer norm steps (an encoder-layer sketch follows this list)
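
A sketch of a single encoder layer as described above: self-attention followed by an MLP, each wrapped in a residual connection and layer norm. The post-norm ordering and layer sizes are assumptions matching the original architecture (PyTorch):

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(                 # position-wise feed-forward block
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Self-attention sub-layer with residual connection and layer norm
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # MLP sub-layer, again with residual connection and layer norm
        return self.norm2(x + self.mlp(x))

# Stack of layers: each one consumes the previous layer's encodings
encoder = nn.Sequential(*[EncoderLayer() for _ in range(6)])
out = encoder(torch.randn(2, 10, 512))            # (batch, seq, d_model)
```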