Transformers

  • Self-attention
    • Weights the significance of different parts of the input relative to each other
      • Including the model's own previously generated output (see the sketch after this list)
  • Similar to RNNs
    • Process sequential data
    • Used for translation & text summarisation
    • Differences
      • Process the whole input at once
      • No recurrent structure
    • Have largely replaced LSTMs and gated recurrent units (GRUs), which had attention mechanisms added on top
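
A minimal single-head scaled dot-product self-attention sketch in NumPy, showing how each token's output becomes a weighted sum of every token's value vector. The shapes, weight matrices and the `self_attention` helper are illustrative, not taken from any particular library.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projection matrices."""
    q = x @ w_q                                # queries
    k = x @ w_k                                # keys
    v = x @ w_v                                # values
    scores = q @ k.T / np.sqrt(k.shape[-1])    # pairwise relevance of every token to every other
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ v                         # weighted sum of values

# Example: 4 tokens, d_model = d_k = 8 (sizes chosen arbitrarily for the sketch)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)         # (4, 8): one context-aware vector per token
```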

Figure: Transformer architecture

Examples

  • BERT
    • Bidirectional Encoder Representations from Transformers
    • Google
  • Original GPT
    • Generative Pre-trained Transformer
    • OpenAI

Reference: Transformers Explained Visually (Part 1): Overview of Functionality

Architecture

Input

  • Byte-pair encoding (BPE) tokeniser splits the text into tokens
  • Tokens are mapped to vectors via a word embedding table
    • Positional information is then added, since the model itself has no notion of token order (see the sketch after this list)
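
A sketch of the input pipeline under the assumptions above: made-up token ids standing in for BPE tokeniser output, a random embedding table, and the sinusoidal positional encoding from the original Transformer paper added on top. All names and sizes here are illustrative.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model)[None, :]                    # (1, d_model)
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])               # even dimensions: sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])               # odd dimensions: cosine
    return pe

vocab_size, d_model = 1000, 16
embedding_table = np.random.default_rng(1).normal(size=(vocab_size, d_model))

token_ids = np.array([12, 7, 451, 3])                  # stand-in for BPE tokeniser output
x = embedding_table[token_ids]                         # word embeddings, (4, d_model)
x = x + positional_encoding(len(token_ids), d_model)   # inject position information
```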

Encoder/Decoder

  • Similar to seq2seq models
  • Create internal representation
  • Encoder layers
    • Create encodings that contain information about which parts of the input are relevant to each other
    • Each subsequent encoder layer receives the previous layer's output
  • Decoder layers
    • Take the encodings and do the opposite, generating the output sequence
    • Use the contextual information incorporated in the encodings to produce the output
    • Use self-attention to draw information from the output of previous decoder steps before drawing from the encoder output
  • Both use Attention
  • Both use dense layers for additional processing of the outputs (a sketch combining these steps follows this list)
    • Contain residual connections & layer norm steps
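
A rough sketch of one encoder layer tying these pieces together: single-head self-attention followed by a position-wise dense (feed-forward) block, each wrapped in a residual connection and layer norm. Weight names, sizes and the reuse of one parameter set across layers are simplifications for illustration; a real model has separate weights per layer and multiple attention heads.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def encoder_layer(x, p):
    """x: (seq_len, d_model); p: dict of weight matrices (shapes illustrative)."""
    # 1) self-attention sub-layer (single head), then residual connection + layer norm
    q, k, v = x @ p["w_q"], x @ p["w_k"], x @ p["w_v"]
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)             # row-wise softmax
    x = layer_norm(x + weights @ v)
    # 2) position-wise dense (feed-forward) sub-layer, then residual + layer norm
    ff = np.maximum(0, x @ p["w1"]) @ p["w2"]              # two dense layers with ReLU
    return layer_norm(x + ff)

# Stacking: each encoder layer receives the previous layer's output
rng = np.random.default_rng(2)
d_model, d_ff = 16, 64
p = {k: rng.normal(size=s) * 0.1 for k, s in {
    "w_q": (d_model, d_model), "w_k": (d_model, d_model), "w_v": (d_model, d_model),
    "w1": (d_model, d_ff), "w2": (d_ff, d_model)}.items()}
x = rng.normal(size=(4, d_model))                          # 4 token embeddings
for _ in range(6):                                         # 6 layers, as in the original paper
    x = encoder_layer(x, p)                                # (weights reused here only for brevity)
```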