- Meant to mimic cognitive attention
- Picks out relevant bits of information
- Use gradient descent
- Used in the 90s
    - Multiplicative modules
    - Sigma-pi units
    - Hyper-networks
- Draw from relevant state at any preceding point along the sequence
- Attention layer accesses all previous states and weighs them according to a learned measure of relevance
    - Allows referring arbitrarily far back to relevant tokens
- Can be added to RNNs
- In 2016, a new type of highly parallelisable decomposable attention was successfully combined with a feedforward network
    - Attention useful in and of itself, not just with RNNs
- Transformers use attention without recurrent connections
    - Process all tokens simultaneously
    - Calculate attention weights in successive layers
# Scaled Dot-Product
- Calculate attention weights between all tokens at once
- Learn 3 weight matrices
    - Query: $W_Q$
    - Key: $W_K$
    - Value: $W_V$
- Word vectors
    - For each token, $i$, input word embedding, $x_i$
    - Multiply with each of the above to produce vectors (see the sketch after this list)
        - Query vector: $q_i=x_iW_Q$
        - Key vector: $k_i=x_iW_K$
        - Value vector: $v_i=x_iW_V$
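A minimal NumPy sketch of these projections. The sizes `d_model` and `d_k`, the token count, and the random weight matrices are illustrative stand-ins for learned parameters, not values from the note.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_k = 8, 4                 # illustrative dimensions (assumption)
n_tokens = 3

X = rng.normal(size=(n_tokens, d_model))   # row i is the word embedding x_i

# Learned weight matrices; random stand-ins here
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

Q = X @ W_Q   # row i is the query vector q_i = x_i W_Q
K = X @ W_K   # row i is the key vector   k_i = x_i W_K
V = X @ W_V   # row i is the value vector v_i = x_i W_V
```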
- Attention vector
    - Query and key vectors between token $i$ and $j$ (sketch continued below)
        - $a_{ij}=q_i\cdot k_j$
    - Divided by root of dimensionality of key vectors
        - $\sqrt{d_k}$
    - Pass through softmax to normalise
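Continuing the sketch above, the scaled scores and their row-wise softmax:

```python
# Raw scores between every pair of tokens, scaled by sqrt(d_k)
scores = Q @ K.T / np.sqrt(d_k)    # scores[i, j] = q_i . k_j / sqrt(d_k)

# Softmax over j (each row) so the weights a_ij for a token sum to 1
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
```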
- $W_Q$ and $W_K$ are different matrices
    - Attention can be non-symmetric (quick check below)
    - Token $i$ attends to $j$ ($q_i\cdot k_j$ is large)
        - Doesn't imply that $j$ attends to $i$ ($q_j\cdot k_i$ can be small)
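A quick check on the sketch's weight matrix confirms this non-symmetry:

```python
# a_ij != a_ji in general, because W_Q and W_K are different matrices
print(np.allclose(weights, weights.T))   # almost always False
```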
- Output for token $i$ is weighted sum of value vectors of all tokens, weighted by $a_{ij}$
    - Attention from token $i$ to each other token
- $Q, K, V$ are matrices where the $i$th rows are the vectors $q_i, k_i, v_i$ respectively
$$\text{Attention}(Q,K,V)=\text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right)V$$
- softmax taken over horizontal axis
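Putting it together, a self-contained sketch of the matrix form above, again in NumPy with the earlier illustrative shapes:

```python
def attention(Q, K, V):
    """Scaled dot-product attention per the formula above.

    Q, K, V: arrays whose i-th rows are q_i, k_i, v_i.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the horizontal (last) axis, i.e. over j for each token i
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V   # row i is the weighted sum over j of a_ij * v_j

out = attention(Q, K, V)   # one d_k-dimensional output per token
```

Subtracting the row maximum before exponentiating is only for numerical stability; it does not change the softmax result.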