- Meant to mimic cognitive attention
- Picks out relevant bits of information
- Use gradient descent
- Used in the 90s
- Multiplicative modules
- Sigma-pi units
- Hyper-networks
- Draw from relevant state at any preceding point along sequence
- Addresses RNNs' vanishing gradient issues
- LSTM tends to poorly preserve far-back Neural Networks#Knowledge
- Attention layer accesses all previous states and weighs them according to a learned measure of relevance
- Allows referring arbitrarily far back to relevant tokens
- Can be added to RNNs
- In 2016, a new type of highly parallelisable decomposable attention was successfully combined with an Architectures network
- Attention is useful in and of itself, not just with RNNs
- Transformers use attention without recurrent connections
- Process all tokens simultaneously
- Calculate attention weights in successive layers
## Scaled Dot-Product
- Calculate attention weights between all tokens at once
- Learn 3 Weight Init matrices
    - Query: $W_Q$
    - Key: $W_K$
    - Value: $W_V$
- Word vectors
    - For each token, $i$, input word embedding, $x_i$
    - Multiply with each of the above to produce vectors
        - Query Vector: $q_i=x_iW_Q$
        - Key Vector: $k_i=x_iW_K$
        - Value Vector: $v_i=x_iW_V$
- For each token, attention vector
    - Query and key vectors between token $i$ and $j$: $a_{ij}=q_i\cdot k_j$
    - Divided by root of dimensionality of key vectors, $\sqrt{d_k}$
    - Pass through softmax to normalise
- $W_Q$ and $W_K$ are different matrices
    - Attention can be non-symmetric
    - Token $i$ attends to $j$ ($q_i\cdot k_j$ is large)
        - Doesn't imply that $j$ attends to $i$ ($q_j\cdot k_i$ can be small)
- Output for token $i$ is weighted sum of value vectors of all tokens, weighted by $a_{ij}$
    - Attention from token $i$ to each other token
- $Q, K, V$ are matrices where the $i$th rows are the vectors $q_i, k_i, v_i$ respectively
$$\text{Attention}(Q,K,V)=\text{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right)V$$
- softmax taken over the horizontal axis
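A minimal NumPy sketch of the per-token view above. The toy sizes (`n_tokens`, `d_model`, `d_k`) and the random embeddings and weight matrices are illustrative assumptions standing in for trained values; only the definitions $q_i=x_iW_Q$, $k_i=x_iW_K$, $v_i=x_iW_V$ and the scaled, softmax-normalised scores $a_{ij}$ come from the note.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and random inputs -- assumptions purely for illustration
n_tokens, d_model, d_k = 4, 8, 8

# Input word embeddings x_i, one token per row
x = rng.normal(size=(n_tokens, d_model))

# The 3 learned weight matrices (random placeholders for trained weights)
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Per-token view
output = np.zeros((n_tokens, d_k))
for i in range(n_tokens):
    q_i = x[i] @ W_Q                          # query vector q_i = x_i W_Q
    k = x @ W_K                               # key vectors k_j for all j
    v = x @ W_V                               # value vectors v_j for all j
    a_i = softmax(q_i @ k.T / np.sqrt(d_k))   # a_ij: scaled scores, softmax over j
    output[i] = a_i @ v                       # weighted sum of all value vectors
```

Because $W_Q$ and $W_K$ are different matrices, `a_i[j]` computed for token $i$ generally differs from the weight token $j$ assigns back to $i$, which is the non-symmetry noted above.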
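The same computation in matrix form, matching $\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$ with the softmax applied over the horizontal axis (each row of weights sums to 1). Again a sketch with assumed toy shapes and random placeholder weights, not a reference implementation.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # entry (i, j) = q_i . k_j / sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # stability; doesn't change the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the horizontal axis
    return weights @ V                             # row i = sum_j a_ij * v_j

# Q, K, V: the i-th rows are q_i, k_i, v_i, built from x and the weight matrices
rng = np.random.default_rng(0)
n_tokens, d_model, d_k = 4, 8, 8
x = rng.normal(size=(n_tokens, d_model))
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))

out = attention(x @ W_Q, x @ W_K, x @ W_V)         # shape (n_tokens, d_k)
```

Row $i$ of `out` equals the output of the per-token loop above; the matrix form just stacks the individual $q_i$, $k_i$, $v_i$ computations so all tokens are processed at once.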