From d7ab8f329a39354af325721ce30678cc809efe10 Mon Sep 17 00:00:00 2001
From: andy
Date: Mon, 5 Jun 2023 17:01:29 +0100
Subject: [PATCH] vault backup: 2023-06-05 17:01:29
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Affected files:
Money/Assets/Financial Instruments.md
Money/Assets/Security.md
Money/Markets/Markets.md
Politcs/Now.md
STEM/AI/Neural Networks/CNN/Examples.md
STEM/AI/Neural Networks/CNN/FCN/FCN.md
STEM/AI/Neural Networks/CNN/FCN/FlowNet.md
STEM/AI/Neural Networks/CNN/FCN/Highway Networks.md
STEM/AI/Neural Networks/CNN/FCN/ResNet.md
STEM/AI/Neural Networks/CNN/FCN/Skip Connections.md
STEM/AI/Neural Networks/CNN/FCN/Super-Resolution.md
STEM/AI/Neural Networks/CNN/GAN/DC-GAN.md
STEM/AI/Neural Networks/CNN/GAN/GAN.md
STEM/AI/Neural Networks/CNN/GAN/StackGAN.md
STEM/AI/Neural Networks/CNN/Inception Layer.md
STEM/AI/Neural Networks/CNN/Interpretation.md
STEM/AI/Neural Networks/CNN/Max Pooling.md
STEM/AI/Neural Networks/CNN/Normalisation.md
STEM/AI/Neural Networks/CNN/UpConv.md
STEM/AI/Neural Networks/CV/Layer Structure.md
STEM/AI/Neural Networks/MLP/MLP.md
STEM/AI/Neural Networks/Neural Networks.md
STEM/AI/Neural Networks/RNN/LSTM.md
STEM/AI/Neural Networks/RNN/RNN.md
STEM/AI/Neural Networks/RNN/VQA.md
STEM/AI/Neural Networks/SLP/Least Mean Square.md
STEM/AI/Neural Networks/SLP/Perceptron Convergence.md
STEM/AI/Neural Networks/SLP/SLP.md
STEM/AI/Neural Networks/Transformers/LLM.md
STEM/AI/Neural Networks/Transformers/Transformers.md
STEM/AI/Properties.md
STEM/CS/Language Binding.md
STEM/Light.md
STEM/Maths/Tensor.md
STEM/Quantum/Orbitals.md
STEM/Quantum/Schrödinger.md
STEM/Quantum/Standard Model.md
STEM/Quantum/Wave Function.md
Tattoo/Music.md
Tattoo/Plans.md
Tattoo/Sources.md
---
 AI/Neural Networks/CNN/Examples.md          |  2 +-
 AI/Neural Networks/CNN/FCN/FCN.md           | 13 ++++++------
 AI/Neural Networks/CNN/FCN/FlowNet.md       |  6 +++---
 .../CNN/FCN/Highway Networks.md             |  8 ++++----
 AI/Neural Networks/CNN/FCN/ResNet.md        |  6 +++---
 .../CNN/FCN/Skip Connections.md             | 10 +++++-----
 .../CNN/FCN/Super-Resolution.md             |  4 ++--
 AI/Neural Networks/CNN/GAN/DC-GAN.md        |  4 ++--
 AI/Neural Networks/CNN/GAN/GAN.md           | 20 +++++++++----------
 AI/Neural Networks/CNN/GAN/StackGAN.md      |  6 +++---
 AI/Neural Networks/CNN/Inception Layer.md   |  4 ++--
 AI/Neural Networks/CNN/Interpretation.md    |  6 +++---
 AI/Neural Networks/CNN/Max Pooling.md       |  2 +-
 AI/Neural Networks/CNN/Normalisation.md     |  2 +-
 AI/Neural Networks/CNN/UpConv.md            | 14 ++++++-------
 AI/Neural Networks/CV/Layer Structure.md    |  2 +-
 AI/Neural Networks/MLP/MLP.md               | 12 +++++------
 AI/Neural Networks/Neural Networks.md       |  2 +-
 AI/Neural Networks/RNN/LSTM.md              |  6 +++---
 AI/Neural Networks/RNN/RNN.md               |  4 ++--
 AI/Neural Networks/RNN/VQA.md               |  6 +++---
 AI/Neural Networks/SLP/Least Mean Square.md | 10 +++++-----
 .../SLP/Perceptron Convergence.md           |  2 +-
 AI/Neural Networks/SLP/SLP.md               |  4 ++--
 AI/Neural Networks/Transformers/LLM.md      |  6 +++---
 .../Transformers/Transformers.md            |  8 ++++----
 AI/Properties.md                            |  4 ++--
 CS/Language Binding.md                      |  6 +++---
 Light.md                                    |  6 +++---
 Maths/Tensor.md                             |  2 +-
 Quantum/Orbitals.md                         | 12 +++++------
 Quantum/Schrödinger.md                      |  2 +-
 Quantum/Standard Model.md                   |  6 +++---
 Quantum/Wave Function.md                    | 14 ++++++-------
 34 files changed, 110 insertions(+), 111 deletions(-)
diff --git a/AI/Neural Networks/CNN/Examples.md b/AI/Neural Networks/CNN/Examples.md
index 87ed0e7..ff331fb 100644
--- a/AI/Neural Networks/CNN/Examples.md
+++ b/AI/Neural Networks/CNN/Examples.md
@@ -28,7 +28,7 @@
 # GoogLeNet
 2015
-- [[Inception Layer]]s
+- [Inception Layer](Inception%20Layer.md)s
 - Multiple [[Deep Learning#Loss Function|Loss]] Functions

 ![googlenet](../../../img/googlenet.png)

diff --git a/AI/Neural Networks/CNN/FCN/FCN.md b/AI/Neural Networks/CNN/FCN/FCN.md
index 74610a3..7c18751 100644
--- a/AI/Neural Networks/CNN/FCN/FCN.md
+++ b/AI/Neural Networks/CNN/FCN/FCN.md
@@ -1,4 +1,4 @@
-Fully [[Convolution]]al Network
+Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al Network
 [[Convolutional Layer|Convolutional]] and [[UpConv|up-convolutional layers]] with [[Activation Functions#ReLu|ReLu]] but no others (pooling)
 - All some sort of Encoder-Decoder

@@ -9,12 +9,11 @@ Contractive → [UpConv](../UpConv.md)
 - For visual output
 - Previously image $\rightarrow$ vector
 - Additional layers to up-sample representation to an image
-    - Up-[[convolution]]al
-    - De-[[convolution]]al
+    - Up-[convolution](../../../../Signal%20Proc/Convolution.md)al
+    - De-[convolution](../../../../Signal%20Proc/Convolution.md)al

-![[fcn-uses.png]]
-
-![[fcn-arch.png]]
+![fcn-uses](../../../../img/fcn-uses.png)
+![fcn-arch](../../../../img/fcn-arch.png)

 # Training
 - Rarely from scratch
@@ -22,7 +21,7 @@ Contractive → [UpConv](../UpConv.md)
 - Replace final layers
 - [[MLP|FC]] layers
 - White-noise initialised
-- Add [[upconv]] layer(s)
+- Add [UpConv](../UpConv.md) layer(s)
 - Fine-tune train
 - Freeze others
 - Annotated GT images

diff --git a/AI/Neural Networks/CNN/FCN/FlowNet.md b/AI/Neural Networks/CNN/FCN/FlowNet.md
index 4148ae9..77dd834 100644
--- a/AI/Neural Networks/CNN/FCN/FlowNet.md
+++ b/AI/Neural Networks/CNN/FCN/FlowNet.md
@@ -7,16 +7,16 @@ Optical Flow
 ![flownet](../../../../img/flownet.png)

-# [[Skip Connections]]
+# [Skip Connections](Skip%20Connections.md)
 - Further through the network information is condensed
     - Less high frequency information
-- Link encoder layers to [[upconv]] layers
+- Link encoder layers to [UpConv](../UpConv.md) layers
     - Append activation maps from encoder to decoder

 # Encode
 ![flownet-encode](../../../../img/flownet-encode.png)

-# [[Upconv]]
+# [UpConv](../UpConv.md)
 ![flownet-upconv](../../../../img/flownet-upconv.png)

 # Training

diff --git a/AI/Neural Networks/CNN/FCN/Highway Networks.md b/AI/Neural Networks/CNN/FCN/Highway Networks.md
index 18499e4..56b7c61 100644
--- a/AI/Neural Networks/CNN/FCN/Highway Networks.md
+++ b/AI/Neural Networks/CNN/FCN/Highway Networks.md
@@ -1,9 +1,9 @@
-- [[Skip connections]] across individual layers
+- [Skip Connections](Skip%20Connections.md) across individual layers
     - Conditionally
     - Soft gates
     - Learn vs carry
 - Gradients propagate further
-- Inspired by [[LSTM]] [[RNN]]s
+- Inspired by [LSTM](../../RNN/LSTM.md) [RNN](../../RNN/RNN.md)s

-![[highway-vs-residual.png]]
-![[skip-connections 1.png]]
\ No newline at end of file
+![highway-vs-residual](../../../../img/highway-vs-residual.png)
+![skip-connections 1](../../../../img/skip-connections%201.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/CNN/FCN/ResNet.md b/AI/Neural Networks/CNN/FCN/ResNet.md
index e2617eb..1b90d0c 100644
--- a/AI/Neural Networks/CNN/FCN/ResNet.md
+++ b/AI/Neural Networks/CNN/FCN/ResNet.md
@@ -24,7 +24,7 @@
 - No dropout

 [[Datasets#ImageNet|ImageNet]] Error:
-![[imagenet-error.png]]
+![imagenet-error](../../../../img/imagenet-error.png)

-![[resnet-arch.png]]
-![[resnet-arch2.png]]
\ No newline at end of file
+![resnet-arch](../../../../img/resnet-arch.png)
+![resnet-arch2](../../../../img/resnet-arch2.png)
\ No newline at end of file
diff --git a/AI/Neural Networks/CNN/FCN/Skip Connections.md b/AI/Neural Networks/CNN/FCN/Skip Connections.md
index fa602f1..40c1654 100644
--- a/AI/Neural Networks/CNN/FCN/Skip Connections.md
+++ b/AI/Neural Networks/CNN/FCN/Skip Connections.md
@@ -1,16 +1,16 @@
-- Output of [[Convolutional Layer|conv]], c, layers are added to inputs of [[upconv]], d, layers
+- Output of [[Convolutional Layer|conv]], c, layers are added to inputs of [UpConv](../UpConv.md), d, layers
     - Element-wise, not channel appending
 - Propagate high frequency information to later layers
 - Two types
     - Additive
-        - [[ResNet]]
-        - [[Super-resolution]] auto-encoder
+        - [ResNet](ResNet.md)
+        - [Super-Resolution](Super-Resolution.md) auto-encoder
     - Concatenative
         - Densely connected architectures
            - DenseNet
-        - [[FlowNet]]
+        - [FlowNet](FlowNet.md)

-![[STEM/img/skip-connections.png]]
+![skip-connections](../../../../img/skip-connections.png)

 [AI Summer - Skip Connections](https://theaisummer.com/skip-connections/)
 [Arxiv - Visualising the Loss Landscape](https://arxiv.org/abs/1712.09913)
\ No newline at end of file

diff --git a/AI/Neural Networks/CNN/FCN/Super-Resolution.md b/AI/Neural Networks/CNN/FCN/Super-Resolution.md
index c624141..ca66cf2 100644
--- a/AI/Neural Networks/CNN/FCN/Super-Resolution.md
+++ b/AI/Neural Networks/CNN/FCN/Super-Resolution.md
@@ -7,6 +7,6 @@
 - Unsupervised?
 - Decoder stage
     - Identical architecture to encoder
-![[super-res.png]]
+![super-res](../../../../img/super-res.png)
 - Is actually contractive/up-sampling
-![[superres-results.png]]
\ No newline at end of file
+![superres-results](../../../../img/superres-results.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/CNN/GAN/DC-GAN.md b/AI/Neural Networks/CNN/GAN/DC-GAN.md
index 308b4c1..b39c54e 100644
--- a/AI/Neural Networks/CNN/GAN/DC-GAN.md
+++ b/AI/Neural Networks/CNN/GAN/DC-GAN.md
@@ -12,8 +12,8 @@ Deep [Convolutional](../../../../Signal%20Proc/Convolution.md) [GAN](GAN.md)
 - Train using Gaussian random noise for code
 - Discriminator
     - Contractive
-    - Cross-entropy [[Deep Learning#Loss Function|loss]]
-    - [[Convolutional Layer|Conv]] and leaky [[Activation Functions#ReLu|ReLu]] layers only
+    - Cross-entropy [loss](../../Deep%20Learning.md#Loss%20Function)
+    - [Conv](../Convolutional%20Layer.md) and leaky [[Activation Functions#ReLu|ReLu]] layers only
     - Normalised output via [[Activation Functions#Sigmoid|sigmoid]]

 ## [[Deep Learning#Loss Function|Loss]]

diff --git a/AI/Neural Networks/CNN/GAN/GAN.md b/AI/Neural Networks/CNN/GAN/GAN.md
index a94f424..fce9554 100644
--- a/AI/Neural Networks/CNN/GAN/GAN.md
+++ b/AI/Neural Networks/CNN/GAN/GAN.md
@@ -1,6 +1,6 @@
-# Fully [[Convolution]]al
-- Remove [[Max Pooling]]
-    - Use strided [[upconv]]
+# Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al
+- Remove [Max Pooling](../Max%20Pooling.md)
+    - Use strided [UpConv](../UpConv.md)
 - Remove [[MLP|FC]] layers
     - Hurts convergence in non-classification
 - Normalisation tricks
@@ -16,16 +16,16 @@
 - Discriminator is a classifier
     - Is image fake or real

-![[gan-arch.png]]
-![[gan-arch2.png]]
+![gan-arch](../../../../img/gan-arch.png)
+![gan-arch2](../../../../img/gan-arch2.png)

-![[gan-results.png]]
+![gan-results](../../../../img/gan-results.png)

 # Training
-![[gan-training-discriminator.png]]
-![[gan-training-generator.png]]
+![gan-training-discriminator](../../../../img/gan-training-discriminator.png)
+![gan-training-generator](../../../../img/gan-training-generator.png)

 # Code Vector Math for Control
-![[cvmfc.png]]
+![cvmfc](../../../../img/cvmfc.png)
 - Do [[Interpretation#Activation Maximisation|AM]] to derive code for an image
-![[code-vector-math-for-control-results.png]]
\ No newline at end of file
+![code-vector-math-for-control-results](../../../../img/code-vector-math-for-control-results.png)
\ No newline at end of file
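The Skip Connections diff above distinguishes additive (element-wise) from concatenative (channel-appending) skips. A minimal NumPy sketch of the two operations, with shapes and values invented purely for illustration:

```python
# Additive vs concatenative skip connections (shapes are illustrative).
import numpy as np

rng = np.random.default_rng(0)
encoder_map = rng.standard_normal((64, 32, 32))  # C x H x W activations from a conv layer
decoder_map = rng.standard_normal((64, 32, 32))  # same-shape activations entering an upconv layer

# Additive skip (ResNet, super-resolution auto-encoder): element-wise sum,
# channel count unchanged
additive = encoder_map + decoder_map
assert additive.shape == (64, 32, 32)

# Concatenative skip (DenseNet, FlowNet): append along the channel axis,
# channel count grows
concatenative = np.concatenate([encoder_map, decoder_map], axis=0)
assert concatenative.shape == (128, 32, 32)
```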
diff --git a/AI/Neural Networks/CNN/GAN/StackGAN.md b/AI/Neural Networks/CNN/GAN/StackGAN.md
index bee4760..b0db9bb 100644
--- a/AI/Neural Networks/CNN/GAN/StackGAN.md
+++ b/AI/Neural Networks/CNN/GAN/StackGAN.md
@@ -1,6 +1,6 @@
 - Feed output from synthesis into up-res network
 - Generate standard low-res image
-    - Feed into [[cGAN]]
+    - Feed into [cGAN](cGAN.md)

-![[stackgan.png]]
-![[stackgan-results.png]]
\ No newline at end of file
+![stackgan](../../../../img/stackgan.png)
+![stackgan-results](../../../../img/stackgan-results.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/CNN/Inception Layer.md b/AI/Neural Networks/CNN/Inception Layer.md
index feae7d5..d91172c 100644
--- a/AI/Neural Networks/CNN/Inception Layer.md
+++ b/AI/Neural Networks/CNN/Inception Layer.md
@@ -3,8 +3,8 @@
 - Couple of different scales
 - Concatenate results

-![[inception-layer-effect.png]]
-![[inception-layer-arch.png]]
+![inception-layer-effect](../../../img/inception-layer-effect.png)
+![inception-layer-arch](../../../img/inception-layer-arch.png)

 - 1 x 1
     - Averages over channels

diff --git a/AI/Neural Networks/CNN/Interpretation.md b/AI/Neural Networks/CNN/Interpretation.md
index b35fe90..62b6994 100644
--- a/AI/Neural Networks/CNN/Interpretation.md
+++ b/AI/Neural Networks/CNN/Interpretation.md
@@ -3,7 +3,7 @@
 - Maximise 1-hot output
 - Maximise [[Activation Functions#SoftMax|SoftMax]]

-![[am.png]]
+![am](../../../img/am.png)
 - **Use trained network**
     - Don't update weights
 - [[Architectures|Feedforward]] noise
@@ -11,7 +11,7 @@
     - Don't update weights
     - Update image

-![[am-process.png]]
+![am-process](../../../img/am-process.png)

 ## Regulariser
 - Fit to natural image statistics
 - Prone to high frequency noise
@@ -24,7 +24,7 @@ $$x^*=\text{argmin}_{x\in \mathbb R^{H\times W\times C}}\mathcal l(\phi(x),\phi_
 $$x^*=\text{argmin}_{x\in \mathbb R^{H\times W\times C}}\mathcal l(\phi(x),\phi_0)+\lambda\mathcal R(x)$$
 - Need a regulariser like above

-![[am-regulariser.png]]
+![am-regulariser](../../../img/am-regulariser.png)

 $$\mathcal R_{V^\beta}(f)=\int_\Omega\left(\left(\frac{\partial f}{\partial u}(u,v)\right)^2+\left(\frac{\partial f}{\partial v}(u,v)\right)^2\right)^{\frac \beta 2}du\space dv$$

diff --git a/AI/Neural Networks/CNN/Max Pooling.md b/AI/Neural Networks/CNN/Max Pooling.md
index 86c58c4..a2dadc8 100644
--- a/AI/Neural Networks/CNN/Max Pooling.md
+++ b/AI/Neural Networks/CNN/Max Pooling.md
@@ -5,7 +5,7 @@
 - Max value is the good bit
 - No parameters

-![[max-pooling.png]]
+![max-pooling](../../../img/max-pooling.png)

 ## Design Parameters
 - Size of input image

diff --git a/AI/Neural Networks/CNN/Normalisation.md b/AI/Neural Networks/CNN/Normalisation.md
index 7e798e3..cf17f1d 100644
--- a/AI/Neural Networks/CNN/Normalisation.md
+++ b/AI/Neural Networks/CNN/Normalisation.md
@@ -2,4 +2,4 @@
 - Apply kernel to same location of all channels
 - Pixels in window divided by sum of pixels within volume across channels

-![[cnn-normalisation.png]]
\ No newline at end of file
+![cnn-normalisation](../../../img/cnn-normalisation.png)
\ No newline at end of file
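To make the Max Pooling note above concrete, here is a small NumPy sketch of 2 x 2 max pooling with stride 2; the explicit loop is only for clarity, real frameworks ship optimised kernels:

```python
# 2x2 max pooling, stride 2: keep the max ("the good bit") of each window.
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            out[i // 2, j // 2] = x[i:i + 2, j:j + 2].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(x))  # [[ 5.  7.] [13. 15.]] - no learned parameters involved
```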
diff --git a/AI/Neural Networks/CNN/UpConv.md b/AI/Neural Networks/CNN/UpConv.md
index c092e77..9a18707 100644
--- a/AI/Neural Networks/CNN/UpConv.md
+++ b/AI/Neural Networks/CNN/UpConv.md
@@ -1,11 +1,11 @@
 - Fractionally strided convolution
-- Transposed [[convolution]]
+- Transposed [Convolution](../../../Signal%20Proc/Convolution.md)
 - Like a deep interpolation
 - Convolution with a fractional input stride
     - Up-sampling is convolution 'in reverse'
     - Not an actual inverse convolution
 - For scaling up by a factor of $f$
-    - Consider as a [[convolution]] of stride $1/f$
+    - Consider as a [Convolution](../../../Signal%20Proc/Convolution.md) of stride $1/f$
 - Could specify kernel
     - Or learn
 - Can have multiple upconv layers
@@ -13,23 +13,23 @@
 - For non-linear up-sampling conv
     - Interpolation is linear

-![[upconv.png]]
+![upconv](../../../img/upconv.png)

 # Convolution Matrix
 ## Normal
-![[upconv-matrix.png]]
+![upconv-matrix](../../../img/upconv-matrix.png)
 - Equivalent operation with a flattened input
 - Row per kernel location
 - Many-to-one operation

-![[upconv-matrix-result.png]]
+![upconv-matrix-result](../../../img/upconv-matrix-result.png)

 [Understanding transposed convolutions](https://www.machinecurve.com/index.php/2019/09/29/understanding-transposed-convolutions/)

 ## Transposed
-![[upconv-transposed-matrix.png]]
+![upconv-transposed-matrix](../../../img/upconv-transposed-matrix.png)
 - One-to-many

-![[upconv-matrix-transposed-result.png]]
\ No newline at end of file
+![upconv-matrix-transposed-result](../../../img/upconv-matrix-transposed-result.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/CV/Layer Structure.md b/AI/Neural Networks/CV/Layer Structure.md
index 3d14d5c..3c838f0 100644
--- a/AI/Neural Networks/CV/Layer Structure.md
+++ b/AI/Neural Networks/CV/Layer Structure.md
@@ -1 +1 @@
-![[cnn-cv-layer-arch.png]]
\ No newline at end of file
+![cnn-cv-layer-arch](../../../img/cnn-cv-layer-arch.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/MLP/MLP.md b/AI/Neural Networks/MLP/MLP.md
index eb51b04..6a9edf7 100644
--- a/AI/Neural Networks/MLP/MLP.md
+++ b/AI/Neural Networks/MLP/MLP.md
@@ -3,20 +3,20 @@
 - Universal approximation theorem
 - Each hidden layer can operate as a different feature extraction layer
 - Lots of [[Weight Init|weights]] to learn
-- [[Back-Propagation]] is supervised
+- [Back-Propagation](Back-Propagation.md) is supervised

-![[mlp-arch.png]]
+![mlp-arch](../../../img/mlp-arch.png)

 # Universal Approximation Theorem
 A finite [[Architectures|feedforward]] MLP with 1 hidden layer can in theory approximate any mathematical function
 - In practice not trainable with [[Back-Propagation|BP]]

-![[activation-function.png]]
-![[mlp-arch-diagram.png]]
+![activation-function](../../../img/activation-function.png)
+![mlp-arch-diagram](../../../img/mlp-arch-diagram.png)

 ## Weight Matrix
 - Use matrix multiplication for layer output
 - TLU is a hard limiter

-![[tlu.png]]
+![tlu](../../../img/tlu.png)
 - $o_1$ to $o_4$ must all be one to overcome -3.5 bias and force output to 1

-![[mlp-non-linear-decision.png]]
+![mlp-non-linear-decision](../../../img/mlp-non-linear-decision.png)
 - Can generate a non-linear [[Decision Boundary|decision boundary]]
\ No newline at end of file
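The UpConv diff above describes convolution as a matrix applied to a flattened input (row per kernel location, many-to-one) and transposed convolution as the one-to-many transpose of that matrix. A sketch of that view, assuming a 3 x 3 kernel over a 4 x 4 input with stride 1 and no padding:

```python
# Convolution as a matrix C over a flattened input; C.T gives the transposed
# (up-)convolution. Kernel values and input are illustrative.
import numpy as np

def conv_matrix(kernel: np.ndarray, in_size: int) -> np.ndarray:
    k = kernel.shape[0]
    out_size = in_size - k + 1
    C = np.zeros((out_size * out_size, in_size * in_size))
    for i in range(out_size):          # one row per kernel location
        for j in range(out_size):
            for u in range(k):
                for v in range(k):
                    C[i * out_size + j, (i + u) * in_size + (j + v)] = kernel[u, v]
    return C

kernel = np.arange(9, dtype=float).reshape(3, 3)
C = conv_matrix(kernel, 4)             # (4, 16): many-to-one
x = np.random.default_rng(1).standard_normal(16)
y = C @ x                              # forward convolution, flattened 2x2 output
up = C.T @ y                           # transposed convolution, flattened 4x4 output
print(C.shape, y.shape, up.shape)      # (4, 16) (4,) (16,)
```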
diff --git a/AI/Neural Networks/Neural Networks.md b/AI/Neural Networks/Neural Networks.md
index bb7cf8b..8d8040f 100644
--- a/AI/Neural Networks/Neural Networks.md
+++ b/AI/Neural Networks/Neural Networks.md
@@ -7,7 +7,7 @@
 - Interneuron connection strengths store acquired knowledge
     - Synaptic weights

-![[slp-arch.png]]
+![slp-arch](../../img/slp-arch.png)

 A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, and is characterised by four properties

diff --git a/AI/Neural Networks/RNN/LSTM.md b/AI/Neural Networks/RNN/LSTM.md
index 10ba090..f81b270 100644
--- a/AI/Neural Networks/RNN/LSTM.md
+++ b/AI/Neural Networks/RNN/LSTM.md
@@ -1,7 +1,7 @@
 Long Short Term Memory

-- More general form of [[RNN]]
+- More general form of [RNN](RNN.md)
 - Explicitly encode memory state, C

-![[lstm.png]]
-![[lstm-slp.png]]
\ No newline at end of file
+![lstm](../../../img/lstm.png)
+![lstm-slp](../../../img/lstm-slp.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/RNN/RNN.md b/AI/Neural Networks/RNN/RNN.md
index 31cd7c9..7ea85ed 100644
--- a/AI/Neural Networks/RNN/RNN.md
+++ b/AI/Neural Networks/RNN/RNN.md
@@ -13,5 +13,5 @@ Recurrent Neural Network
 - In practice suffers from vanishing gradient
     - Can't extract precise information about previous tokens

-![[rnn-input.png]]
-![[rnn-recurrence.png]]
\ No newline at end of file
+![rnn-input](../../../img/rnn-input.png)
+![rnn-recurrence](../../../img/rnn-recurrence.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/RNN/VQA.md b/AI/Neural Networks/RNN/VQA.md
index f84c355..e775976 100644
--- a/AI/Neural Networks/RNN/VQA.md
+++ b/AI/Neural Networks/RNN/VQA.md
@@ -1,17 +1,17 @@
 Visual Question Answering

 - Combine visual with text sequence
-    - [[CNN]] + [[LSTM]]
+    - [CNN](../CNN/CNN.md) + [LSTM](LSTM.md)
 - Generate text from images
     - Automatic scene description
 - Cross-modal

-![[cnn+lstm.png]]
+![cnn+lstm](../../../img/cnn+lstm.png)
 - Word embedding not character

 # Freeform
 - Encode facts with two text streams

-![[vqa-block.png]]
+![vqa-block](../../../img/vqa-block.png)

 # Limitations
 - Repetitive answers
     - Not much variation

diff --git a/AI/Neural Networks/SLP/Least Mean Square.md b/AI/Neural Networks/SLP/Least Mean Square.md
index 9e45812..05ab0ea 100644
--- a/AI/Neural Networks/SLP/Least Mean Square.md
+++ b/AI/Neural Networks/SLP/Least Mean Square.md
@@ -59,16 +59,16 @@ $$\hat{w}(n+1)=\hat{w}(n)+\eta \cdot x(n) \cdot e(n)$$
 - Sensitivity to variation in eigenstructure of input
 - Typically requires around 10 x the dimensionality of the input space in iterations
     - Worse with high-d input spaces

-![[slp-mse.png]]
+![slp-mse](../../../img/slp-mse.png)
 - Use steepest descent
     - Partial derivatives

-![[slp-steepest-descent.png]]
+![slp-steepest-descent](../../../img/slp-steepest-descent.png)
 - Can be solved by matrix inversion
 - Stochastic
     - Random progress
     - Will overall improve

-![[lms-algorithm.png]]
+![lms-algorithm](../../../img/lms-algorithm.png)

 $$\hat{w}(n+1)=\hat{w}(n)+\eta\cdot x(n)\cdot[d(n)-x^T(n)\cdot\hat w(n)]$$
 $$=[I-\eta\cdot x(n)x^T(n)]\cdot\hat{w}(n)+\eta\cdot x(n)\cdot d(n)$$
@@ -76,6 +76,6 @@ $$=[I-\eta\cdot x(n)x^T(n)]\cdot\hat{w}(n)+\eta\cdot x(n)\cdot d(n)$$
 Where
 $$\hat w(n)=z^{-1}[\hat w(n+1)]$$

 ## Independence Theory
-![[slp-lms-independence.png]]
+![slp-lms-independence](../../../img/slp-lms-independence.png)

-![[sl-lms-summary.png]]
\ No newline at end of file
+![sl-lms-summary](../../../img/sl-lms-summary.png)
\ No newline at end of file
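The LMS diff above ends with the update $\hat{w}(n+1)=\hat{w}(n)+\eta \cdot x(n) \cdot e(n)$. A runnable sketch of that stochastic steepest-descent step; the toy target weights and learning rate are assumptions for illustration:

```python
# LMS: stochastic steepest descent on the instantaneous squared error.
import numpy as np

rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0, 0.5])  # unknown system the filter adapts to
w_hat = np.zeros(3)
eta = 0.05                           # small eta: slow adaptation but stable

for n in range(2000):
    x = rng.standard_normal(3)       # input sample x(n)
    d = w_true @ x                   # desired response d(n)
    e = d - w_hat @ x                # error e(n) = d(n) - w^T(n) x(n)
    w_hat += eta * x * e             # w(n+1) = w(n) + eta * x(n) * e(n)

print(np.round(w_hat, 3))            # ~ [ 2.  -1.   0.5]
```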
diff --git a/AI/Neural Networks/SLP/Perceptron Convergence.md b/AI/Neural Networks/SLP/Perceptron Convergence.md
index 9fe78be..5a54d8b 100644
--- a/AI/Neural Networks/SLP/Perceptron Convergence.md
+++ b/AI/Neural Networks/SLP/Perceptron Convergence.md
@@ -39,4 +39,4 @@ $$
 2. Fast adaptation with respect to real changes in the underlying distribution of the process responsible for $x$
     - Large eta

-![[slp-separable.png]]
\ No newline at end of file
+![slp-separable](../../../img/slp-separable.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/SLP/SLP.md b/AI/Neural Networks/SLP/SLP.md
index d12b757..4efb7df 100644
--- a/AI/Neural Networks/SLP/SLP.md
+++ b/AI/Neural Networks/SLP/SLP.md
@@ -1,7 +1,7 @@
-![[slp-arch.png]]
+![slp-arch](../../../img/slp-arch.png)

 $$v(n)=\sum_{i=0}^{m}w_i(n)x_i(n)$$
 $$=w^T(n)x(n)$$

-![[slp-hyperplane.png]]
+![slp-hyperplane](../../../img/slp-hyperplane.png)

 Perceptron learning is performed for a finite number of iterations and then stops
 [[Least Mean Square|LMS]] is continuous learning that doesn't stop
\ No newline at end of file

diff --git a/AI/Neural Networks/Transformers/LLM.md b/AI/Neural Networks/Transformers/LLM.md
index 4e30cd8..3537fc7 100644
--- a/AI/Neural Networks/Transformers/LLM.md
+++ b/AI/Neural Networks/Transformers/LLM.md
@@ -8,9 +8,9 @@
 ## Hallucination

 # Architectures
-Mostly [[Transformers]]
+Mostly [Transformers](Transformers.md)

 ## GPT
-Generative Pre-trained [[Transformers]]
+Generative Pre-trained [Transformers](Transformers.md)

-![[llm-family-tree.png]]
\ No newline at end of file
+![llm-family-tree](../../../img/llm-family-tree.png)
\ No newline at end of file

diff --git a/AI/Neural Networks/Transformers/Transformers.md b/AI/Neural Networks/Transformers/Transformers.md
index 5c4e8ec..681016d 100644
--- a/AI/Neural Networks/Transformers/Transformers.md
+++ b/AI/Neural Networks/Transformers/Transformers.md
@@ -1,15 +1,15 @@
 - [[Attention|Self-attention]]
     - Weighting significance of parts of the input
     - Including recursive output
-- Similar to [[RNN]]s
+- Similar to [RNN](../RNN/RNN.md)s
     - Process sequential data
     - Translation & text summarisation
 - Differences
     - Process input all at once
-    - Largely replaced [[LSTM]] and gated recurrent units (GRU) which had attention mechanics
+    - Largely replaced [LSTM](../RNN/LSTM.md) and gated recurrent units (GRU) which had attention mechanisms
     - No recurrent structure

-![[transformer-arch.png]]
+![transformer-arch](../../../img/transformer-arch.png)

 ## Examples
 - BERT
@@ -34,6 +34,6 @@
 - Takes encodings and does opposite
     - Uses incorporated textual information to produce output
     - Has attention to draw information from output of previous decoders before drawing from encoders
-- Both use [[attention]]
+- Both use [Attention](Attention.md)
 - Both use [[MLP|dense]] layers for additional processing of outputs
     - Contain residual connections & layer norm steps
\ No newline at end of file
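The Transformers diff above says self-attention weights the significance of every part of the input while processing it all at once. A hedged NumPy sketch of scaled dot-product self-attention; sequence length, embedding size, and the random projection weights are all illustrative:

```python
# Scaled dot-product self-attention over a whole token sequence at once.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise token significance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum over all tokens

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))                      # 5 tokens, 16-d embeddings
out = self_attention(X, *(rng.standard_normal((16, 16)) for _ in range(3)))
print(out.shape)                                      # (5, 16)
```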
diff --git a/AI/Properties.md b/AI/Properties.md
index 212f9bc..5673b89 100644
--- a/AI/Properties.md
+++ b/AI/Properties.md
@@ -16,7 +16,7 @@ An AI system must be able to
 2. Apply knowledge to solve problems
 3. Acquire new knowledge through experience

-![[ai-nested-subjects.png]]
+![ai-nested-subjects](../img/ai-nested-subjects.png)

 # Expert Systems
 - Usually easier to obtain compiled experience from experts than duplicate experience that made them experts for network
@@ -56,4 +56,4 @@ Symbolic AI is the formal manipulation of a language of algorithms and data repr
 Neural nets bottom-up

-![[ai-io.png]]
\ No newline at end of file
+![ai-io](../img/ai-io.png)
\ No newline at end of file

diff --git a/CS/Language Binding.md b/CS/Language Binding.md
index 38c5369..e6d8f4c 100644
--- a/CS/Language Binding.md
+++ b/CS/Language Binding.md
@@ -5,18 +5,18 @@
 ### Object Models
 - COM
-    - [[C++]]
+    - [C++](Languages/C++.md)
     - Component Object Model
     - MS only cross-language model
 - CLI
-    - [[dotNet]]
+    - [dotNet](Languages/dotNet.md)
     - .NET Common Language Infrastructure
 - Freedesktop.org D-Bus
     - Open cross-platform-language model

 ### Virtual Machines
 - CLR
-    - [[dotNet]]
+    - [dotNet](Languages/dotNet.md)
     - .NET Common Language Runtime
 - Mono
     - CLI languages

diff --git a/Light.md b/Light.md
index 1321a3f..9561c36 100644
--- a/Light.md
+++ b/Light.md
@@ -9,9 +9,9 @@ $$E=hf$$
 2. There is a minimum frequency $\nu_0$ below which there is no emission
 3. No time delay (less than 1 ns) before the onset of emission, but the rate of electrons depends on the intensity

-![[photo-electric.png]]
-![[fermi-vacuum-level.png]]
+![photo-electric](img/photo-electric.png)
+![fermi-vacuum-level](img/fermi-vacuum-level.png)

-![[em-spectrum.png]]
+![em-spectrum](img/em-spectrum.png)
 - Radio spectrum
     - 30 Hz – 300 GHz
\ No newline at end of file
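The Light diff above states $E=hf$ and a minimum frequency below which there is no photoemission. A short worked example; the 2.3 eV work function is an assumed value, roughly that of sodium:

```python
# Photoelectric threshold frequency from E = hf and an assumed work function.
h = 6.626e-34          # Planck constant, J s
eV = 1.602e-19         # joules per electronvolt
phi = 2.3 * eV         # assumed work function (about sodium's)
f0 = phi / h           # minimum frequency: below this, no emission
print(f"{f0:.2e} Hz")  # ~ 5.56e+14 Hz, i.e. visible green-blue light
```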
diff --git a/Maths/Tensor.md b/Maths/Tensor.md
index 61d5e54..55ea0ea 100644
--- a/Maths/Tensor.md
+++ b/Maths/Tensor.md
@@ -11,7 +11,7 @@
     - Cube matrix

 Matrices are not inherently rank-2 tensors. Matrices are just the formatting structure. The tensor described by the matrix must follow the transformation rules to be a tensor

-![[tensor.png]]
+![tensor](../img/tensor.png)

 # Transformation Rules
 1. Transforms like a tensor

diff --git a/Quantum/Orbitals.md b/Quantum/Orbitals.md
index 4dde64e..0469ecc 100644
--- a/Quantum/Orbitals.md
+++ b/Quantum/Orbitals.md
@@ -1,4 +1,4 @@
-[[Wave Function]]
+[Wave Function](Wave%20Function.md)

 ## Quantum Numbers
 $$n$$
@@ -17,7 +17,7 @@ Z-component / Magnetic of $l$
 - $-l$ to $+l$
 - ***Orientation*** of orbital

-![[wave-function-polar-segment.png]]
+![wave-function-polar-segment](../img/wave-function-polar-segment.png)

 ## Filling
@@ -30,11 +30,11 @@ Z-component / Magnetic of $l$
 - Orbitals with same energy filled one at a time
     - Degenerate

-![[orbitals-radius.png]]
-![[wave-function-nodes.png]]
+![orbitals-radius](../img/orbitals-radius.png)
+![wave-function-nodes](../img/wave-function-nodes.png)

 ## Radial
-![[radial-equations.png]]
+![radial-equations](../img/radial-equations.png)

 - Z = Atomic number
 - Bohr radius
@@ -42,4 +42,4 @@
 - Normalisation
     - $\int_0^\infty r^2R_{nl}^*R_{nl}dr=1$

-![[radius-electron-density-wf.png]]
\ No newline at end of file
+![radius-electron-density-wf](../img/radius-electron-density-wf.png)
\ No newline at end of file

diff --git a/Quantum/Schrödinger.md b/Quantum/Schrödinger.md
index 5293df3..6877102 100644
--- a/Quantum/Schrödinger.md
+++ b/Quantum/Schrödinger.md
@@ -1,6 +1,6 @@
 $$-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi$$
 - Time Independent
-- $\psi$ is the [[Wave Function]]
+- $\psi$ is the [Wave Function](Wave%20Function.md)

 Quantum counterpart of Newton's second law in classical mechanics

diff --git a/Quantum/Standard Model.md b/Quantum/Standard Model.md
index 5728e8e..50fc380 100644
--- a/Quantum/Standard Model.md
+++ b/Quantum/Standard Model.md
@@ -1,4 +1,4 @@
-![[model-table.png]]
+![model-table](../img/model-table.png)
 - 4 fundamental forces
 - Bosons
     - Elementary particles
@@ -22,5 +22,5 @@
 - Force carriers
     - γ, W, Z, g

-![[boson-interactions-feynman.png]]
-![[boson-interactions.png]]
\ No newline at end of file
+![boson-interactions-feynman](../img/boson-interactions-feynman.png)
+![boson-interactions](../img/boson-interactions.png)
\ No newline at end of file

diff --git a/Quantum/Wave Function.md b/Quantum/Wave Function.md
index b5bd215..6796902 100644
--- a/Quantum/Wave Function.md
+++ b/Quantum/Wave Function.md
@@ -5,17 +5,17 @@ Radial Function
 Spherical Harmonic
 - $Y_{lm}(\theta, \phi)$

-Forms [[Orbitals]]
+Forms [Orbitals](Orbitals.md)

 Absolute value of wave function squared gives probability density of finding electron inside differential volume $dV$ centred on $r, \theta, \phi$
 $$|\psi(r,\theta,\phi)|^2$$

-![[wave-function-polar.png]]
+![wave-function-polar](../img/wave-function-polar.png)

-![[hydrogen-wave-function.png]]
-![[wave-function-polar-segment.png]]
-![[wave-function-nodes.png]]
+![hydrogen-wave-function](../img/hydrogen-wave-function.png)
+![wave-function-polar-segment](../img/wave-function-polar-segment.png)
+![wave-function-nodes](../img/wave-function-nodes.png)

-![[hydrogen-electron-density.png]]
+![hydrogen-electron-density](../img/hydrogen-electron-density.png)

-![[radius-electron-density-wf.png]]
\ No newline at end of file
+![radius-electron-density-wf](../img/radius-electron-density-wf.png)
\ No newline at end of file
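The Orbitals diff above states the radial normalisation $\int_0^\infty r^2R_{nl}^*R_{nl}dr=1$. A numeric sanity check for the hydrogen 1s state, whose radial function is $R_{10}=2a^{-3/2}e^{-r/a}$, using atomic units where the Bohr radius $a=1$:

```python
# Check that the hydrogen 1s radial function is normalised: int r^2 R*R dr = 1.
import numpy as np

a = 1.0                                # Bohr radius in atomic units
r = np.linspace(0.0, 40.0 * a, 100_000)
R10 = 2.0 * a**-1.5 * np.exp(-r / a)   # Z = 1 ground-state radial function
print(np.trapz(r**2 * R10**2, r))      # ~ 1.0
```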