From 7bc4dffd8be54d56fbd7b155291df09ee94814f8 Mon Sep 17 00:00:00 2001
From: andy
Date: Tue, 6 Jun 2023 11:48:49 +0100
Subject: [PATCH] vault backup: 2023-06-06 11:48:49

Affected files:
STEM/AI/Neural Networks/CNN/Examples.md
STEM/AI/Neural Networks/CNN/FCN/FCN.md
STEM/AI/Neural Networks/CNN/FCN/ResNet.md
STEM/AI/Neural Networks/CNN/FCN/Skip Connections.md
STEM/AI/Neural Networks/CNN/GAN/DC-GAN.md
STEM/AI/Neural Networks/CNN/GAN/GAN.md
STEM/AI/Neural Networks/CNN/Interpretation.md
STEM/AI/Neural Networks/CNN/UpConv.md
STEM/AI/Neural Networks/Deep Learning.md
STEM/AI/Neural Networks/MLP/MLP.md
STEM/AI/Neural Networks/Properties+Capabilities.md
STEM/AI/Neural Networks/SLP/Least Mean Square.md
STEM/AI/Neural Networks/SLP/SLP.md
STEM/AI/Neural Networks/Transformers/Transformers.md
STEM/AI/Properties.md
STEM/CS/Language Binding.md
STEM/CS/Languages/dotNet.md
STEM/Signal Proc/Image/Image Processing.md
---
 AI/Neural Networks/CNN/Examples.md              |  6 +++---
 AI/Neural Networks/CNN/FCN/FCN.md               |  6 +++---
 AI/Neural Networks/CNN/FCN/ResNet.md            |  8 ++++----
 AI/Neural Networks/CNN/FCN/Skip Connections.md  |  2 +-
 AI/Neural Networks/CNN/GAN/DC-GAN.md            |  6 +++---
 AI/Neural Networks/CNN/GAN/GAN.md               |  6 +++---
 AI/Neural Networks/CNN/Interpretation.md        |  8 ++++----
 AI/Neural Networks/CNN/UpConv.md                |  2 +-
 AI/Neural Networks/Deep Learning.md             |  2 +-
 AI/Neural Networks/MLP/MLP.md                   | 10 +++++-----
 AI/Neural Networks/Properties+Capabilities.md   |  2 +-
 AI/Neural Networks/SLP/Least Mean Square.md     |  2 +-
 AI/Neural Networks/SLP/SLP.md                   |  2 +-
 AI/Neural Networks/Transformers/Transformers.md |  4 ++--
 AI/Properties.md                                |  8 ++++----
 CS/Language Binding.md                          |  2 +-
 CS/Languages/dotNet.md                          |  2 +-
 Signal Proc/Image/Image Processing.md           |  2 +-
 18 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/AI/Neural Networks/CNN/Examples.md b/AI/Neural Networks/CNN/Examples.md
index ff331fb..f61211b 100644
--- a/AI/Neural Networks/CNN/Examples.md
+++ b/AI/Neural Networks/CNN/Examples.md
@@ -8,7 +8,7 @@
 # AlexNet
 2012
 
-- [[Activation Functions#ReLu|ReLu]]
+- [ReLu](../Activation%20Functions.md#ReLu)
 - Normalisation
 
 ![alexnet](../../../img/alexnet.png)
@@ -29,13 +29,13 @@
 2015
 - [Inception Layer](Inception%20Layer.md)s
-- Multiple [[Deep Learning#Loss Function|Loss]] Functions
+- Multiple [Loss](../Deep%20Learning.md#Loss%20Function) Functions
 
 ![googlenet](../../../img/googlenet.png)
 
 ## [Inception Layer](Inception%20Layer.md)
 ![googlenet-inception](../../../img/googlenet-inception.png)
 
-## Auxiliary [[Deep Learning#Loss Function|Loss]] Functions
+## Auxiliary [Loss](../Deep%20Learning.md#Loss%20Function) Functions
 - Two other SoftMax blocks
 - Help train really deep network
     - Vanishing gradient problem
diff --git a/AI/Neural Networks/CNN/FCN/FCN.md b/AI/Neural Networks/CNN/FCN/FCN.md
index 7c18751..083ecc8 100644
--- a/AI/Neural Networks/CNN/FCN/FCN.md
+++ b/AI/Neural Networks/CNN/FCN/FCN.md
@@ -1,6 +1,6 @@
 Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al Network
 
-[[Convolutional Layer|Convolutional]] and [[UpConv|up-convolutional layers]] with [[Activation Functions#ReLu|ReLu]] but no others (pooling)
+[Convolutional](../Convolutional%20Layer.md) and [up-convolutional layers](../UpConv.md) with [ReLu](../../Activation%20Functions.md#ReLu) but no other layer types (no pooling)
 - All some sort of Encoder-Decoder
 
 Contractive → [UpConv](../UpConv.md)
@@ -19,13 +19,13 @@ Contractive → [UpConv](../UpConv.md)
 - Rarely from scratch
     - Pre-trained weights
 - Replace final layers
-    - [[MLP|FC]] layers
+    - [FC](../../MLP/MLP.md) layers
     - White-noise initialised
 - Add [UpConv](../UpConv.md) layer(s)
 - Fine-tune train
     - Freeze others
     - Annotated GT images
-- Can use summed per-pixel log [[Deep Learning#Loss Function|loss]]
+- Can use summed per-pixel log [loss](../../Deep%20Learning.md#Loss%20Function)
 
 # Evaluation
 ![fcn-eval](../../../../img/fcn-eval.png)
diff --git a/AI/Neural Networks/CNN/FCN/ResNet.md b/AI/Neural Networks/CNN/FCN/ResNet.md
index 1b90d0c..be364fa 100644
--- a/AI/Neural Networks/CNN/FCN/ResNet.md
+++ b/AI/Neural Networks/CNN/FCN/ResNet.md
@@ -12,18 +12,18 @@
 
 # Design
-- Skips across pairs of [[Convolutional Layer|conv layers]]
+- Skips across pairs of [conv layers](../Convolutional%20Layer.md)
 - Elementwise addition
 - All layers use 3x3 kernels
 - Spatial size halves each layer
 - Filter count doubles each layer
-- [[FCN|Fully convolutional]]
+- [Fully convolutional](FCN.md)
     - No fc layer
-    - No [[Max Pooling|pooling]]
+    - No [pooling](../Max%20Pooling.md)
        - Except at end
     - No dropout
 
-[[Datasets#ImageNet|ImageNet]] Error:
+[ImageNet](../../CV/Datasets.md#ImageNet) Error:
 
 ![imagenet-error](../../../../img/imagenet-error.png)
 
 ![resnet-arch](../../../../img/resnet-arch.png)
diff --git a/AI/Neural Networks/CNN/FCN/Skip Connections.md b/AI/Neural Networks/CNN/FCN/Skip Connections.md
index 40c1654..ce5da51 100644
--- a/AI/Neural Networks/CNN/FCN/Skip Connections.md
+++ b/AI/Neural Networks/CNN/FCN/Skip Connections.md
@@ -1,4 +1,4 @@
-- Output of [[Convolutional Layer|conv]], c, layers are added to inputs of [UpConv](../UpConv.md), d, layers
+- Output of [conv](../Convolutional%20Layer.md), c, layers are added to inputs of [UpConv](../UpConv.md), d, layers
     - Element-wise, not channel appending
 - Propagate high frequency information to later layers
 - Two types
diff --git a/AI/Neural Networks/CNN/GAN/DC-GAN.md b/AI/Neural Networks/CNN/GAN/DC-GAN.md
index b39c54e..a87152e 100644
--- a/AI/Neural Networks/CNN/GAN/DC-GAN.md
+++ b/AI/Neural Networks/CNN/GAN/DC-GAN.md
@@ -13,10 +13,10 @@ Deep [Convolutional](../../../../Signal%20Proc/Convolution.md) [GAN](GAN.md)
 - Discriminator
     - Contractive
     - Cross-entropy [loss](../../Deep%20Learning.md#Loss%20Function)
-    - [Conv](../Convolutional%20Layer.md) and leaky [[Activation Functions#ReLu|ReLu]] layers only
-    - Normalised output via [[Activation Functions#Sigmoid|sigmoid]]
+    - [Conv](../Convolutional%20Layer.md) and leaky [ReLu](../../Activation%20Functions.md#ReLu) layers only
+    - Normalised output via [sigmoid](../../Activation%20Functions.md#Sigmoid)
 
-## [[Deep Learning#Loss Function|Loss]]
+## [Loss](../../Deep%20Learning.md#Loss%20Function)
 $$D(S,L)=-\sum_i L_i \log(S_i)$$
 - $S$
     - $(0.1, 0.9)^T$
diff --git a/AI/Neural Networks/CNN/GAN/GAN.md b/AI/Neural Networks/CNN/GAN/GAN.md
index fce9554..622fd0d 100644
--- a/AI/Neural Networks/CNN/GAN/GAN.md
+++ b/AI/Neural Networks/CNN/GAN/GAN.md
@@ -1,12 +1,12 @@
 # Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al
 - Remove [Max Pooling](../Max%20Pooling.md)
     - Use strided [UpConv](../UpConv.md)
-- Remove [[MLP|FC]] layers
+- Remove [FC](../../MLP/MLP.md) layers
     - Hurts convergence in non-classification
 - Normalisation tricks
     - Batch normalisation
     - Batches of 0 mean and variance 1
-    - Leaky [[Activation Functions#ReLu|ReLu]]
+    - Leaky [ReLu](../../Activation%20Functions.md#ReLu)
 
 # Stages
 ## Generator, G
@@ -27,5 +27,5 @@
 # Code Vector Math for Control
 ![cvmfc](../../../../img/cvmfc.png)
 
-- Do [[Interpretation#Activation Maximisation|AM]] to derive code for an image
+- Do [AM](../Interpretation.md#Activation%20Maximisation) to derive code for an image
 ![code-vector-math-for-control-results](../../../../img/code-vector-math-for-control-results.png)
\ No newline at end of file
diff --git a/AI/Neural Networks/CNN/Interpretation.md b/AI/Neural Networks/CNN/Interpretation.md
index 62b6994..5be7f8c 100644
--- a/AI/Neural Networks/CNN/Interpretation.md
+++ b/AI/Neural Networks/CNN/Interpretation.md
@@ -1,13 +1,13 @@
 # Activation Maximisation
 - Synthesise an ideal image for a class
     - Maximise 1-hot output
-    - Maximise [[Activation Functions#SoftMax|SoftMax]]
+    - Maximise [SoftMax](../Activation%20Functions.md#SoftMax)
 
 ![am](../../../img/am.png)
 
 - **Use trained network**
     - Don't update weights
-- [[Architectures|Feedforward]] noise
-    - [[Back-Propagation|Back-propagate]] [[Deep Learning#Loss Function|loss]]
+- [Feedforward](../Architectures.md) noise
+    - [Back-propagate](../MLP/Back-Propagation.md) [loss](../Deep%20Learning.md#Loss%20Function)
     - Don't update weights
     - Update image
@@ -17,7 +17,7 @@
 - Prone to high frequency noise
 - Minimise
     - Total variation
-    - $x^*$ is the best solution to minimise [[Deep Learning#Loss Function|loss]]
+    - $x^*$ is the best solution to minimise [loss](../Deep%20Learning.md#Loss%20Function)
 $$x^*=\text{argmin}_{x\in \mathbb{R}^{H\times W\times C}}\ell(\phi(x),\phi_0)$$
 
 - Won't work
diff --git a/AI/Neural Networks/CNN/UpConv.md b/AI/Neural Networks/CNN/UpConv.md
index 9a18707..b812975 100644
--- a/AI/Neural Networks/CNN/UpConv.md
+++ b/AI/Neural Networks/CNN/UpConv.md
@@ -9,7 +9,7 @@
 - Could specify kernel
     - Or learn
 - Can have multiple upconv layers
-    - Separated by [[Activation Functions#ReLu|ReLu]]
+    - Separated by [ReLu](../Activation%20Functions.md#ReLu)
        - For non-linear up-sampling conv
     - Interpolation is linear
 
diff --git a/AI/Neural Networks/Deep Learning.md b/AI/Neural Networks/Deep Learning.md
index 98ceb8e..b5d6ca1 100644
--- a/AI/Neural Networks/Deep Learning.md
+++ b/AI/Neural Networks/Deep Learning.md
@@ -8,7 +8,7 @@ Objective Function
 ![deep-loss-function](../../img/deep-loss-function.png)
 
 - Test accuracy worse than train accuracy = overfitting
-- [[MLP|Dense]] = [[MLP|fully connected]]
+- [Dense](MLP/MLP.md) = [fully connected](MLP/MLP.md)
 - Automates feature engineering
 
 ![ml-dl](../../img/ml-dl.png)
diff --git a/AI/Neural Networks/MLP/MLP.md b/AI/Neural Networks/MLP/MLP.md
index 6a9edf7..c918aa7 100644
--- a/AI/Neural Networks/MLP/MLP.md
+++ b/AI/Neural Networks/MLP/MLP.md
@@ -1,15 +1,15 @@
-- [[Architectures|Feedforward]]
+- [Feedforward](../Architectures.md)
 - Single hidden layer can learn any function
     - Universal approximation theorem
 - Each hidden layer can operate as a different feature extraction layer
-- Lots of [[Weight Init|weights]] to learn
+- Lots of [weights](../Weight%20Init.md) to learn
 - [Back-Propagation](Back-Propagation.md) is supervised
 
 ![mlp-arch](../../../img/mlp-arch.png)
 
 # Universal Approximation Theorem
-A finite [[Architectures|feedforward]] MLP with 1 hidden layer can in theory approximate any mathematical function
-- In practice not trainable with [[Back-Propagation|BP]]
+A finite [feedforward](../Architectures.md) MLP with 1 hidden layer can in theory approximate any mathematical function
+- In practice not trainable with [BP](Back-Propagation.md)
 
 ![activation-function](../../../img/activation-function.png)
 ![mlp-arch-diagram](../../../img/mlp-arch-diagram.png)
@@ -19,4 +19,4 @@ A finite [[Architectures|feedforward]] MLP with 1 hidden layer can in theory app
 ![tlu](../../../img/tlu.png)
 - $o_1$ to $o_4$ must all be one to overcome -3.5 bias and force output to 1
 ![mlp-non-linear-decision](../../../img/mlp-non-linear-decision.png)
-- Can generate a non-linear [[Decision Boundary|decision boundary]]
\ No newline at end of file
+- Can generate a non-linear [decision boundary](Decision%20Boundary.md)
\ No newline at end of file
diff --git a/AI/Neural Networks/Properties+Capabilities.md b/AI/Neural Networks/Properties+Capabilities.md
index 49f8835..3eae1b9 100644
--- a/AI/Neural Networks/Properties+Capabilities.md
+++ b/AI/Neural Networks/Properties+Capabilities.md
@@ -45,7 +45,7 @@
     - Confidence value
 
 # Contextual Information
-- [[Neural Networks#Knowledge|Knowledge]] represented by structure and activation weight
+- [Knowledge](Neural%20Networks.md#Knowledge) represented by structure and activation weight
 - Any neuron can be affected by global activity
 - Contextual information handled naturally
 
diff --git a/AI/Neural Networks/SLP/Least Mean Square.md b/AI/Neural Networks/SLP/Least Mean Square.md
index 05ab0ea..5a9b774 100644
--- a/AI/Neural Networks/SLP/Least Mean Square.md
+++ b/AI/Neural Networks/SLP/Least Mean Square.md
@@ -20,7 +20,7 @@ $$\frac{\partial \mathfrak{E}(w)}{\partial w(n)}=-x(n)\cdot e(n)$$
 $$\hat{g}(n)=-x(n)\cdot e(n)$$
 $$\hat{w}(n+1)=\hat{w}(n)+\eta \cdot x(n) \cdot e(n)$$
 
-- Above is a [[Architectures|feedforward]] loop around weight vector, $\hat{w}$
+- Above is a [feedback](../Architectures.md) loop around weight vector, $\hat{w}$
     - Behaves like low-pass filter
         - Pass low frequency components of error signal
     - Average time constant of filtering action inversely proportional to learning-rate
diff --git a/AI/Neural Networks/SLP/SLP.md b/AI/Neural Networks/SLP/SLP.md
index 4efb7df..5dbdbba 100644
--- a/AI/Neural Networks/SLP/SLP.md
+++ b/AI/Neural Networks/SLP/SLP.md
@@ -4,4 +4,4 @@ $$=w^T(n)x(n)$$
 ![slp-hyperplane](../../../img/slp-hyperplane.png)
 
 Perceptron learning is performed for a finite number of iterations and then stops
-[[Least Mean Square|LMS]] is continuous learning that doesn't stop
\ No newline at end of file
+[LMS](Least%20Mean%20Square.md) is continuous learning that doesn't stop
\ No newline at end of file
diff --git a/AI/Neural Networks/Transformers/Transformers.md b/AI/Neural Networks/Transformers/Transformers.md
index 681016d..89c29d1 100644
--- a/AI/Neural Networks/Transformers/Transformers.md
+++ b/AI/Neural Networks/Transformers/Transformers.md
@@ -1,4 +1,4 @@
-- [[Attention|Self-attention]]
+- [Self-attention](Attention.md)
     - Weighting significance of parts of the input
     - Including recursive output
 - Similar to [RNN](../RNN/RNN.md)s
@@ -35,5 +35,5 @@
     - Uses incorporated textual information to produce output
     - Has attention to draw information from output of previous decoders before drawing from encoders
 - Both use [Attention](Attention.md)
-- Both use [[MLP|dense]] layers for additional processing of outputs
+- Both use [dense](../MLP/MLP.md) layers for additional processing of outputs
     - Contain residual connections & layer norm steps
\ No newline at end of file
diff --git a/AI/Properties.md b/AI/Properties.md
index 5673b89..1a3ec75 100644
--- a/AI/Properties.md
+++ b/AI/Properties.md
@@ -1,7 +1,7 @@
 # Three Key Components
 1. Representation
-    - Declarative & Procedural [[Neural Networks#Knowledge|knowledge]]
+    - Declarative & Procedural [knowledge](Neural%20Networks/Neural%20Networks.md#Knowledge)
     - Typically human-readable symbols
 2. Reasoning
     - Ability to solve problems
 3. Learning
@@ -36,13 +36,13 @@ Explanation-based learning uses both
 ## Level of Explanation
 - Classical has emphasis on building symbolic representations
     - Models cognition as sequential processing of symbolic representations
-- [[Properties+Capabilities|Neural nets]] emphasis on parallel distributed processing models
+- [Neural nets](Neural%20Networks/Properties+Capabilities.md) place emphasis on parallel distributed processing models
     - Models assume information processing takes place through interactions of large numbers of neurons
 
 ## Processing style
 - Classical processing is sequential
     - Von Neumann Machine
-- [[Properties+Capabilities|Neural nets]] use parallelism everywhere
+- [Neural nets](Neural%20Networks/Properties+Capabilities.md) use parallelism everywhere
     - Source of flexibility
     - Robust
 
@@ -50,7 +50,7 @@
 - Classical emphasises language of thought
     - Symbolic representation has quasi-linguistic structure
         - New symbols created from compositionality
-- [[Properties+Capabilities|Neural nets]] have problem describing nature and structure of representation
+- [Neural nets](Neural%20Networks/Properties+Capabilities.md) have problems describing the nature and structure of representation
 
 Symbolic AI is the formal manipulation of a language of algorithms and data representations in a top-down fashion
 
diff --git a/CS/Language Binding.md b/CS/Language Binding.md
index e6d8f4c..6895cea 100644
--- a/CS/Language Binding.md
+++ b/CS/Language Binding.md
@@ -24,5 +24,5 @@
     - Adobe Flash Player
         - Tamarin
     - JVM
-    - [[Compilers#LLVM|LLVM]]
+    - [LLVM](Compilers.md#LLVM)
     - Silverlight
\ No newline at end of file
diff --git a/CS/Languages/dotNet.md b/CS/Languages/dotNet.md
index ef5c02a..bfe54ea 100644
--- a/CS/Languages/dotNet.md
+++ b/CS/Languages/dotNet.md
@@ -10,7 +10,7 @@
 - JIT managed code into machine instructions
 - Execution engine
     - VM
-        - [[Language Binding#Virtual Machines]]
+        - [Language Binding](../Language%20Binding.md#Virtual%20Machines)
 - Services
     - Memory management
     - Type safety
diff --git a/Signal Proc/Image/Image Processing.md b/Signal Proc/Image/Image Processing.md
index f35607d..cb015ff 100644
--- a/Signal Proc/Image/Image Processing.md
+++ b/Signal Proc/Image/Image Processing.md
@@ -1 +1 @@
-[[Convolution#Discrete]]
\ No newline at end of file
+[Convolution](../Convolution.md#Discrete)
\ No newline at end of file
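
Outside the patch proper: the hunks above carry two formulas through unchanged, the discriminator cross-entropy from DC-GAN.md, $D(S,L)=-\sum_i L_i \log(S_i)$, and the LMS weight update from Least Mean Square.md, $\hat{w}(n+1)=\hat{w}(n)+\eta \cdot x(n) \cdot e(n)$. Below is a minimal NumPy sketch of both, not part of the commit, assuming $L$ is a one-hot label vector; the function names, the desired-response vector `d`, and the `eta` default are invented for illustration.

```python
import numpy as np

def cross_entropy(S, L):
    """DC-GAN.md discriminator loss: D(S, L) = -sum_i L_i * log(S_i)."""
    return -np.sum(L * np.log(S))

def lms(X, d, eta=0.01, epochs=1):
    """LMS rule from Least Mean Square.md.

    Per sample n: error e(n) = d(n) - w(n)^T x(n),
    update w(n+1) = w(n) + eta * x(n) * e(n)
    (i.e. gradient estimate g(n) = -x(n) * e(n)).
    X: (N, features) inputs; d: (N,) desired responses.
    """
    w = np.zeros(X.shape[1])            # weight vector w(0)
    for _ in range(epochs):
        for x_n, d_n in zip(X, d):
            e_n = d_n - w @ x_n         # instantaneous error e(n)
            w = w + eta * x_n * e_n     # LMS weight update
    return w

# S = (0.1, 0.9)^T from DC-GAN.md with an assumed one-hot L = (0, 1)^T
# gives -log(0.9) ≈ 0.105
print(cross_entropy(np.array([0.1, 0.9]), np.array([0.0, 1.0])))
```

A small `eta` makes the update average the error over more samples, which is the low-pass-filter behaviour the LMS note describes: the time constant of the filtering action is inversely proportional to the learning rate.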