vault backup: 2023-12-22 16:39:03
Affected files: .obsidian/community-plugins.json .obsidian/graph.json .obsidian/plugins/calendar/data.json .obsidian/plugins/calendar/main.js .obsidian/plugins/calendar/manifest.json .obsidian/plugins/dataview/main.js .obsidian/plugins/dataview/manifest.json .obsidian/plugins/dataview/styles.css .obsidian/workspace.json Events/Cardiff.md Events/November 27th Week.md Events/🪣🪣🪣.md Food/From Aldi.md Food/Meal Plans/Meals - 2023-06-18.md Food/Meal Plans/Meals - 2023-06-24.md Food/Meal Plans/Meals - 2023-07-30.md Food/Meal Plans/Meals - 2023-08-06.md Food/Meal Plans/Meals - 2023-08-13.md Food/Meal Plans/Meals - 2023-08-20.md Food/Meal Plans/Meals - 2023-08-27.md Food/Meal Plans/Meals - 2023-09-03.md Food/Meal Plans/Meals - 2023-09-10.md Food/Meal Plans/Meals - 2023-09-17.md Food/Meal Plans/Meals - 2023-09-25.md Food/Meal Plans/Meals - 2023-10-02.md Food/Meal Plans/Meals - 2023-10-14.md Food/Meal Plans/Meals - 2023-10-22.md Food/Meal Plans/Meals - 2023-10-30.md Food/Meal Plans/Meals - 2023-11-05.md Food/Meal Plans/Meals - 2023-11-14.md Food/Meal Plans/Meals - 2023-11-20.md Food/Meal Plans/Meals - 2023-12-03.md Food/Meal Plans/Meals - 2023-12-11.md Food/Meal Plans/Meals - 2023-12-16.md Food/Meals.md Food/Sauces.md Lab/DNS.md Lab/Deleted Packages.md Lab/Domains.md Lab/Ebook Laundering.md Lab/Home.md Lab/Linux/Alpine.md Lab/Linux/KDE.md Lab/Photo Migration.md Lab/VPN Servers.md Languages/Arabic.md Languages/Spanish/Spanish.md Languages/Spanish/Tenses.md Languages/Spanish/Verbs.md Money/Me/Accounts.md Money/Me/Car.md Money/Me/Home.md Money/Me/Income.md Money/Me/Monthly/23-04.md Money/Me/Monthly/23-05.md Money/Me/Monthly/23-06.md Money/Me/Monthly/23-07.md Money/Me/Monthly/23-08.md Money/Me/Monthly/23-09.md Money/Me/Monthly/23-10.md Money/Me/Monthly/23-11.md Money/Me/Monthly/23-12.md Money/Me/Subs.md STEM/AI/Classification/Classification.md STEM/AI/Classification/Decision Trees.md STEM/AI/Classification/Gradient Boosting Machine.md STEM/AI/Classification/Logistic Regression.md STEM/AI/Classification/Random Forest.md STEM/AI/Classification/Supervised/SVM.md STEM/AI/Classification/Supervised/Supervised.md STEM/AI/Ethics.md STEM/AI/Kalman Filter.md STEM/AI/Learning.md STEM/AI/Literature.md STEM/AI/Neural Networks/Activation Functions.md STEM/AI/Neural Networks/Architectures.md STEM/AI/Neural Networks/CNN/CNN.md STEM/AI/Neural Networks/CNN/Convolutional Layer.md STEM/AI/Neural Networks/CNN/Examples.md STEM/AI/Neural Networks/CNN/FCN/FCN.md STEM/AI/Neural Networks/CNN/FCN/FlowNet.md STEM/AI/Neural Networks/CNN/FCN/Highway Networks.md STEM/AI/Neural Networks/CNN/FCN/ResNet.md STEM/AI/Neural Networks/CNN/FCN/Skip Connections.md STEM/AI/Neural Networks/CNN/FCN/Super-Resolution.md STEM/AI/Neural Networks/CNN/GAN/CycleGAN.md STEM/AI/Neural Networks/CNN/GAN/DC-GAN.md STEM/AI/Neural Networks/CNN/GAN/GAN.md STEM/AI/Neural Networks/CNN/GAN/StackGAN.md STEM/AI/Neural Networks/CNN/GAN/cGAN.md STEM/AI/Neural Networks/CNN/Inception Layer.md STEM/AI/Neural Networks/CNN/Interpretation.md STEM/AI/Neural Networks/CNN/Max Pooling.md STEM/AI/Neural Networks/CNN/Normalisation.md STEM/AI/Neural Networks/CNN/UpConv.md STEM/AI/Neural Networks/CV/Data Manipulations.md STEM/AI/Neural Networks/CV/Datasets.md STEM/AI/Neural Networks/CV/Filters.md STEM/AI/Neural Networks/CV/Layer Structure.md STEM/AI/Neural Networks/CV/Visual Search/Visual Search.md STEM/AI/Neural Networks/Deep Learning.md STEM/AI/Neural Networks/Learning/Boltzmann.md STEM/AI/Neural Networks/Learning/Competitive Learning.md STEM/AI/Neural 
Networks/Learning/Credit-Assignment Problem.md STEM/AI/Neural Networks/Learning/Hebbian.md STEM/AI/Neural Networks/Learning/Learning.md STEM/AI/Neural Networks/Learning/Tasks.md STEM/AI/Neural Networks/MLP/Back-Propagation.md STEM/AI/Neural Networks/MLP/Decision Boundary.md STEM/AI/Neural Networks/MLP/MLP.md STEM/AI/Neural Networks/Neural Networks.md STEM/AI/Neural Networks/Properties+Capabilities.md STEM/AI/Neural Networks/RNN/Autoencoder.md STEM/AI/Neural Networks/RNN/Deep Image Prior.md STEM/AI/Neural Networks/RNN/LSTM.md STEM/AI/Neural Networks/RNN/MoCo.md STEM/AI/Neural Networks/RNN/RNN.md STEM/AI/Neural Networks/RNN/Representation Learning.md STEM/AI/Neural Networks/RNN/SimCLR.md STEM/AI/Neural Networks/RNN/VQA.md STEM/AI/Neural Networks/SLP/Least Mean Square.md STEM/AI/Neural Networks/SLP/Perceptron Convergence.md STEM/AI/Neural Networks/SLP/SLP.md STEM/AI/Neural Networks/Training.md STEM/AI/Neural Networks/Transformers/Attention.md STEM/AI/Neural Networks/Transformers/LLM.md STEM/AI/Neural Networks/Transformers/Transformers.md STEM/AI/Neural Networks/Weight Init.md STEM/AI/Pattern Matching/Dynamic Time Warping.md STEM/AI/Pattern Matching/Markov/Markov.md STEM/AI/Pattern Matching/Pattern Matching.md STEM/AI/Problem Solving.md STEM/AI/Properties.md STEM/AI/Searching/Informed.md STEM/AI/Searching/Searching.md STEM/AI/Searching/Uninformed.md STEM/CS/ABI.md STEM/CS/Calling Conventions.md STEM/CS/ISA.md STEM/CS/Languages/Assembly.md STEM/CS/Languages/Javascript.md STEM/CS/Languages/Python.md STEM/CS/Languages/Rust.md STEM/CS/Quantum.md STEM/CS/Resources.md STEM/IOT/Networking/Networking.md STEM/Light.md STEM/Quantum/Confinement.md STEM/Quantum/Orbitals.md STEM/Quantum/Schrödinger.md STEM/Quantum/Standard Model.md STEM/Quantum/Wave Function.md STEM/Speech/Linguistics/Consonants.md STEM/Speech/Linguistics/Language Structure.md STEM/Speech/Linguistics/Linguistics.md STEM/Speech/Linguistics/Terms.md STEM/Speech/Linguistics/Vowels.md STEM/Speech/Literature.md STEM/Speech/NLP/NLP.md STEM/Speech/NLP/Recognition.md STEM/Speech/Speech Processing/Applications.md Work/Possible Tasks.md Work/Tech.md
This commit is contained in:
parent 3fc7fa863e
commit abbd7bba68
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 *Given an observation, determine one class from a set of classes that best explains the observation*
 
 ***Features are discrete or continuous***
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Flowchart like design
 - Iterative decision making
 
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Higher level take
 - Iteratively train more models addressing weak points
 - Well paired with decision trees
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 “hello world”
 Related to naïve bayes
 
@@ -1 +1,4 @@
+---
+tags:
+---
 “Almost always the second best algorithm for any shallow ML task”
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 [Towards Data Science: SVM](https://towardsdatascience.com/support-vector-machines-svm-c9ef22815589)
 [Towards Data Science: SVM an overview](https://towardsdatascience.com/https-medium-com-pupalerushikesh-svm-f4b42800e989)
 
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 
 # Gaussian Classifier
 - With $T$ labelled data
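The Gaussian classifier note above (fit per-class Gaussians to the $T$ labelled points, then pick the most likely class) can be sketched in a few lines of NumPy. Illustrative only, not part of the vault diff; all names are made up.

```python
import numpy as np

def fit_gaussians(X, y):
    """Fit a mean, diagonal variance and prior per class from labelled data."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict(X, params):
    """Pick the class with the highest Gaussian log-likelihood plus log prior."""
    classes = list(params.keys())
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(log_lik + np.log(prior))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 3])
y = np.array([0] * 50 + [1] * 50)
print(predict(X[:5], fit_gaussians(X, y)))
```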
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Fair
 - Democracy
 - Board-level
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Measure
 - Predict
 - Update
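The Measure/Predict/Update bullets above are the Kalman filter loop. A minimal 1-D sketch, assuming a constant-position model and made-up noise values; illustrative, not part of the vault diff.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter: predict with a constant-position model, update with each measurement."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by process noise q
        p = p + q
        # Update: blend prediction with measurement z (noise variance r)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = 5 + np.random.randn(20) * 0.7   # noisy readings of a constant value 5
print(kalman_1d(noisy)[-1])
```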
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Supervised
 - Dataset with inputs manually annotated for desired output
 - Desired output = supervisory signal
@@ -1,4 +1,8 @@
-#lit
+---
+tags:
+- lit
+- ai
+---
 [https://web.stanford.edu/~jurafsky/slp3/A.pdf](https://web.stanford.edu/~jurafsky/slp3/A.pdf)
 [Towards Data Science: 3 Things You Need To Know Before You Train-Test Split](https://towardsdatascience.com/3-things-you-need-to-know-before-you-train-test-split-869dfabb7e50)
 [train-final-machine-learning-model](https://machinelearningmastery.com/train-final-machine-learning-model/)
@@ -1,3 +1,6 @@
+---
+tags:
+---
 - Limits output values
 - Squashing function
 
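The "squashing function" note above refers to activations that limit output values to a fixed range. A quick NumPy illustration (not part of the vault diff):

```python
import numpy as np

def sigmoid(x):
    """Squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Squashes any real input into (-1, 1)."""
    return np.tanh(x)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(x))   # outputs limited to (0, 1)
print(tanh(x))      # outputs limited to (-1, 1)
```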
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Single-Layer Feedforward
 - *Acyclic*
 - Count output layer, no computation at input
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 ## Before 2010s
 - Data hungry
 - Need lots of training data
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 
 ## Design Parameters
 - Size of input image
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # LeNet
 - 1990's
 ![lenet-1989](../../../img/lenet-1989.png)
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al Network
 
 [Convolutional](../Convolutional%20Layer.md) and [up-convolutional layers](../UpConv.md) with [ReLu](../../Activation%20Functions.md#ReLu) but no others (pooling)
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Optical Flow
 
 - 2-Channel optical flow
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - [Skip Connections](Skip%20Connections.md) across individual layers
 - Conditionally
 - Soft gates
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Residual networks
 - 152 layers
 - Skips every two layers
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Output of [conv](../Convolutional%20Layer.md), c, layers are added to inputs of [UpConv](../UpConv.md), d, layers
 - Element-wise, not channel appending
 - Propagate high frequency information to later layers
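The skip-connection notes above (element-wise addition of earlier outputs to later inputs, not channel appending, as in ResNet's skip every two layers) reduce to one line of NumPy. Illustrative sketch with assumed shapes, not part of the vault diff.

```python
import numpy as np

def residual_block(x, transform):
    """A skip connection: the block's output is added element-wise to its input.
    Shapes must match, unlike channel-appending (concatenation)."""
    return transform(x) + x    # y = F(x) + x

# Toy "layer pair": any function preserving the (channels, height, width) shape
f = lambda x: np.maximum(0, x * 0.5 - 0.1)

feature_map = np.random.randn(64, 32, 32)   # assumed C x H x W feature map
out = residual_block(feature_map, f)
print(out.shape)   # (64, 32, 32): same shape, high-frequency detail carried forward
```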
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 - Auto-encoders
 - Get same image back
 - Up-sample blurry small image classically
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Cycle Consistent [GAN](GAN.md)
 
 - G
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Deep [Convolutional](../../../../Signal%20Proc/Convolution.md) [GAN](GAN.md)
 ![dc-gan](../../../../img/dc-gan.png)
 
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 # Fully [Convolution](../../../../Signal%20Proc/Convolution.md)al
 - Remove [Max Pooling](../Max%20Pooling.md)
 - Use strided [UpConv](../UpConv.md)
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 - Feed output from synthesis into up-res network
 - Generate standard low-res image
 - Feed into [cGAN](cGAN.md)
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Conditional [GAN](GAN.md)
 
 - Hard to control with [AM](../Interpretation.md#Activation%20Maximisation)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Similar to band-pass pyramid
 - Changes fixed scale window sizes
 - Couple of different scales
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 # Activation Maximisation
 - Synthesise an ideal image for a class
 - Maximise 1-hot output
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Maximum within window and writes result to output
 - Downsamples image
 - More non-linearity
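The max-pooling bullets above (take the maximum within each window, downsampling the image) as a minimal NumPy sketch. Illustrative only; assumes the input divides evenly into windows.

```python
import numpy as np

def max_pool(img, k=2):
    """k x k max pooling: keep the maximum of each window, downsampling by k."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(img))   # 2x2 output: [[ 5.  7.] [13. 15.]]
```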
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - To keep sensible layer by layer
 - Apply kernel to same location of all channels
 - Pixels in window divided by sum of pixel within volume across channels
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Fractionally strided convolution
 - Transposed [Convolution](../../../Signal%20Proc/Convolution.md)
 - Like a deep interpolation
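One way to read "fractionally strided convolution" above: insert zeros between samples, then convolve, which upsamples like a learned interpolation. A 1-D NumPy sketch with a made-up kernel; illustrative, not part of the vault diff.

```python
import numpy as np

def upconv_1d(x, kernel, stride=2):
    """Transposed (fractionally strided) convolution in 1-D:
    zero-stuff the input by `stride`, then run an ordinary convolution."""
    stuffed = np.zeros(len(x) * stride)
    stuffed[::stride] = x                   # insert zeros between samples
    return np.convolve(stuffed, kernel, mode="same")

x = np.array([1.0, 2.0, 3.0])
kernel = np.array([0.5, 1.0, 0.5])          # behaves like linear interpolation
print(upconv_1d(x, kernel))                 # upsampled, smoothed signal
```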
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 # Augmentation
 - Mimic larger datasets
 - Help with over-fitting
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 # MNIST
 - 70,000 hand-drawn characters from US mail
 - 28x28 images
@@ -1,2 +1,7 @@
+---
+tags:
+- ai
+- media
+---
 # Gabor
 ![gabor](../../../img/gabor.png)
@@ -1 +1,5 @@
+---
+tags:
+- ai
+---
 ![cnn-cv-layer-arch](../../../img/cnn-cv-layer-arch.png)
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 - Shallow would be BOVW
 - Use metric space over feature space
 - Get ranked list
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 ![deep-digit-classification](../../img/deep-digit-classification.png)
 
 OCR [Classification](../Classification/Classification.md)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Stochastic
 - Recurrent structure
 - Binary operation (+/- 1)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Only single output neuron fires
 
 1. Set of homogeneous neurons with some randomly distributed synaptic weights
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Assigning credit/blame for outcomes to each internal decision
 - Loading Problem
 - Loading a training set into the free parameters
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 *Time-dependent, highly local, strongly interactive*
 
 - Oldest learning algorithm
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 *Learning is a process by which the free parameters of a neural network are adapted through a process of stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which the parameter changes take place*
 
 1. The neural network is **stimulated** by an environment
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Pattern Association
 - Associative memory
 - Learns by association
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 Error signal graph
 
 ![mlp-arch-graph](../../../img/mlp-arch-graph.png)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 ![hidden-neuron-decision](../../../img/hidden-neuron-decision.png)
 ![mlp-xor](../../../img/mlp-xor.png)
 ![mlp-xor-2](../../../img/mlp-xor-2.png)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - [Feedforward](../Architectures.md)
 - Single hidden layer can learn any function
 - Universal approximation theorem
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Massively parallel, distributed processor
 - Natural propensity for storing experiential knowledge
 
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Linearity
 - Neurons can be linear or non-linear
 - Network of non-linear neurons is non-linear
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 - Sequence of strokes for sketching
 - LSTM backbone
 
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 - Overfitted to image
 - Learn weights necessary to reconstruct from white noise
 - Trained from scratch on single image
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 Long Short Term Memory
 
 - More general form of [RNN](RNN.md)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Similar to SimCLR
 - Rich set of negatives
 - Sampled from previous epochs in queue
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 Recurrent Neural Network
 
 - Hard to train on long sequences
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 # [Un-Supervised](../../Learning.md#Un-Supervised)
 
 - Auto-encoder FCN
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 1. Data augmentation
 - Crop patches from images in batch
 - Add colour jitter
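The augmentation step above (crop patches from images in the batch, add colour jitter) as a rough NumPy sketch. Parameter choices are illustrative, not from the vault.

```python
import numpy as np

def augment(img, crop=24, rng=np.random.default_rng()):
    """Random crop plus simple colour jitter, a stand-in for SimCLR-style augmentation."""
    h, w, _ = img.shape
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    patch = img[top:top + crop, left:left + crop].astype(float)
    # Colour jitter: random per-channel brightness scaling
    patch *= rng.uniform(0.8, 1.2, size=3)
    return np.clip(patch, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
view1, view2 = augment(img), augment(img)   # two different views of the same image
print(view1.shape, view2.shape)
```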
@@ -1,3 +1,8 @@
+---
+tags:
+- ai
+- media
+---
 Visual Question Answering
 
 - Combine visual with text sequence
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - To handle overlapping classes
 - Linearity condition remains
 - Linear boundary
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 Error-Correcting Perceptron Learning
 
 - Uses a McCulloch-Pitt neuron
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 ![slp-arch](../../../img/slp-arch.png)
 $$v(n)=\sum_{i=0}^{m}w_i(n)x_i(n)$$
 $$=w^T(n)x(n)$$
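The SLP formula above, $v(n)=\sum_{i=0}^{m}w_i(n)x_i(n)=w^T(n)x(n)$, is a dot product with the bias folded in at index 0. A tiny NumPy check with made-up numbers (not part of the vault diff):

```python
import numpy as np

def induced_local_field(w, x):
    """v = w^T x, with x[0] = 1 so that w[0] acts as the bias."""
    return w @ x

w = np.array([0.5, -1.0, 2.0])      # [bias, w1, w2]
x = np.array([1.0, 0.3, 0.7])       # [1, x1, x2]
v = induced_local_field(w, x)
y = 1 if v >= 0 else 0              # hard-limiter output of the perceptron
print(v, y)
```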
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Modes
 ## Sequential
 - Apply changes after each train pattern
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Meant to mimic cognitive attention
 - Picks out relevant bits of information
 - Use gradient descent
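The attention bullets above ("picks out relevant bits of information", trained by gradient descent) usually mean scaled dot-product attention. A minimal NumPy sketch with assumed shapes; illustrative, not part of the vault diff.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value by how well its key matches the query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # relevance of every key to every query
    weights = softmax(scores, axis=-1)   # normalised attention weights
    return weights @ V                   # weighted mix of the values

Q = np.random.randn(4, 8)   # 4 query tokens, dimension 8
K = np.random.randn(6, 8)   # 6 key tokens
V = np.random.randn(6, 8)   # matching values
print(attention(Q, K, V).shape)   # (4, 8)
```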
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Properties
 ## Pre-training Datasets
 
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - [Self-attention](Attention.md)
 - Weighting significance of parts of the input
 - Including recursive output
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 - Randomly
 - Gaussian noise with mean = 0
 - Small network
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 ***Deterministic
 Pattern Recogniser***
 Allows timescale variations in sequences for same class
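Dynamic Time Warping, described above as allowing timescale variation between sequences of the same class, is a short dynamic-programming recurrence. An illustrative NumPy sketch, not part of the vault diff.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW: cheapest alignment cost between sequences a and b, allowing stretching in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

slow = [0, 0, 1, 2, 2, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))   # small despite the different lengths
```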
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 [Hidden Markov Models - JWMI Github](https://jwmi.github.io/ASM/5-HMMs.pdf)
 [Rabiner - A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition](https://www.cs.cmu.edu/~cga/behavior/rabiner1.pdf)
 
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 Structured sequence of observations
 
 - [Dynamic Time Warping](Dynamic%20Time%20Warping.md)
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Problem Types
 - Toy/game problems
 - Illustrative
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Three Key Components
 
 1. Representation
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Best First
 
 - Uniform cost uses an evaluation function
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # [Uninformed](Uninformed.md)
 - Breadth First
 - Uniform Cost
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 # Breadth First
 - Uniform cost with cost function proportional to depth
 - All of each layer
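The note above frames breadth-first search as uniform-cost search with a cost function proportional to depth. A minimal sketch with heapq and a made-up graph; illustrative, not part of the vault diff.

```python
import heapq

def uniform_cost_search(graph, start, goal, cost=lambda depth: depth):
    """Uniform-cost search; with cost proportional to depth it expands nodes breadth-first."""
    frontier = [(0, 0, start, [start])]          # (priority, depth, node, path)
    visited = set()
    while frontier:
        _, depth, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (cost(depth + 1), depth + 1, nxt, path + [nxt]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(uniform_cost_search(graph, "A", "D"))   # shallowest path: ['A', 'B', 'D']
```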
@@ -1,3 +1,7 @@
+---
+tags:
+- low-level
+---
 - How data structures & computational routines are accessed in machine code ([Code Types](Code%20Types.md))
 - Machine code therefore hardware-dependent
 - API defines this structure in source code
@@ -1,3 +1,7 @@
+---
+tags:
+- low-level
+---
 - The order in which atomic (scalar) parameters, or individual parts of a complex parameter, are allocated
 - How parameters are passed
 - Pushed on the stack, placed in registers, or a mix of both
@@ -1,3 +1,7 @@
+---
+tags:
+- low-level
+---
 Instruction Set Architecture
 
 ___Not Microarchitecture___
@@ -1,3 +1,7 @@
+---
+tags:
+- low-level
+---
 [Uni of Virginia - x86 Assembly Guide](https://www.cs.virginia.edu/~evans/cs216/guides/x86.html)
 
 ## x86 32-bit
@@ -1,3 +1,7 @@
+---
+tags:
+- web
+---
 [https://www.learnui.design/blog/spice-up-designs.html](https://www.learnui.design/blog/spice-up-designs.html)
 
 # Modules
@@ -4,7 +4,7 @@ From <[https://www.activestate.com/resources/quick-reads/how-to-update-all-pytho
 
 [poetry cheat sheet](https://gist.github.com/CarlosDomingues/b88df15749af23a463148bd2c2b9b3fb)
 
-## Twisted
+## Twisted #net
 Network engine
 
 numpy scipy jupyterlab matplotlib pandas scikit-learn
@@ -15,7 +15,7 @@ Compiler
 ## Plotly
 Publication-quality Graphs
 
-[NLTK](https://www.nltk.org)
+[NLTK](https://www.nltk.org) #ai
 [NLP](../../Speech/NLP/NLP.md)
 
 If you are not getting good results, you should first check that you are using the right [classification](../../AI/Classification/Classification.md) algorithm (is your data well fit to be classified by a linear SVM?) and that you have enough training data. Practically, that means you might consider visualizing your dataset through PCA or t-SNE to see how "clustered" your classes are, and checking how your classification metrics evolve with the amount of data your classifier is given.
@@ -1,4 +1,8 @@
-## Web
+---
+tags:
+- low-level
+---
+## #web
 
 #### Backend
 - Actix-web
@@ -1,2 +1,6 @@
-#lit
+---
+tags:
+- lit
+- quantum
+---
 [5 books](https://fivebooks.com/best-books/quantum-computing-chris-bernhardt/)
@@ -1,4 +1,7 @@
-#lit
+---
+tags:
+- lit
+---
 [Wigle - wifi enumerating](http://wigle.net)
 
 [0xAX/linux-insides book](https://github.com/0xAX/linux-insides)
@@ -1,4 +1,7 @@
-#net
+---
+tags:
+- net
+---
 ![](../../img/iot-network-types.png)
 
 # Gateway
Light.md
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 $$E=hf$$
 ## Photoelectric Effect
 
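A quick numeric check of $E=hf$ from the hunk above, using illustrative values (not from the vault):

```python
h = 6.626e-34             # Planck constant, J s
f = 6.0e14                # frequency of ~500 nm visible light, Hz
E = h * f                 # photon energy, J
print(E, E / 1.602e-19)   # ~3.98e-19 J, ~2.48 eV
```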
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 $$E_{ne}=\frac{\hbar^2}{2m_e^*} \frac{\pi^2}{L^2} n^2$$
 ![quantum-confinement](../img/quantum-confinement.png)
 ![confinement-band-gaps](../img/confinement-band-gaps.png)
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 [Wave Function](Wave%20Function.md)
 
 ## Quantum Numbers
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 $$-\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi$$
 - Time Independent
 - $\psi$ is the [Wave Function](Wave%20Function.md)
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 ![model-table](../img/model-table.png)
 - 4 fundamental forces
 - Bosons
@@ -1,3 +1,7 @@
+---
+tags:
+- quantum
+---
 $$\psi(r,\theta,\phi)=R(r)\cdot Y_{ml}(\theta, \phi)$$
 Wave functions are products of
 Radial Function
@@ -1,3 +1,7 @@
+---
+tags:
+- linguistics
+---
 - Complete or partial closure of vocal tract
 - Voiced/Unvoiced
 
@@ -1,3 +1,7 @@
+---
+tags:
+- linguistics
+---
 - Sentence
 - **Syntax**
 - Words
@@ -1,3 +1,7 @@
+---
+tags:
+- linguistics
+---
 - Phonetics
 - Sound of language
 - Acoustic result of speech articulation
@@ -1,3 +1,7 @@
+---
+tags:
+- linguistics
+---
 # Phoneme
 - Smallest unit of speech
 - Distinguish words
@@ -1,3 +1,7 @@
+---
+tags:
+- linguistics
+---
 ALL VOICED
 [Wikipedia - Vowel Sounds](https://en.wikipedia.org/wiki/Vowel#Audio_samples)
 
@@ -1,4 +1,7 @@
-#lit
+---
+tags:
+- lit
+---
 Daniel Jurafsky
 James H. Martin
 
@@ -1,4 +1,7 @@
+---
+tags:
+- ai
+---
 # Text Normalisation
 - Tokenisation
 - Labelling parts of sentence
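Tokenisation, the first text-normalisation step listed above, can be roughed out with a single regular expression. Illustrative only; real pipelines use proper tokenisers.

```python
import re

def tokenise(text):
    """Very rough word tokeniser: lowercase, then split out word characters and punctuation."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenise("Text normalisation: tokenise, then label parts of the sentence."))
# ['text', 'normalisation', ':', 'tokenise', ',', 'then', 'label', 'parts', 'of', 'the', 'sentence', '.']
```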
@@ -1,3 +1,7 @@
+---
+tags:
+- ai
+---
 1. Automatic Speech Recognition
 - Spoken words to machine-readable form
 2. Natural language understanding
@@ -1,3 +1,7 @@
+---
+tags:
+- media
+---
 - Speech telecommunications & Encoding
 - Preserving perceptibility and quality over the wire
 - Minimising bandwidth