Deep Convolutional GAN, built from two convolutional networks trained against each other (sketched below):

- Generator
- Discriminator
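A minimal PyTorch sketch of the pair, assuming 64×64 RGB images and a 100-dimensional noise vector; the layer sizes and names here are illustrative choices, not taken from this note:

```python
# Illustrative DC-GAN pair (assumed sizes: 100-d noise z, 64x64 RGB images)
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples noise z to an image with strided transposed convolutions."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):            # z: (B, z_dim, 1, 1)
        return self.net(z)           # image: (B, 3, 64, 64)

class Discriminator(nn.Module):
    """Downsamples an image to a single real/fake score D(x) in (0, 1)."""
    def __init__(self, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, feat, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),  # 32x32
            nn.Conv2d(feat, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.LeakyReLU(0.2, True),                 # 16x16
            nn.Conv2d(feat * 2, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.LeakyReLU(0.2, True),                 # 8x8
            nn.Conv2d(feat * 4, feat * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 8), nn.LeakyReLU(0.2, True),                 # 4x4
            nn.Conv2d(feat * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid(),         # 1x1
        )

    def forward(self, x):              # x: (B, 3, 64, 64)
        return self.net(x).view(-1)    # scores: (B,)
```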
# Loss

$$D(S,L)=-\sum_i L_i\log(S_i)$$

- $S$, e.g. $(0.1, 0.9)^T$
    - Score generated by the discriminator
- $L$, e.g. $(1, 0)^T$
    - One-hot label vector
        - Step 1: depends on the choice of real/fake
        - Step 2: one-hot fake vector
- $\sum_i$
    - Sum over all images in the mini-batch
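A quick numpy check of the loss above using the note's example vectors (the values are illustrative, showing how a confident wrong score is punished):

```python
import numpy as np

# D(S, L) = -sum_i L_i * log(S_i), with the example vectors from above
S = np.array([0.1, 0.9])   # discriminator scores
L = np.array([1.0, 0.0])   # one-hot label vector

loss = -np.sum(L * np.log(S))
print(loss)  # -log(0.1) ~= 2.30: large, because the labelled class only scored 0.1

# In training, the same quantity is then summed/averaged over the mini-batch
```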
| Noise | Image |
| ----- | ----- |
| $z$   | $x$   |
- Generator wants $D(G(z))=1$
    - Wants to fool discriminator
- Discriminator wants $D(G(z))=0$
    - Wants to correctly catch generator
- Real data wants $D(x)=1$
$$J^{(D)}=-\frac{1}{2}\mathbb{E}_{x\sim p_{data}}\log D(x)-\frac{1}{2}\mathbb{E}_z\log(1-D(G(z)))$$
$$J^{(G)}=-J^{(D)}$$

- First term for real images
- Second term for fake images
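The two objectives written as code (my own helper names, a sketch rather than an established API), fed with stand-in discriminator outputs:

```python
import torch

def j_discriminator(d_real, d_fake):
    """J^(D): first term scores real images D(x), second term fakes D(G(z))."""
    return -0.5 * torch.log(d_real).mean() - 0.5 * torch.log(1 - d_fake).mean()

def j_generator(d_real, d_fake):
    """Zero-sum (minimax) game: J^(G) = -J^(D)."""
    return -j_discriminator(d_real, d_fake)

# Stand-in discriminator outputs for a mini-batch of 4, values in (0, 1)
d_real = torch.tensor([0.9, 0.8, 0.95, 0.7])   # D(x): discriminator wants these near 1
d_fake = torch.tensor([0.2, 0.1, 0.3, 0.05])   # D(G(z)): D wants near 0, G wants near 1
print(j_discriminator(d_real, d_fake))  # small here: D is doing well on this batch
print(j_generator(d_real, d_fake))      # correspondingly bad for G
```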
# Mode Collapse

- Generator finds an easy solution
    - Learns one image for most noise vectors that will fool the discriminator
- Mitigate with a minibatch discriminator (see the sketch below)
    - Match the $G(z)$ distribution to that of $x$
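A sketch of the minibatch-discrimination idea (a simplified variant, not the exact published formulation): give the discriminator a per-sample feature measuring how close each sample is to the rest of its mini-batch, so a collapsed generator that maps most $z$ to the same image becomes easy to catch:

```python
import torch

def minibatch_closeness(features, eps=1e-6):
    """Mean L1 distance from each sample to the others in the batch.
    Concatenated onto the discriminator's features, near-zero values flag
    a low-diversity (collapsed) batch of generated images."""
    dists = torch.cdist(features, features, p=1)    # (B, B) pairwise distances
    b = features.size(0)
    closeness = dists.sum(dim=1) / (b - 1 + eps)    # exclude self (distance 0)
    return closeness.unsqueeze(1)                   # (B, 1) extra feature

feats = torch.randn(8, 128)                         # diverse batch: large values
print(minibatch_closeness(feats).squeeze())
print(minibatch_closeness(feats[:1].repeat(8, 1)).squeeze())  # collapsed batch: ~0
```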
# What is Learnt?

- Encoding texture/patch detail from the training set
    - Similar to FCN
- Reproducing texture at a high level
    - Cues triggered by the code vector
- Input random noise
    - Iteratively improves visual feasibility
- Different to FCN
    - Discriminator is a task-specific classifier #classification
- Difficult to train over diverse footage
    - Mixing concepts doesn't work
    - Single category/class