# Deep Convolutional GAN

![[dc-gan.png]]

[[Deep Learning#Loss Function]]

$$D(S,L)=-\sum_i L_i \log(S_i)$$

- $S$
    - $(0.1, 0.9)^T$
    - Score generated by the discriminator
- $L$
    - $(1, 0)^T$
    - One-hot label vector
    - Step 1
        - Depends on choice of real/fake
    - Step 2
        - One-hot fake vector
- $\sum_i$
    - Sum over all images in the mini-batch
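
A minimal NumPy sketch of this loss on the example vectors above (treating the first slot as the class picked out by the one-hot label is purely an illustrative assumption):

```python
import numpy as np

def cross_entropy(scores: np.ndarray, labels: np.ndarray) -> float:
    """D(S, L) = -sum_i L_i * log(S_i)."""
    return float(-np.sum(labels * np.log(scores)))

S = np.array([0.1, 0.9])    # discriminator scores for one image
L = np.array([1.0, 0.0])    # one-hot label vector
print(cross_entropy(S, L))  # -log(0.1) ≈ 2.303: a low score on the labelled class is penalised heavily
```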
| Noise | Image |
| ----- | ----- |
| $z$   | $x$   |
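
A sketch of how a DC-GAN-style generator maps the noise vector $z$ to an image $x$: a stack of strided transposed convolutions with batch norm, upsampling $1\times1$ noise to a $64\times64$ image (PyTorch; the channel sizes and output resolution are illustrative, not taken from this note):

```python
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),    # 4x4 -> 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 8x8 -> 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 16x16 -> 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),       # 32x32 -> 64x64
            nn.Tanh(),                                            # pixel values in [-1, 1]
        )

    def forward(self, z):  # z: (batch, z_dim, 1, 1)
        return self.net(z)
```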
- Generator wants
    - $D(G(z))=1$
    - Wants to fool the discriminator
- Discriminator wants
    - $D(G(z))=0$
    - Wants to correctly catch the generator
- Real data wants
    - $D(x)=1$
$$J^{(D)}=-\frac{1}{2}\mathbb{E}_{x\sim p_{data}}\log D(x)-\frac{1}{2}\mathbb{E}_z\log(1-D(G(z)))$$
$$J^{(G)}=-J^{(D)}$$

- First term for real images
- Second term for fake images
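
A minimal PyTorch sketch of these two objectives, assuming `D` outputs a probability in $(0,1)$ (sigmoid already applied) and that `D`, `G`, `x_real`, `z` are placeholder modules/tensors:

```python
import torch

def discriminator_loss(D, G, x_real, z):
    """J^(D) = -1/2 E_x[log D(x)] - 1/2 E_z[log(1 - D(G(z)))]."""
    d_real = D(x_real)         # D(x): should approach 1 for real data
    d_fake = D(G(z).detach())  # D(G(z)): should approach 0; detach so G isn't updated here
    return -0.5 * torch.log(d_real).mean() - 0.5 * torch.log(1.0 - d_fake).mean()

def generator_loss(D, G, z):
    """Minimax form J^(G) = -J^(D); only the fake term depends on G."""
    d_fake = D(G(z))           # generator wants D(G(z)) = 1
    return 0.5 * torch.log(1.0 - d_fake).mean()
```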

## Mode Collapse

- Generator finds an easy solution
- Learns one image, for most noise inputs, that will fool the discriminator
- Mitigate with a minibatch discriminator (sketched below)
    - Match the $G(z)$ distribution to that of $x$
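
One simple variant of this idea is a minibatch standard-deviation feature (a lighter cousin of full minibatch discrimination, used here only as an illustrative sketch): append a batch-wide spread statistic to the discriminator's features, so a collapsed batch of near-identical $G(z)$ samples is easy to spot:

```python
import torch

def minibatch_stddev_feature(features: torch.Tensor) -> torch.Tensor:
    """Append the mean per-feature std across the batch as one extra feature.

    `features` is assumed to be (batch, feature_dim). A collapsed generator
    produces near-identical samples, so this statistic is close to zero and
    the discriminator can penalise the whole batch.
    """
    std = features.std(dim=0)                       # spread of each feature over the batch
    stat = std.mean().expand(features.shape[0], 1)  # one scalar, repeated per sample
    return torch.cat([features, stat], dim=1)       # (batch, feature_dim + 1)
```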

## What is Learnt?

- Encoding texture/patch detail from the training set
    - Similar to an FCN
    - Reproducing texture at a high level
    - Cues triggered by the code vector
        - Input random noise
- Iteratively improves visual plausibility
    - Different to an FCN
- Discriminator is a task-specific classifier
- Difficult to train over diverse footage
    - Mixing concepts doesn't work
    - Single category/class