---
tags:
- ai
- media
- art
---
Deep [Convolutional](../../../../Signal%20Proc/Convolution.md) [GAN](GAN.md)

![dc-gan](../../../../img/dc-gan.png)
- Generator (PyTorch sketch after this list)
    - [FCN](../FCN/FCN.md)
    - Decoder
        - Generates an image from a code
    - Low-dimensional code
        - ~100-D
    - Reshape to [tensor](../../../../Maths/Tensor.md)
    - [UpConv](../UpConv.md) to image
    - Train using Gaussian random noise for the code
- Discriminator
    - Contractive
    - Cross-entropy [loss](../../Deep%20Learning.md#Loss%20Function)
    - [Conv](../Convolutional%20Layer.md) and leaky [ReLU](../../Activation%20Functions.md#ReLu) layers only
    - Normalised output via [sigmoid](../../Activation%20Functions.md#Sigmoid)
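
A minimal PyTorch sketch of the two networks above, assuming a 64×64 RGB output; the channel counts (`ch`) and exact layer sizes are illustrative assumptions, while the 100-D Gaussian code, the reshape-to-tensor step, the upconv stack, the conv + leaky-ReLU-only discriminator, and the sigmoid output follow the note.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Decoder: ~100-D code -> reshape to tensor -> upconv to image."""
    def __init__(self, code_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the code to a (ch*8, 4, 4) tensor
            nn.ConvTranspose2d(code_dim, ch * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            # UpConv stack: 4x4 -> 8x8 -> 16x16 -> 32x32 -> 64x64
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # image in [-1, 1]
        )

    def forward(self, z):  # z: (N, code_dim)
        # Reshape the flat code to a (code_dim, 1, 1) tensor first
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """Contractive: conv + leaky ReLU layers only, sigmoid output."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 4, ch * 8, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),  # normalised score in (0, 1)
        )

    def forward(self, x):  # x: (N, 3, 64, 64)
        return self.net(x).view(-1)
```

Sampling then just draws the code from a Gaussian, e.g. `G(torch.randn(16, 100))`.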
## [Loss](../../Deep%20Learning.md#Loss%20Function)
$$D(S,L)=-\sum_i L_i\log(S_i)$$
- $S$
    - e.g. $(0.1, 0.9)^T$
    - Score vector generated by the discriminator
- $L$
    - e.g. $(1, 0)^T$
    - One-hot label vector (worked example after this list)
    - Step 1
        - Depends on choice of real/fake
    - Step 2
        - One-hot fake vector
- $\sum_i$
    - Sum over all images in the mini-batch
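
As a quick check, plugging the example vectors into the loss (reading the sum over the components of $S$ and $L$):

$$D(S,L)=-\left(1\cdot\log 0.1+0\cdot\log 0.9\right)=-\log 0.1\approx 2.30$$

The loss is large because the discriminator puts a score of only $0.1$ on the labelled class.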
| Noise | Image |
| ----- | ----- |
| $z$   | $x$   |
- Generator wants
    - $D(G(z))=1$
        - To fool the discriminator
- Discriminator wants
    - $D(G(z))=0$
        - To correctly catch the generator
- For real data, the discriminator wants
    - $D(x)=1$
$$J^{(D)}=-\frac 1 2 \mathbb E_{x\sim p_{data}}\log D(x)-\frac 1 2 \mathbb E_z\log (1-D(G(z)))$$
$$J^{(G)}=-J^{(D)}$$

- First term for real images
- Second term for fake images (training-step sketch below)
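
A hedged sketch of one training step under these losses, written with binary cross-entropy (which matches $J^{(D)}$ up to the $\frac 1 2$ factors). One caveat: rather than literally minimising $-J^{(D)}$, which saturates early in training, the generator update below uses the common flipped-label form that maximises $\log D(G(z))$; the optimiser arguments and `code_dim` default are assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, x, opt_G, opt_D, code_dim=100):
    """One GAN step; G, D as sketched earlier, x a real minibatch."""
    n = x.size(0)
    z = torch.randn(n, code_dim)            # Gaussian code, as in the note
    real, fake = torch.ones(n), torch.zeros(n)

    # Step 1 (J^D): discriminator wants D(x)=1 and D(G(z))=0
    opt_D.zero_grad()
    loss_D = 0.5 * (F.binary_cross_entropy(D(x), real) +
                    F.binary_cross_entropy(D(G(z).detach()), fake))
    loss_D.backward()
    opt_D.step()

    # Step 2: generator wants D(G(z))=1, i.e. labels flipped to fool D
    opt_G.zero_grad()
    loss_G = F.binary_cross_entropy(D(G(z)), real)
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```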
# Mode Collapse
- Generator finds an easy solution
    - Learns one image, for most noise inputs, that fools the discriminator
- Mitigate with a minibatch discriminator (one variant sketched below)
    - Match the $G(z)$ distribution to the $x$ distribution
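
One common concrete variant of the minibatch idea is a minibatch standard-deviation feature; the note only names "minibatch discriminator", so treat this particular feature as an assumption:

```python
import torch

def minibatch_stddev(features):
    """Append the per-minibatch feature spread as an extra channel.

    features: (N, C, H, W) activations inside the discriminator, N > 1.
    """
    n, _, h, w = features.shape
    std = features.std(dim=0, keepdim=True)         # spread across the batch
    mean_std = std.mean().expand(n, 1, h, w)        # one scalar, broadcast
    return torch.cat([features, mean_std], dim=1)   # (N, C+1, H, W)
```

If the generator collapses to one image, every $G(z)$ in the batch is nearly identical, the batch stddev shrinks towards zero, and the discriminator gets an easy signal to reject the whole batch.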
# What is Learnt?
- Encoding texture/patch detail from the training set
    - Similar to [FCN](../FCN/FCN.md)
- Reproducing texture at a high level
    - Cues triggered by the code vector
- Input random noise
    - Iteratively improves visual feasibility
- Different to [FCN](../FCN/FCN.md)
    - Discriminator is a task-specific classifier #classification
- Difficult to train over diverse footage
    - Mixing concepts doesn't work
    - Works best within a single category/class