# Activation Maximisation

- Synthesise an ideal image for a class
- Maximise 1-hot output
- Maximise [SoftMax](../Activation%20Functions.md#SoftMax)

![am](../../../img/am.png)

- **Use trained network**
- Don't update weights
- [Feedforward](../Architectures.md) noise
- [Back-propagate](../MLP/Back-Propagation.md) [loss](../Deep%20Learning.md#Loss%20Function)
- Don't update weights
- Update image (see the sketch after the figure below)

![am-process](../../../img/am-process.png)
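A minimal sketch of this loop in PyTorch, assuming a pretrained ImageNet classifier from a recent torchvision; the class index and hyperparameters are illustrative:

```python
import torch
import torchvision.models as models

# Trained network; weights are frozen and never updated
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from noise; the image is the only thing being optimised
x = torch.randn(1, 3, 224, 224, requires_grad=True)
optimiser = torch.optim.Adam([x], lr=0.05)

target = 130  # illustrative class index
for _ in range(200):
    optimiser.zero_grad()
    logits = model(x)                                    # feedforward
    loss = -torch.log_softmax(logits, dim=1)[0, target]  # maximise the class SoftMax
    loss.backward()                                      # back-propagate to the image
    optimiser.step()                                     # update the image, not the weights
```

Maximising the log of the SoftMax rather than the raw probability is only a numerical convenience; the maximiser is the same.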
## Regulariser

- Fit to natural image statistics
- Prone to high-frequency noise
- Minimise
    - Total variation
- $x^*$ is the best solution to minimise [loss](../Deep%20Learning.md#Loss%20Function)

$$x^*=\text{argmin}_{x\in \mathbb R^{H\times W\times C}}\ell(\phi(x),\phi_0)$$

- This alone won't work: the optimisation finds high-frequency noise rather than a natural image

$$x^*=\text{argmin}_{x\in \mathbb R^{H\times W\times C}}\ell(\phi(x),\phi_0)+\lambda\mathcal R(x)$$

- Need a regulariser $\mathcal R$, as above

![am-regulariser](../../../img/am-regulariser.png)
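A minimal sketch of the combined objective (the helper name `am_loss` and its signature are assumptions; `regulariser` is any image prior, e.g. the total variation defined below, and `lam` plays the role of $\lambda$):

```python
import torch

def am_loss(model, x, target, regulariser, lam=1e-2):
    """l(phi(x), phi_0) + lambda * R(x): class term plus image prior."""
    logits = model(x)
    class_term = -torch.log_softmax(logits, dim=1)[0, target]
    return class_term + lam * regulariser(x)
```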
$$\mathcal R_{V^\beta}(f)=\int_\Omega\left(\left(\frac{\partial f}{\partial u}(u,v)\right)^2+\left(\frac{\partial f}{\partial v}(u,v)\right)^2\right)^{\frac \beta 2}\,du\,dv$$

$$\mathcal R_{V^\beta}(x)=\sum_{i,j}\left(\left(x_{i,j+1}-x_{ij}\right)^2+\left(x_{i+1,j}-x_{ij}\right)^2\right)^{\frac \beta 2}$$

- Beta ($\beta$)
    - Degree of smoothing
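A direct translation of the discrete $\mathcal R_{V^\beta}$ above into PyTorch (a sketch; the function name is illustrative):

```python
import torch

def tv_beta(x: torch.Tensor, beta: float = 2.0) -> torch.Tensor:
    """R_{V^beta}(x) = sum_{i,j} ((x[i,j+1]-x[ij])^2 + (x[i+1,j]-x[ij])^2)^(beta/2)."""
    dh = x[..., 1:, :-1] - x[..., :-1, :-1]  # vertical neighbour differences x_{i+1,j} - x_{ij}
    dw = x[..., :-1, 1:] - x[..., :-1, :-1]  # horizontal neighbour differences x_{i,j+1} - x_{ij}
    return ((dh ** 2 + dw ** 2) ** (beta / 2)).sum()
```

Plugged into the objective above, e.g. `am_loss(model, x, target, tv_beta, lam=1e-2)`, a larger $\beta$ penalises sharp transitions more heavily and so smooths more.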