vault backup: 2023-06-04 22:30:39

Affected files:
.obsidian/app.json
.obsidian/workspace-mobile.json
.obsidian/workspace.json
STEM/AI/Neural Networks/CNN/CNN.md
STEM/AI/Neural Networks/CNN/GAN/cGAN.md
STEM/AI/Neural Networks/MLP/Back-Propagation.md
andy 2023-06-04 22:30:39 +01:00
parent 1c441487f9
commit 5f167f25a4
3 changed files with 10 additions and 13 deletions

STEM/AI/Neural Networks/CNN/CNN.md

@@ -42,13 +42,13 @@
 ![fine-tuning-freezing](../../../img/fine-tuning-freezing.png)
 # Training
-- Validation & training [loss](../Deep%20Learning.md#Loss Function)
+- Validation & training [loss](../Deep%20Learning.md#Loss%20Function)
 	- Early
 		- Under-fitting
 		- Training not representative
 	- Later
 		- Overfitting
-- V.[loss](../Deep%20Learning.md#Loss Function) can help adjust learning rate
+- V.[loss](../Deep%20Learning.md#Loss%20Function) can help adjust learning rate
 	- Or indicate when to stop training
 ![under-over-fitting](../../../img/under-over-fitting.png)
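An aside on the hunk above: the validation-loss bullets describe early stopping. A minimal sketch of that idea, assuming hypothetical `train_epoch` and `validation_loss` callbacks (neither name comes from the note):

```python
# Early stopping: halt when validation loss has not improved for
# `patience` consecutive epochs. `train_epoch` and `validation_loss`
# are placeholder hooks, not names from the note.
def train_with_early_stopping(train_epoch, validation_loss,
                              max_epochs=100, patience=5):
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        train_epoch()
        v_loss = validation_loss()
        if v_loss < best:
            best, stale = v_loss, 0  # still improving: keep training
        else:
            stale += 1               # validation loss rising: possible over-fitting
            if stale >= patience:
                break                # stop before over-fitting worsens
    return best
```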

STEM/AI/Neural Networks/CNN/GAN/cGAN.md

@@ -1,6 +1,6 @@
 Conditional [GAN](GAN.md)
-- Hard to control with [AM](../Interpretation.md#Activation Maximisation)
+- Hard to control with [AM](../Interpretation.md#Activation%20Maximisation)
 - Unconditional [GAN](GAN.md)
 - Condition synthesis on a class label
 - Concatenate unconditional code with conditioning vector
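An aside on the last bullet: conditioning by concatenation simply joins the latent code with a label vector before it enters the generator. A minimal NumPy sketch, with illustrative shapes (100-dim code, 10 classes, batch of 64):

```python
import numpy as np

# Concatenate an unconditional latent code z with a one-hot class label y
# to form the conditioned generator input of a cGAN.
z = np.random.randn(64, 100)           # batch of unconditional latent codes
labels = np.random.randint(0, 10, 64)  # class indices to condition on
y = np.eye(10)[labels]                 # one-hot conditioning vectors
generator_input = np.concatenate([z, y], axis=1)  # shape (64, 110)
```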

STEM/AI/Neural Networks/MLP/Back-Propagation.md

@@ -22,16 +22,14 @@ $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
 $$
 #### From 4
-$$\frac{\partial\mathfrak E(n)}{\partial e_j(n)}=
-e_j(n)$$
+$$\frac{\partial\mathfrak E(n)}{\partial e_j(n)}=e_j(n)$$
 #### From 1
 $$\frac{\partial e_j(n)}{\partial y_j(n)}=-1$$
 #### From 3 (note prime)
-$$\frac{\partial y_j(n)}{\partial v_j(n)}=
-\varphi_j'(v_j(n))$$
+$$\frac{\partial y_j(n)}{\partial v_j(n)}=\varphi_j'(v_j(n))$$
 #### From 2
-$$\frac{\partial v_j(n)}{\partial w_{ji}(n)}=
-y_i(n)$$
+$$\frac{\partial v_j(n)}{\partial w_{ji}(n)}=y_i(n)$$
 ## Composite
 $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
@@ -40,10 +38,9 @@ $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
 y_i(n)
 $$
-$$\Delta w_{ji}(n)=
--\eta\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}$$
-$$\Delta w_{ji}(n)=
-\eta\delta_j(n)y_i(n)$$
+$$\Delta w_{ji}(n)=-\eta\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}$$
+$$\Delta w_{ji}(n)=\eta\delta_j(n)y_i(n)$$
 ## Gradients
 #### Output Local
 $$\delta_j(n)=-\frac{\partial\mathfrak E (n)}{\partial v_j(n)}$$
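Composing the four factors above gives $\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=-e_j(n)\varphi_j'(v_j(n))y_i(n)$, which the delta-rule lines rearrange into $\Delta w_{ji}(n)=\eta\delta_j(n)y_i(n)$ with $\delta_j(n)=e_j(n)\varphi_j'(v_j(n))$. A minimal numeric sketch of one weight update, assuming a logistic $\varphi$ (the note does not fix the activation; function names are illustrative):

```python
import numpy as np

def phi(v):
    """Logistic activation phi(v)."""
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):
    """Derivative phi'(v) = phi(v) * (1 - phi(v))."""
    s = phi(v)
    return s * (1.0 - s)

def delta_w(d_j, v_j, y_i, eta=0.1):
    """Delta-rule update for weight w_ji of output neuron j fed by y_i."""
    e_j = d_j - phi(v_j)            # error e_j(n) = d_j(n) - y_j(n)
    delta_j = e_j * phi_prime(v_j)  # local gradient delta_j(n)
    return eta * delta_j * y_i      # Delta w_ji(n) = eta * delta_j(n) * y_i(n)
```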