From 5f167f25a4a8f66fcd979c7ba1889433ed160a11 Mon Sep 17 00:00:00 2001
From: andy
Date: Sun, 4 Jun 2023 22:30:39 +0100
Subject: [PATCH] vault backup: 2023-06-04 22:30:39

Affected files:
.obsidian/app.json
.obsidian/workspace-mobile.json
.obsidian/workspace.json
STEM/AI/Neural Networks/CNN/CNN.md
STEM/AI/Neural Networks/CNN/GAN/cGAN.md
STEM/AI/Neural Networks/MLP/Back-Propagation.md
---
 AI/Neural Networks/CNN/CNN.md              |  4 ++--
 AI/Neural Networks/CNN/GAN/cGAN.md         |  2 +-
 AI/Neural Networks/MLP/Back-Propagation.md | 17 +++++++----------
 3 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/AI/Neural Networks/CNN/CNN.md b/AI/Neural Networks/CNN/CNN.md
index 4fe602d..dbefb27 100644
--- a/AI/Neural Networks/CNN/CNN.md
+++ b/AI/Neural Networks/CNN/CNN.md
@@ -42,13 +42,13 @@
 ![fine-tuning-freezing](../../../img/fine-tuning-freezing.png)
 
 # Training
-- Validation & training [loss](../Deep%20Learning.md#Loss Function)
+- Validation & training [loss](../Deep%20Learning.md#Loss%20Function)
 	- Early
 		- Under-fitting
 		- Training not representative
 	- Later
 		- Overfitting
-- V.[loss](../Deep%20Learning.md#Loss Function) can help adjust learning rate
+- V.[loss](../Deep%20Learning.md#Loss%20Function) can help adjust learning rate
 	- Or indicate when to stop training
 
 ![under-over-fitting](../../../img/under-over-fitting.png)
\ No newline at end of file
diff --git a/AI/Neural Networks/CNN/GAN/cGAN.md b/AI/Neural Networks/CNN/GAN/cGAN.md
index bded0fe..54a1f85 100644
--- a/AI/Neural Networks/CNN/GAN/cGAN.md
+++ b/AI/Neural Networks/CNN/GAN/cGAN.md
@@ -1,6 +1,6 @@
 Conditional [GAN](GAN.md)
 
-- Hard to control with [AM](../Interpretation.md#Activation Maximisation)
+- Hard to control with [AM](../Interpretation.md#Activation%20Maximisation)
 	- Unconditional [GAN](GAN.md)
 - Condition synthesis on a class label
 	- Concatenate unconditional code with conditioning vector
diff --git a/AI/Neural Networks/MLP/Back-Propagation.md b/AI/Neural Networks/MLP/Back-Propagation.md
index dafe717..442a775 100644
--- a/AI/Neural Networks/MLP/Back-Propagation.md
+++ b/AI/Neural Networks/MLP/Back-Propagation.md
@@ -22,16 +22,14 @@ $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
 $$
 
 #### From 4
-$$\frac{\partial\mathfrak E(n)}{\partial e_j(n)}=
-e_j(n)$$
+$$\frac{\partial\mathfrak E(n)}{\partial e_j(n)}=e_j(n)$$
+
 #### From 1
 $$\frac{\partial e_j(n)}{\partial y_j(n)}=-1$$
 #### From 3 (note prime)
-$$\frac{\partial y_j(n)}{\partial v_j(n)}=
-\varphi_j'(v_j(n))$$
+$$\frac{\partial y_j(n)}{\partial v_j(n)}=\varphi_j'(v_j(n))$$
 #### From 2
-$$\frac{\partial v_j(n)}{\partial w_{ji}(n)}=
-y_i(n)$$
+$$\frac{\partial v_j(n)}{\partial w_{ji}(n)}=y_i(n)$$
 
 ## Composite
 $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
@@ -40,10 +38,9 @@ $$\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}=
 y_i(n)
 $$
 
-$$\Delta w_{ji}(n)=
--\eta\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}$$
-$$\Delta w_{ji}(n)=
-\eta\delta_j(n)y_i(n)$$
+$$\Delta w_{ji}(n)=-\eta\frac{\partial\mathfrak E(n)}{\partial w_{ji}(n)}$$
+$$\Delta w_{ji}(n)=\eta\delta_j(n)y_i(n)$$
+
 ## Gradients
 #### Output Local
 $$\delta_j(n)=-\frac{\partial\mathfrak E (n)}{\partial v_j(n)}$$
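The CNN.md hunk notes that validation loss can indicate when to stop training. A minimal early-stopping sketch of that idea follows; `train_epoch` and `val_loss` are assumed placeholder callables, and the patience value is arbitrary, none of it taken from the vault:

```python
def train_with_early_stopping(train_epoch, val_loss, max_epochs=100, patience=5):
    """Stop when validation loss stops improving, per the CNN.md training notes.

    train_epoch: callable running one pass over the training data (assumed).
    val_loss: callable returning loss on a held-out validation set (assumed).
    """
    best, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch()
        loss = val_loss()
        if loss < best:
            best, bad_epochs = loss, 0   # still generalising: keep going
        else:
            bad_epochs += 1              # validation loss rising: likely overfitting
            if bad_epochs >= patience:
                break                    # early stop
    return best

# Example with dummy callables (illustrative only): stops after the
# validation loss has failed to improve for three consecutive epochs.
losses = iter([1.0, 0.8, 0.7, 0.72, 0.75, 0.8, 0.9, 1.0])
best = train_with_early_stopping(lambda: None, lambda: next(losses),
                                 max_epochs=8, patience=3)
```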
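The cGAN.md hunk describes conditioning synthesis on a class label by concatenating the unconditional code with a conditioning vector. A minimal PyTorch sketch of that concatenation; the latent size, class count, and toy generator head are illustrative assumptions, not from the notes:

```python
import torch

latent_dim, n_classes, batch = 100, 10, 8

z = torch.randn(batch, latent_dim)                   # unconditional latent code
labels = torch.randint(0, n_classes, (batch,))       # class labels to condition on
cond = torch.nn.functional.one_hot(labels, n_classes).float()

# The cGAN conditioning step: concatenate code and conditioning vector
g_input = torch.cat([z, cond], dim=1)                # shape (batch, latent_dim + n_classes)

generator = torch.nn.Sequential(                     # toy generator head (assumed)
    torch.nn.Linear(latent_dim + n_classes, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 28 * 28),
    torch.nn.Tanh(),
)
fake = generator(g_input)                            # one sample per (z, label) pair
```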
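Finally, the Back-Propagation.md hunk derives the delta rule $\Delta w_{ji}(n)=\eta\delta_j(n)y_i(n)$ with $\delta_j(n)=e_j(n)\varphi_j'(v_j(n))$. A NumPy sketch of one such update for a single output neuron; the sigmoid choice for $\varphi$, the input size, and the learning rate are assumptions for illustration:

```python
import numpy as np

def phi(v):
    return 1.0 / (1.0 + np.exp(-v))       # y_j(n) = phi_j(v_j(n)), sigmoid assumed

def phi_prime(v):
    s = phi(v)
    return s * (1.0 - s)                  # phi_j'(v_j(n)) for the sigmoid

def delta_rule_step(w, y_in, d, eta=0.1):
    v = w @ y_in                          # v_j(n): weighted sum of inputs y_i(n)
    e = d - phi(v)                        # e_j(n) = d_j(n) - y_j(n)
    delta = e * phi_prime(v)              # delta_j(n) = e_j(n) phi_j'(v_j(n))
    return w + eta * delta * y_in         # Delta w_ji(n) = eta delta_j(n) y_i(n)

# Example: repeated steps drive the neuron's output toward the target d = 1
w = np.zeros(3)
y_in = np.array([1.0, 0.5, -0.2])
for _ in range(10):
    w = delta_rule_step(w, y_in, d=1.0)
```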