Resilient backpropagation neural network
<p>This is a follow-on question to <a href="https://stackoverflow.com/questions/2865057/resilient-backpropagation-neural-network-question-about-gradient">this post</a>. For a given neuron, I'm unclear as to how to take the partial derivative of its error with respect to one of its weights.</p> <p>Working from this <a href="http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html" rel="nofollow noreferrer">web page</a>, it's clear how the propagation works (although I'm dealing with resilient propagation). For a feedforward neural network, we have to 1) while moving forwards through the neural net, trigger the neurons, 2) from the output-layer neurons, calculate a total error, then 3) moving backwards, propagate that error through each weight in a neuron, and 4) coming forwards again, update the weights in each neuron.</p> <p>Precisely, though, these are the things I don't understand.</p> <p><strong>A)</strong> For each neuron, how do you calculate the partial derivative (<a href="http://en.wikipedia.org/wiki/Partial_derivative" rel="nofollow noreferrer">definition</a>) of the error with respect to the weight? My confusion is that, in calculus, a partial derivative is computed in terms of an n-variable function. I sort of understand <a href="https://stackoverflow.com/users/177931/ldog">ldog</a>'s and <a href="https://stackoverflow.com/users/92743/bayer">Bayer's</a> answers in <a href="https://stackoverflow.com/questions/2190732/understanding-the-neural-network-backpropagation">this post</a>, and I even understand the chain rule. But it doesn't gel when I think, precisely, of how to apply it to the results of i) a linear combiner and ii) a sigmoid activation function.</p> <p><strong>B)</strong> Using the resilient propagation approach, how would you change the bias in a given neuron? Or is there no bias or threshold in an NN trained with resilient propagation?</p> <p><strong>C)</strong> How do you propagate a total error if there are two or more output neurons? Does the total-error * neuron-weight calculation happen for each output neuron value?</p> <p>Thanks</p>
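To make the question concrete, here is a minimal sketch of what I think is going on (my own assumed example, names like <code>gradients</code> and <code>rprop_step</code> are mine): for a single sigmoid neuron with error E = 0.5 * (target - y)^2, the chain rule gives dE/dw_i = (dE/dy) * (dy/dnet) * (dnet/dw_i), where net is the linear combiner. The bias is treated as a weight on a constant input of 1, and RPROP then uses only the sign of each gradient with a per-weight step size (the sign-change handling below follows the iRPROP- variant).

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def gradients(weights, bias, inputs, target):
    """Return (dE/dw for each weight, dE/db) for one training example,
    for a single sigmoid neuron with E = 0.5 * (target - y)**2."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear combiner
    y = sigmoid(net)                                          # activation
    dE_dy = y - target            # derivative of 0.5 * (target - y)**2
    dy_dnet = y * (1.0 - y)       # sigmoid derivative
    delta = dE_dy * dy_dnet
    # dnet/dw_i = x_i, and dnet/db = 1, so the bias gradient is just delta:
    # the bias behaves like a weight on a constant input of 1.
    return [delta * x for x in inputs], delta

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One RPROP update for a single weight (or bias): only the SIGN of the
    gradient is used; the step grows while the sign repeats and shrinks
    when it flips. Returns the updated (weight, gradient, step)."""
    s = grad * prev_grad
    if s > 0:
        step = min(step * eta_plus, step_max)
    elif s < 0:
        step = max(step * eta_minus, step_min)
        grad = 0.0  # skip the update after a sign change (iRPROP- variant)
    if grad != 0.0:
        w -= math.copysign(step, grad)
    return w, grad, step
```

Under this framing, question B answers itself in the sketch: the bias gradient is computed like any weight gradient (with input fixed at 1) and fed through the same <code>rprop_step</code> update.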