### What a perceptron looks like

From the outside, a perceptron is a function that takes `n` arguments (i.e. an `n`-dimensional vector) and produces `m` outputs (i.e. an `m`-dimensional vector).

On the inside, a perceptron consists of layers of *neurons*, such that each neuron in a layer receives input from all neurons of the previous layer and uses that input to calculate a single output. The first layer consists of `n` neurons and receives the input. The last layer consists of `m` neurons and holds the output after the perceptron has finished processing the input.

### How the output is calculated from the input

Each connection from a neuron `i` to a neuron `j` has a *weight* `w(i,j)` (I'll explain later where the weights come from). The *total input* of a neuron `p` of the second layer is the sum of the weighted outputs of the neurons of the first layer:

```
total_input(p) = Σ(output(k) * w(k,p))
```

where `k` runs over all neurons of the first layer. The *activation* of a neuron is calculated from its total input by applying an *activation function*. An often-used activation function is the Fermi (logistic) function:

```
activation(p) = 1 / (1 + exp(-total_input(p)))
```

The output of a neuron is calculated from its activation by applying an *output function*. An often-used output function is the identity `f(x) = x` (and indeed some authors see the output function as part of the activation function). I will just assume that

```
output(p) = activation(p)
```

When the output of all neurons of the second layer has been calculated, use that output to calculate the output of the third layer. Iterate until you reach the output layer. (A runnable sketch of this forward pass follows at the end of this answer.)

### Where the weights come from

At first, the weights are chosen randomly. Then you select some examples for which you know the desired output. Feed each example to the perceptron and calculate the *error*, i.e. how far the actual output is from the desired output. Use that error to update the weights. One of the fastest algorithms for calculating the new weights is [Resilient Propagation](http://en.wikipedia.org/wiki/Rprop). (A simplified update sketch also follows below.)

### How to construct a perceptron

Some questions you need to address are:

1. What are the relevant characteristics of the documents, and how can they be encoded into an `n`-dimensional vector? (See the encoding sketch below.)
2. Which examples should be chosen to adjust the weights?
3. How should the output be interpreted to classify a document? Example: a single output that yields the most likely class, versus a vector that assigns probabilities to each class.
4. How many hidden layers are needed, and how large should they be? I recommend starting with one hidden layer with `n` neurons.

The first and second points are critical to the quality of the classifier. The perceptron might classify the examples correctly but fail on new documents. You will probably have to experiment. To judge the quality of the classifier, choose two sets of examples: one for training, one for validation. Unfortunately, I cannot give you more detailed hints on these questions, for lack of practical experience.
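To make the forward pass concrete, here is a minimal sketch in Python. It implements exactly the rule above (weighted sum, Fermi activation, identity output function); the function name `forward`, the weight layout, and the layer sizes in the example are mine, chosen for illustration:

```python
import math
import random

def fermi(x):
    # Fermi (logistic) activation: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, weights, inputs):
    """Propagate an input vector through the network.

    layers  -- list of layer sizes, e.g. [n, hidden, m]
    weights -- weights[l][i][j] connects neuron i of layer l
               to neuron j of layer l + 1
    inputs  -- the n input values
    """
    outputs = list(inputs)  # the input layer just passes its values on
    for l in range(len(layers) - 1):
        next_outputs = []
        for j in range(layers[l + 1]):
            # total_input(p) = Σ(output(k) * w(k,p)) over the previous layer
            total_input = sum(outputs[i] * weights[l][i][j]
                              for i in range(layers[l]))
            # the output function is the identity, so output == activation
            next_outputs.append(fermi(total_input))
        outputs = next_outputs
    return outputs

# Example: 4 inputs, one hidden layer of 4 neurons, 2 outputs,
# with randomly initialised weights (as in the training description).
layers = [4, 4, 2]
weights = [[[random.uniform(-1, 1) for _ in range(layers[l + 1])]
            for _ in range(layers[l])]
           for l in range(len(layers) - 1)]
print(forward(layers, weights, [0.0, 1.0, 1.0, 0.0]))
```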
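Rprop itself keeps an individual, adaptive step size per weight, which is more than a short sketch can do justice to. As a simplified stand-in, here is plain backpropagation with gradient descent for a network with one hidden layer; the squared-error loss and the learning rate of 0.5 are assumptions for illustration, not part of Rprop:

```python
import math
import random

def fermi(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(example, target, w_hidden, w_out, learning_rate=0.5):
    """One plain gradient-descent update for a single-hidden-layer network
    (a simpler stand-in for Rprop, which adapts a separate step size per
    weight instead of using one global learning rate)."""
    # Forward pass, as described above.
    hidden = [fermi(sum(x * w for x, w in zip(example, col)))
              for col in zip(*w_hidden)]
    output = [fermi(sum(h * w for h, w in zip(hidden, col)))
              for col in zip(*w_out)]
    # Error signal of the output layer (squared error, Fermi derivative).
    delta_out = [(o - t) * o * (1.0 - o) for o, t in zip(output, target)]
    # Error signal of the hidden layer, propagated back through w_out.
    delta_hidden = [h * (1.0 - h) * sum(d * w_out[i][j]
                                        for j, d in enumerate(delta_out))
                    for i, h in enumerate(hidden)]
    # Move each weight against its error gradient.
    for i, h in enumerate(hidden):
        for j, d in enumerate(delta_out):
            w_out[i][j] -= learning_rate * h * d
    for i, x in enumerate(example):
        for j, d in enumerate(delta_hidden):
            w_hidden[i][j] -= learning_rate * x * d

# Illustrative usage: random initial weights, then repeated updates
# on a single example until the output approaches the target.
random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
w_out = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
for _ in range(1000):
    train_step([0.0, 1.0, 1.0, 0.0], [1.0, 0.0], w_hidden, w_out)
```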
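As one possible answer to question 1 above, a common baseline encoding is a bag-of-words vector: fix a vocabulary of `n` words and map each document to an `n`-dimensional 0/1 vector. The vocabulary below is purely illustrative:

```python
def encode(document, vocabulary):
    """Encode a document as an n-dimensional vector: one entry per
    vocabulary word, 1.0 if the word occurs in the document, else 0.0."""
    words = set(document.lower().split())
    return [1.0 if word in words else 0.0 for word in vocabulary]

vocabulary = ["invoice", "payment", "meeting", "agenda"]  # illustrative
print(encode("Please find the attached invoice for payment", vocabulary))
# -> [1.0, 1.0, 0.0, 0.0]
```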
 
