
1. PyBrain poor results
I am wondering if I am doing something wrong or if the results are really that poor. Let's take the simplest NN example, as shown in the documentation:

```python
>>> from pybrain.tools.shortcuts import buildNetwork
>>> from pybrain.datasets import SupervisedDataSet
>>> from pybrain.supervised.trainers import BackpropTrainer
>>> net = buildNetwork(2, 3, 1, bias=True)
>>> ds = SupervisedDataSet(2, 1)
>>> ds.addSample((0, 0), (0,))
>>> ds.addSample((0, 1), (1,))
>>> ds.addSample((1, 0), (1,))
>>> ds.addSample((1, 1), (0,))
>>> trainer = BackpropTrainer(net, ds)
>>> trainer.trainUntilConvergence()
>>> print net.activate((0, 0))
>>> print net.activate((0, 1))
>>> print net.activate((1, 0))
>>> print net.activate((1, 1))
```

For example:

```python
>>> print net.activate((1, 0))
[ 0.37855891]
>>> print net.activate((1, 1))
[ 0.6592548]
```

The expected output was 0. I know I can round, obviously, but I would still expect the network to be a lot more precise on such a simple example. You could call this "working", but I suspect I am missing something important, because as it stands it is very hard to use.

The thing is, if you pass `verbose=True` to the trainer, you can see pretty small errors (like `Total error: 0.0532936260399`).

If the network's error is about 5%, how can the `activate` output be so far off afterwards?

I use PyBrain for something much more complex, obviously, and I have the same problem there: roughly 50% of my test samples come out wrong even though the network reports an error of about 0.09.

Any help, please?
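For concreteness, here is a minimal sketch of the gap the question describes, contrasting the trainer's reported mean squared error with a hard rounded 0/1 check over the same four samples. It assumes the standard PyBrain imports shown above; a plain `trainer.train()` loop replaces `trainUntilConvergence()` only to keep the run self-contained, and the exact numbers will vary between runs.

```python
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Same XOR setup as in the question.
net = buildNetwork(2, 3, 1, bias=True)
ds = SupervisedDataSet(2, 1)
for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                    ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(inp, target)

trainer = BackpropTrainer(net, ds)
for epoch in range(1000):
    mse = trainer.train()  # one epoch; returns the average error on ds

# Compare the reported (squared) error with a rounded 0/1 decision per sample.
correct = 0
for inp, target in ds:
    out = net.activate(inp)[0]
    correct += int(round(out) == target[0])
    print inp, '->', out, '(rounded: %d, expected: %d)' % (round(out), target[0])
print 'final MSE: %.4f, rounded accuracy: %d/4' % (mse, correct)
```

The sketch makes the question's point measurable: a small *average* squared error over the dataset does not by itself guarantee that rounding recovers the right class for every sample, since one output can sit far from its target while the rest are nearly exact.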
 
