Slightly differing output from a PyBrain neural network despite consistent initialisation?
I am working on a feed-forward network in PyBrain. To let me compare the effects of varying certain parameters, I have initialised the network weights myself. I have done this under the assumption that if the weights are the same, then the output should be the same. Is this assumption incorrect? Below is the code I use to set up the network:
n = FeedForwardNetwork()

inLayer = LinearLayer(7, name="in")
hiddenLayer = SigmoidLayer(1, name="hidden")
outLayer = LinearLayer(1, name="out")

n.addInputModule(inLayer)
n.addModule(hiddenLayer)
n.addOutputModule(outLayer)

in_to_hidden = FullConnection(inLayer, hiddenLayer, name="in-to-hidden")
hidden_to_out = FullConnection(hiddenLayer, outLayer, name="hidden-to-out")
n.addConnection(in_to_hidden)
n.addConnection(hidden_to_out)
n.sortModules()

in_to_hidden_params = [
    0.27160018, -0.30659429, 0.13443352, 0.4509613,
    0.2539234, -0.8756649, 1.25660715
]
hidden_to_out_params = [0.89784474]

net_params = in_to_hidden_params + hidden_to_out_params
n._setParameters(net_params)

trainer = BackpropTrainer(n, ds, learningrate=0.01, momentum=0.8)
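To see why the assumption about fixed weights is reasonable for a pure forward pass, here is a minimal NumPy sketch of the same 7-1-1 topology (an illustration only, not PyBrain's own code) using the weight values from the question:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Same weights as in the question: 7 inputs -> 1 sigmoid hidden unit -> 1 linear output.
in_to_hidden = np.array([0.27160018, -0.30659429, 0.13443352, 0.4509613,
                         0.2539234, -0.8756649, 1.25660715])
hidden_to_out = 0.89784474

def forward(x):
    # Hidden layer: weighted sum of inputs passed through a sigmoid.
    hidden = sigmoid(np.dot(in_to_hidden, x))
    # Output layer is linear, so it is just a scaling of the hidden activation.
    return hidden_to_out * hidden

x = np.ones(7)
# With fixed weights, a forward pass involves no randomness at all:
assert forward(x) == forward(x)
```

So any run-to-run variation has to come from somewhere other than the weights themselves, i.e. from the training procedure.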
Update:
It looks like, even when seeding the random number generator, reproducibility is still an issue. See the GitHub issue here.
"I have done this under the assumption that if the weights are the same, the output should be the same."
The assumption is correct, but your code is not doing that. You are training the weights, so they do not end up being the same. Stochastic training methods permute the training samples, and this permutation leads to different results. In particular, BackpropTrainer does so:
def train(self):
    """Train the associated module for one epoch."""
    assert len(self.ds) > 0, "Dataset cannot be empty."
    self.module.resetDerivatives()
    errors = 0
    ponderation = 0.
    shuffledSequences = []
    for seq in self.ds._provideSequences():
        shuffledSequences.append(seq)
    shuffle(shuffledSequences)
If you want repeatable results, seed the random number generators.
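The shuffle in the trainer above comes from Python's random module, so seeding it (and NumPy's generator, which PyBrain uses for weight initialisation) before each run makes the sample order repeatable. A minimal sketch of the principle, using a plain list in place of the trainer's sequences:

```python
import random
import numpy as np

# Seed both generators before building and training the network.
random.seed(0)
np.random.seed(0)

# random.shuffle is what permutes the training sequences; with a fixed
# seed the permutation is the same on every run.
data = list(range(10))
random.shuffle(data)
first_order = list(data)

# Re-seed and shuffle again: we get back the identical order.
random.seed(0)
data = list(range(10))
random.shuffle(data)
assert data == first_order
```

Note the per-version caveat from the update above: depending on your PyBrain version, seeding alone may still not give bit-identical runs.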