Beware. Trusting Matlab for incremental-epoch training.

17Oct10

Having found a serious problem in Matlab's implementation of target normalization, we were interested in finding other cases where Matlab's problems are obvious.

Note this one from Harbin Brightman:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/292985


I built a neural network and set net.trainParam.epochs=10.

I built another neural network, set net.trainParam.epochs=1, and called train on it 10 times.

The initial weights of the two networks are the same, so I expected their forecasting results to be the same. But when I run the program, the results are different. I want to know why.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear
TrainNumber=10;
x=-2:0.01:2;
y=(exp(-1.9.*(x+0.5))).*sin(10*x);

% Network 1: train once with net.trainParam.epochs = 10
net=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');

% save the initial weights and biases so network 2 can start
% from exactly the same point
W1=net.iw{1,1};
W2=net.lw{2,1};
B1=net.b{1,1};
B2=net.b{2,1};

net.trainParam.epochs=TrainNumber;
net.trainParam.goal=0.001;
net=train(net,x,y);
SimY=sim(net,x);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Network 2: net1.trainParam.epochs = 1, but train is called 10 times,
% carrying the weights over between calls
for Count=1:TrainNumber

    net1=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');

    % restore the weights from the previous pass (the saved
    % initial weights on the first pass)
    net1.iw{1,1}=W1;
    net1.lw{2,1}=W2;
    net1.b{1,1}=B1;
    net1.b{2,1}=B2;

    net1.trainParam.epochs=1;
    net1.trainParam.goal=0.001;

    net1=train(net1,x,y);

    % save the weights for the next pass
    W1=net1.iw{1,1};
    W2=net1.lw{2,1};
    B1=net1.b{1,1};
    B2=net1.b{2,1};

end

SimY1=sim(net1,x);

figure(1)
hold on
plot(y);
plot(SimY,'r');   % network 1: 10 epochs in one call
plot(SimY1,'y');  % network 2: 1 epoch, called 10 times
hold off

% The forecasting results are not the same. I want to know why.

I consider Greg Heath an expert. Replying to the original poster's follow-up (quoted below), his answer was this:

> Thanks for your advice, but I still don't know why the two neural networks have different outputs. They have the same weights and the same number of training epochs. According to neural network theory they should have the same outputs, but in Matlab they are different, which confuses me.

I can find no coding errors.

I can duplicate your results.

There must be a bug in trainlm.

Thank you Matlab developers. It is scary.



One Response to “Beware. Trusting Matlab for incremental-epoch training.”

  1. Aitor Iraola Portillo

    I guess it might be a bit late, but…
    trainlm uses a mu value to compute the update.
    That value isn't carried over in incremental training (or when
    resuming from a checkpoint) unless you manually do it.

    Changing:

    net1=train(net1,x,y);

    to:

    if Count > 1
        net1.trainParam.mu = tr.mu(end);
    end
    [net1,tr]=train(net1,x,y);

    worked for me.
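
Folding that fix into the incremental loop above gives the sketch below. In Levenberg-Marquardt, mu is the damping term in the weight update dW = -(J'J + mu*I)^-1 * J'e, so its value directly changes the step taken; a fresh call to train resets it to its default. This is a minimal sketch assuming the same old-style newff/minmax API used in the thread; tr is the training record returned by train, and tr.mu(end) is taken to be the last damping value recorded by the previous one-epoch call.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Incremental training with the mu carry-over fix applied.
% Assumes W1, W2, B1, B2 hold the saved initial weights, as above.
for Count=1:TrainNumber

    net1=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');

    % restore weights from the previous pass
    net1.iw{1,1}=W1;
    net1.lw{2,1}=W2;
    net1.b{1,1}=B1;
    net1.b{2,1}=B2;

    net1.trainParam.epochs=1;
    net1.trainParam.goal=0.001;

    % carry mu over so each one-epoch call resumes with the damping
    % value the previous call ended with, instead of the default
    if Count > 1
        net1.trainParam.mu = tr.mu(end);
    end

    % request the training record tr so mu is available next pass
    [net1,tr]=train(net1,x,y);

    W1=net1.iw{1,1};
    W2=net1.lw{2,1};
    B1=net1.b{1,1};
    B2=net1.b{2,1};

end

With mu carried over, each one-epoch call starts with roughly the damping value a single ten-epoch run would have reached at that point, which is what a plain restart was silently throwing away.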

