xiaoyan yu
2013-May-20 18:10 UTC
[R] Neural network: AMORE adaptive vs. batch, why are the results so different?
I am using the iris example that ships with the nnet package to test AMORE. With adaptive gradient descent the outcomes are similar to nnet's. However, when I change the method in newff to batch gradient descent, even with a very large number of epochs, every sample whose expected class is 2 gets classified as class 3. In addition, the outputs (y) contain only three distinct values: 0, 0.4677313, and 0.5111955. The script is below. Please help me understand this behavior.
library(AMORE)

ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
targets <- matrix(c(rep(c(1,0,0), 50), rep(c(0,1,0), 50), rep(c(0,0,1), 50)),
                  150, 3, byrow = TRUE)
samp <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))

net <- newff(n.neurons = c(4, 2, 3),       # number of units per layer
             learning.rate.global = 1e-2,  # learning rate for every neuron
             momentum.global = 5e-4,       # momentum for every neuron
             error.criterium = "LMS",      # error criterion: least mean squares
             hidden.layer = "sigmoid",     # activation of the hidden layer neurons
             output.layer = "sigmoid",     # activation of the output layer neurons
             method = "BATCHgdwm")         # batch gradient descent with momentum
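For comparison, the adaptive run that gives results similar to nnet changes only the method argument. A minimal sketch, assuming the momentum variant "ADAPTgdwm" (AMORE also offers "ADAPTgd" without momentum); the net.adapt name is just for this illustration:

net.adapt <- newff(n.neurons = c(4, 2, 3),
                   learning.rate.global = 1e-2,
                   momentum.global = 5e-4,
                   error.criterium = "LMS",
                   hidden.layer = "sigmoid",
                   output.layer = "sigmoid",
                   method = "ADAPTgdwm")   # adaptive gradient descent with momentum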
nnfit <- train(net,                        # network to train
               ir[samp,],                  # input training samples
               targets[samp,],             # target training samples
               error.criterium = "LMS",    # error criterion
               report = TRUE,              # report progress during training
               n.shows = 10,               # number of times to report
               show.step = 40000)          # epochs between reports

y <- sim(nnfit$net, ir[samp,])
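To see the misclassification, I convert the outputs to class labels and tabulate them against the expected classes. A quick check using base R's max.col (the predicted/expected names are just for this illustration):

predicted <- max.col(y)               # column with the largest output per sample
expected <- max.col(targets[samp,])   # true class from the one-hot targets
table(expected, predicted)            # every expected class 2 lands under class 3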
Thanks,
Xiaoyan
