Similar to: "Neural network: Amore adaptative vs batch why the results are so different?"

Displaying 20 results from an estimated 600 matches similar to: "Neural network: Amore adaptative vs batch why the results are so different?"

2011 Feb 09
0
a question about AMORE (newff, sim), pls help
Hi All, I am trying to test the neural network package AMORE. I normalized my data first; the input data is X = [x1, x2, x3], where x1, x2 and x3 are each 100-row, one-column vectors, and the output data Y is a 100-row, one-column vector. My network has neurons = c(3, 2, 2, 1), i.e. two hidden layers, with 3 nodes in the input layer and 1 in the output layer. Once the network is trained, I use sim(result$net, z) to
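A minimal sketch of that workflow, assuming inputs P (100 x 3), targets T (100 x 1) and new inputs Z are already defined; the arguments follow newff()'s documented interface:

    library(AMORE)
    # 3 inputs -> two hidden layers of 2 -> 1 output, as in neurons = c(3,2,2,1)
    net <- newff(n.neurons = c(3, 2, 2, 1),
                 learning.rate.global = 1e-2, momentum.global = 0.5,
                 error.criterium = "LMS", hidden.layer = "tansig",
                 output.layer = "purelin", method = "ADAPTgdwm")
    result <- train(net, P, T, error.criterium = "LMS",
                    report = TRUE, show.step = 100, n.shows = 5)
    y.hat <- sim(result$net, Z)   # Z must have the same 3 columns as P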
2010 Jul 13
0
Neural Network package AMORE and a weight decay
Hi, I want to use the neural network package AMORE, and I don't find a weight-decay option in the documentation. Could someone tell me whether it is possible to add a regularization parameter (also known as weight decay) to the training method? Is it possible to alter the gradient-descent rule for that? Thanks, Ron
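As the poster notes, AMORE's documented training interface exposes no weight-decay argument, so adding one would mean altering the package's gradient-descent code. For comparison, nnet builds the option in:

    library(nnet)
    # decay adds an L2 (weight-decay) penalty to the fitting criterion
    fit <- nnet(Species ~ ., data = iris, size = 3, decay = 5e-4, trace = FALSE)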
2004 Jan 09
3
ipred and lda
Dear all, can anybody help me with the program below? The function predict.lda seems to be defined but cannot be used by errorest. The R version is 1.7.1. Thanks in advance, Stefan
----------------
library("MASS")
library("ipred")
data(iris3)
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1],
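The usual resolution (sketched here along the lines of the ipred documentation, not the poster's code) is to hand errorest() a predict wrapper, since predict.lda() returns a list rather than bare class labels:

    library(MASS)
    library(ipred)
    # errorest() needs predicted class labels; extract them from predict.lda's list
    mypredict.lda <- function(object, newdata)
      predict(object, newdata = newdata)$class
    errorest(Species ~ ., data = iris, model = lda, predict = mypredict.lda)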
2002 Mar 17
3
apply problem
> data(iris)
# iris3 is first 3 rows of iris
> iris3 <- iris[1:3,]
# z compares row 1 to each row of iris3 and is correctly computed
> z <- c(F,F,F)
> for(i in seq(z)) z[i] <- identical(iris3[1,], iris3[i,])
> z
[1] TRUE FALSE FALSE
# this should do the same but is incorrect
> apply(iris3, 1, function(x) identical(x, iris3[1,]))
    1     2     3
FALSE FALSE FALSE
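The discrepancy arises because apply() coerces each row of the data frame to an atomic vector before calling the function, so identical() compares a vector against a one-row data frame and always returns FALSE. A sketch that keeps both arguments as data frames:

    # iterate over row indices so both sides of identical() stay data frames
    sapply(seq_len(nrow(iris3)), function(i) identical(iris3[1, ], iris3[i, ]))
    # [1]  TRUE FALSE FALSE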
2000 Mar 08
3
Reading data for discriminant analysis
Dear R users, I want to do discriminant analysis on my data. I have successfully followed the discriminant analysis in V & R on the iris data:
> ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
> ir.species <- c(rep("s",50), rep("c",50), rep("v",50))
> a <- lda(log(ir), ir.species)
> a$svd^2/sum(a$svd^2)
[1] 0.996498601 0.003501399
> a.x <-
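For reference, the V&R example typically continues by projecting onto the discriminants; a minimal sketch of that next step:

    library(MASS)
    ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
    ir.species <- factor(c(rep("s",50), rep("c",50), rep("v",50)))
    a <- lda(log(ir), ir.species)
    pred <- predict(a)   # gives $class, $posterior and $x (discriminant scores)
    head(pred$x)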
2004 Mar 29
1
Interpreting knn Results
Maybe you should show your colleague how to access help pages in R? Right in ?knn, it says: prob: If this is true, the proportion of the votes for the winning class are returned as attribute 'prob'. So 1.0 means all three nearest neighbours are of the 'winning' (i.e., predicted) class, and 0.66667 means 2 out of the 3 nearest neighbours are of the winning class, etc. Andy
> From: Ko-Kang
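A short sketch of what ?knn describes, using the iris3 split from its help page:

    library(class)
    train <- rbind(iris3[1:25,,1], iris3[1:25,,2], iris3[1:25,,3])
    test  <- rbind(iris3[26:50,,1], iris3[26:50,,2], iris3[26:50,,3])
    cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
    fit <- knn(train, test, cl, k = 3, prob = TRUE)
    attr(fit, "prob")   # fraction of the k votes won by the predicted class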
2004 Nov 02
2
lda
Hi!! I am trying to analyze some of my data using linear discriminant analysis. I tried the following example code from Venables and Ripley, but R does not seem to be happy with it.
============================
library(MASS)
library(stats)
data(iris3)
ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
ir.species <- factor(c(rep("s",50), rep("c",50), rep("v",50)))
2009 Nov 02
1
modifying predict.nnet() to function with errorest()
Greetings, I am having trouble calculating artificial neural network misclassification errors using errorest() from the ipred package. I have had no problems estimating the values with randomForest() or svm(), but can't seem to get it to work with nnet(). I believe this is due to the output of the predict.nnet() function within cv.factor(). Below is a quick example of the problem I'm
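One workaround (a sketch, not the poster's example) is a predict wrapper that forces predict.nnet() to return factor labels instead of its default probability matrix:

    library(nnet)
    library(ipred)
    # type = "class" gives a character vector; coerce it to a factor for errorest()
    mypredict.nnet <- function(object, newdata)
      factor(predict(object, newdata = newdata, type = "class"),
             levels = levels(iris$Species))
    errorest(Species ~ ., data = iris, model = nnet,
             predict = mypredict.nnet, size = 2, trace = FALSE)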
2009 Jul 23
1
Activation Functions in Package Neural
Hi, I am trying to build a VERY basic neural network as practice before hopefully increasing my scope. To do so, I have been using the package "neural" and the MLP-related functions (mlp and mlptrain) within that package. So far, I have created a basic network, but I have been unable to change the default activation function. If someone has a suggestion, please advise. The goal of the
2005 Jul 27
1
how to get actual value from predict in nnet?
Dear All, after following the nnet help I got the network trained and, excitedly, obtained predictions for other samples. It is a two-class data set; I used "N" and "P" to label the two classes. My question is: how do I get the predicted numerical value for each sample, not just the label (either "N" or "P")? Thanks! FYI: The nnet example I
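The numeric outputs are available through predict.nnet()'s type argument; a minimal sketch, with d standing in for the poster's (unseen) two-class training data:

    library(nnet)
    # d: hypothetical data frame with a two-level factor response y ("N"/"P")
    fit <- nnet(y ~ ., data = d, size = 2, trace = FALSE)
    predict(fit, newdata = d, type = "raw")    # numeric network output per sample
    predict(fit, newdata = d, type = "class")  # just the "N"/"P" labels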
2009 May 30
0
what is 'class.ind' here?
Hi. There is an example in the nnet help which is pasted in below. I am not sure how they are generating 'targets'. What is the class.ind() function doing? The help docs for it say "Generates a class indicator function from a given factor." I tried putting a simple vector of the "classes" into test.cl (below) but I get an error of "(list) object
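class.ind() expands a factor (or class vector) into a 0/1 indicator matrix with one column per level, which is the target format used when fitting nnet with multiple output units. A tiny sketch:

    library(nnet)
    cl <- factor(c("a", "b", "a", "c"))
    class.ind(cl)
    # a 4 x 3 matrix of 0/1 indicators with columns "a", "b", "c";
    # row i has a 1 in the column matching cl[i]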
2009 Nov 17
1
Error running lda example: Session Info
> library(MASS)
> Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
+                    Sp = rep(c("s","c","v"), rep(50,3)))
> train <- sample(1:150, 75)
> table(Iris$Sp[train])
 c  s  v
22 23 30
> z <- lda(Sp ~ ., Iris, prior = c(1,1,1)/3, subset = train)
Error in if (targetlist[i] == stringname) { : argument is of length
2012 Jan 24
0
Problem training a neural network with "neuralnet" library
Hi, I am having difficulty in training a neural network using the package "neuralnet". My neural network has 2 input neurons (covariates), 1 hidden layer with 2 hidden neurons and 2 output neurons (responses). I am training my neural network with a dataset that has been transformed so that each column is of type "numeric". The difficulty I am facing is that the responses of
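A self-contained sketch of that architecture with neuralnet, on made-up data (the package accepts multiple responses on the left-hand side of the formula):

    library(neuralnet)
    set.seed(1)
    d <- data.frame(x1 = runif(100), x2 = runif(100))
    d$y1 <- d$x1 + d$x2
    d$y2 <- d$x1 * d$x2
    # 2 covariates -> 1 hidden layer of 2 neurons -> 2 responses
    fit <- neuralnet(y1 + y2 ~ x1 + x2, data = d,
                     hidden = 2, linear.output = TRUE)
    head(compute(fit, d[, c("x1", "x2")])$net.result)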
2006 Apr 13
5
Questions on formula in princomp
I hope this time I'm using the "iris" dataset correctly:
ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
lir <- data.frame(log(ir))
names(lir) <- c("a","b","c","d")
I'm trying to understand the meaning of expressions like "~ a+b+c+d", used with princomp, e.g. princomp(~ a+b+c+d, data=lir, cor=T). By inspection, it
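With no response on the left-hand side, the formula simply names (and optionally transforms) the columns to analyse, so the two calls below should agree; a quick check using lir as built above:

    pc1 <- princomp(~ a + b + c + d, data = lir, cor = TRUE)
    pc2 <- princomp(lir, cor = TRUE)
    all.equal(pc1$sdev, pc2$sdev)   # expected TRUE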
2012 Jan 17
0
Logistical or Linear Output in AMORE
Is there any option in AMORE to switch the output to a logistic or a linear one, like linout=TRUE in nnet? Please give me some help, thanks.
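In AMORE the output activation is fixed when the net is built, via newff()'s output.layer argument; "purelin" is the linear analogue of nnet's linout = TRUE and "sigmoid" the logistic one. A sketch:

    library(AMORE)
    net <- newff(n.neurons = c(2, 4, 1),
                 learning.rate.global = 1e-2, momentum.global = 0.5,
                 error.criterium = "LMS", hidden.layer = "tansig",
                 output.layer = "purelin",   # or "sigmoid" for logistic output
                 method = "ADAPTgdwm")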
2006 Dec 04
0
Package AMORE
Having installed the AMORE package in R, where can I obtain the folder with the source code, so that I can add code to the deltaE function? Nuno Vale
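One way to obtain a package's sources for editing (assuming a CRAN mirror is set and carries the source tarball) is download.packages() with type = "source":

    # fetch the source tarball into the working directory and unpack it;
    # the R/ subdirectory holds the function definitions (e.g. deltaE)
    p <- download.packages("AMORE", destdir = ".", type = "source")
    untar(p[1, 2])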
2004 Aug 01
1
Neural Net Validation Sub Set
Dear R users, I have been playing with the nnet and predict.nnet functions and have two questions. 1) Is it possible to specify a validation set as well as a training set in the nnet function, before using predict.nnet to test the nnet object against new data? 2) Is it possible to specify more than one layer of neurons? Thanks in advance, Matt Oliver
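For reference, nnet fits exactly one hidden layer and has no built-in validation-set argument; the usual workaround is a manual split, sketched here on iris:

    library(nnet)
    set.seed(1)
    idx <- sample(nrow(iris), 100)   # training rows
    fit <- nnet(Species ~ ., data = iris[idx, ], size = 3, trace = FALSE)
    # accuracy on the held-out validation rows
    mean(predict(fit, iris[-idx, ], type = "class") == iris$Species[-idx])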
2008 Apr 26
1
Variables selection in Neural Networks
Hi folks, I want to apply a neural network to a data set to classify the observations into different classes based on a particular response variable. The idea is to try different network models, modifying the number of neurons in the hidden layer to control overfitting. But to select the best model, how can I choose the relevant variables? How can I eliminate those that are not significant for
2006 Mar 01
0
TC with bandwith adaptative for Wireless network
Hello, I am interested in implementing QoS for a wireless network carrying varied services such as VoIP, ssh, ftp, www, mail, etc. As we know, this kind of network has high variation in link bandwidth. We want to introduce a way of adapting the qdisc parameters that depend on bandwidth to these variations. In summary, an adaptive design with the bandwidth of each
2007 Nov 11
0
Patch to sshd match
Please find attached a patch against openssh-4.7p1. It extends Match in sshd_config. The point is that it is sometimes easier (and more secure) to match on NOT something. A criterion may be preceded by ! which inverts the condition, thus:
Match !Group sysadmins
ForceCommand /usr/bin/sftp
forces use of sftp on any user who is not a system administrator. A !! has the