similar to: Appropriate method for sharing data across functions

Displaying 20 results from an estimated 700 matches similar to: "Appropriate method for sharing data across functions"

2012 Mar 25
2
avoiding for loops
I have data that looks like this:

> df1
  group id
1   red  A
2   red  B
3   red  C
4  blue  D
5  blue  E
6  blue  F

I want a list of the groups containing vectors with the ids. I am avoiding subset(), as it is only recommended for interactive use. Here's what I have so far:

df1 <- data.frame(group=c("red", "red", "red", "blue",
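A loop-free way to get exactly that list (a sketch, not from the thread) is split(), which returns a list of id vectors keyed by group:

df1 <- data.frame(group = c("red", "red", "red", "blue", "blue", "blue"),
                  id    = c("A", "B", "C", "D", "E", "F"))
ids_by_group <- split(df1$id, df1$group)   # list(blue = D E F, red = A B C)
ids_by_group$red
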
2003 May 08
3
Avoiding loops to spare time and memory
Is it possible to avoid the loop in the following function (or make the function otherwise more efficient), and can someone point me to a possible solution? (It would be great if hours could be reduced to seconds :-).

# ---------------------------------------------
RanEigen=function(items=x,cases=y,sample=z) {
  X=matrix(rnorm(cases*items),nrow=cases,byrow=F)
  S=crossprod(X-rep(1,cases) %*%
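The excerpt cuts off before the loop, but assuming RanEigen generates random eigenvalues for parallel analysis, a compact version might look like the sketch below; the per-replication work is vectorized inside cor()/eigen(), and the outer sapply() merely collects replications:

ran_eigen <- function(items, cases, nrep) {
  sapply(seq_len(nrep), function(i) {
    X <- matrix(rnorm(cases * items), nrow = cases)
    eigen(cor(X), symmetric = TRUE, only.values = TRUE)$values
  })
}
ev <- ran_eigen(items = 10, cases = 200, nrep = 50)   # one column of eigenvalues per replication
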
2003 Jun 03
3
Update VR_7.1-6
Updating VR by downloading VR_7.1-6.zip and using install.packages (from local zip files) fails with the following error message:

Error in file(file, "r") : unable to open connection
In addition: Warning message:
cannot open file `VR/DESCRIPTION'

Other packages can be installed without problems, except for dse_2003.4-1, which gives a similar error message. Why? Operating System:
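For reference, a minimal sketch of installing a Windows binary package from a local zip (file name and location assumed); that error frequently points to an incomplete or corrupted download rather than to install.packages itself:

install.packages("VR_7.1-6.zip", repos = NULL, type = "win.binary")
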
2012 May 07
1
Repeating
Dear All, I have code which calculates the result of Ripley's K function for my data. I want to repeat this process 999 times, but I get an error when I use a "for (i in ...)" loop. Is there any way to repeat this analysis 999 times? Here is the code I used:

data4 <- matrix(c(sample(id),data1),203,3)
a <- data4[,1]
random.case=data4[a==0,]
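One way to repeat an analysis many times without writing the loop index by hand is replicate(). A sketch of the pattern (the statistic and data below are illustrative stand-ins for the Ripley's K computation):

one_perm <- function(ids, values) {
  shuffled <- sample(ids)            # permute the labels
  mean(values[shuffled == 0])        # placeholder for the Ripley's K computation
}
set.seed(1)
ids    <- rbinom(203, 1, 0.5)        # illustrative data
values <- rnorm(203)
k_sim  <- replicate(999, one_perm(ids, values))
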
2003 May 21
1
help on spatial data
Hi, I have a dataset with x and y coordinates, and at each point I have an identity; in some cases there can be more than one identity per point. My dataset is something like this:

> x <- rep(c(1:4),4)
> y <- rep(c(1:4),c(4,4,4,4))
> area1 <- sample(factor(rep(c("a","b","c","d"),4)))
> area2 <-
2003 Jul 16
2
numerical differentiation in R? (for optim "SANN" parscale)
Dear R users, I am running a maximum likelihood model with optim. I chose the simulated annealing method (method="SANN"). SANN is not performing badly, but I guess it would be much more effective if I could set the `parscale' parameter. The help says: `parscale' A vector of scaling values for the parameters. Optimization is performed on `par/parscale' and these
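A sketch of one way to combine numerical differentiation with parscale (assuming the numDeriv package is installed; the toy objective and the inverse-gradient heuristic are illustrative, not from the post):

library(numDeriv)                        # assumed to be installed
f  <- function(p) (p[1] - 2)^2 + 1e4 * (p[2] - 0.01)^2   # toy objective on very different scales
p0 <- c(1, 0)
g  <- grad(f, p0)                        # numerical gradient at the start values
parscale <- 1 / pmax(abs(g), 1e-8)       # one crude heuristic: larger gradient => smaller scale
optim(p0, f, method = "SANN", control = list(parscale = parscale, maxit = 5000))
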
2005 Apr 26
2
"wild" function example in optim
Dear all, Firstly, I do apologize if my question is simple and posted in the wrong place, but I had no reply from the R-help mailing list (maybe it is too simple!). I was wondering why parscale is set to 20 in the "wild" function example used in ?optim. This function has only one parameter, and if we set parscale equal to 1 then the solution near the global minimum is not found. I
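For context, the example being discussed, reproduced approximately from ?optim (treat the constants as from memory):

fw  <- function(x) 10 * sin(0.3 * x) * sin(1.3 * x^2) + 0.00001 * x^4 + 0.2 * x + 80
res <- optim(50, fw, method = "SANN",
             control = list(maxit = 20000, temp = 20, parscale = 20))
res$par   # should land near the global minimum around x = -15.8
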
2005 Apr 19
1
Optim(...parscale...)
Hi there, The optim(par, fn, ...parscale...) function in R requires 'parscale' which is defined as: "A vector of scaling values for the parameters. Optimisation is performed on 'par/parscale' and these should be comparable in the sense that a unit change in any element (??) produces a unit change in the scaled value". I am just not understanding the
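A small illustration of that definition (toy objective, not from the post): the two parameters differ by several orders of magnitude, so parscale is set to their rough magnitudes and optim works internally with par/parscale, where a unit step means "about 1000" for the first parameter and "about 0.1" for the second:

f <- function(p) (p[1] - 5000)^2 / 1e6 + (p[2] - 0.5)^2 * 100   # toy objective
optim(c(1000, 0.1), f, method = "BFGS",
      control = list(parscale = c(1000, 0.1)))   # rough magnitudes of the parameters
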
2008 Mar 23
2
scaling problems in "optim"
Dear R users, I am trying to figure out the control parameters in "optim", especially "fnscale" and "parscale". In the R documentation:
------------------------------------------------------
fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on
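A minimal illustration of fnscale (toy log-likelihood, not from the post): optim minimizes fn/fnscale, so a negative fnscale turns maximization into the minimization optim expects:

ll <- function(p) sum(dnorm(c(1.2, 0.8, 1.1), mean = p[1], sd = exp(p[2]), log = TRUE))
optim(c(0, 0), ll, control = list(fnscale = -1))   # maximize the log-likelihood
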
2008 Jul 21
1
Control parameter of the optim( ): parscale
Hi everybody, I am using the L-BFGS-B method of the mle2() function to estimate the values of 6 parameters. mle2 uses the methods implemented in optim. As I understand it from the descriptions available online, one can use the parscale parameter to tell R roughly what the values of the estimated parameters should be... Could somebody please help me understand what one actually has to do with the
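A sketch of how this is commonly done, assuming bbmle::mle2 forwards its control list to optim (the data, starting values, and parscale entries below are illustrative, not from the post):

library(bbmle)                                   # assumed to be installed
set.seed(1)
x   <- rnorm(200, mean = 50, sd = 2)             # illustrative data
nll <- function(mu, logsd) -sum(dnorm(x, mean = mu, sd = exp(logsd), log = TRUE))
fit <- mle2(nll, start = list(mu = 10, logsd = 0), method = "L-BFGS-B",
            lower = c(mu = 0, logsd = -5),
            control = list(parscale = c(mu = 50, logsd = 1)))  # expected magnitudes of the estimates
coef(fit)
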
2003 Jul 18
3
question about formulating a nls optimization
Dear list, I'm migrating a project from Matlab to R, and I'm facing a relatively complicated problem for nls. My objective function is below:

objFun <- function(yEx,xEx,tEx,gamma,theta,kappa){
  yTh <- pdfDY(xEx,tEx,gamma,theta,kappa)
  sum(log(yEx/yTh)^2)
}

The equation is yTh = P(xEx,tEx) + noise. I collect my data in:

data <-
2008 Aug 20
3
bug in lme4?
Dear all, I found a problem with 'lme4'. Basically, once you load the package 'aod' (Analysis of Overdispersed Data), the functions 'lmer' and 'glmer' don't work anymore:

library(lme4)
(fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy))
(gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd), family = binomial, data
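A sketch of the usual first diagnostic when a later-loaded package appears to break another: call the functions with an explicit namespace prefix (whether this resolves the specific lme4/aod clash reported here is not guaranteed; cbpp is the dataset used in the lme4 examples):

library(lme4)
library(aod)
fm1 <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
gm1 <- lme4::glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                   family = binomial, data = cbpp)
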
2012 Aug 18
1
Parameter scaling problems with optim and Nelder-Mead method (bug?)
Dear all, I'm having some problems getting optim with method="Nelder-Mead" to work properly. It seems like there is no way of controlling the step size, and the step size seems to depend on the *difference* between the initial values, which makes no sense. Example:

f=function(xy, mu1, mu2) {
  print(xy)
  dnorm(xy[1]-mu1)*dnorm(xy[2]-mu2)
}
f1=function(xy) -f(xy, 0,
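optim's Nelder-Mead exposes no explicit step-size control, but parscale changes the scale on which the initial simplex is built, which is one indirect way to influence the steps. A toy sketch (not the poster's function):

f1 <- function(xy) (xy[1] - 5)^2 + (xy[2] - 5000)^2   # parameters on very different scales
optim(c(1, 1000), f1, method = "Nelder-Mead",
      control = list(parscale = c(1, 1000)))
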
2011 May 25
1
L-BFGS-B and parscale in optim()
Hi, When using method L-BFGS-B along with a parscale argument, should the lower and upper bounds provided be on the scaled or unscaled values? Thanks. Cheers, -- Seb
2011 Aug 14
2
Scaling problem in optim()
I am using the function optim and I get the error message ABNORMAL_TERMINATION_IN_LNSRCH. One possible reason for this is a scaling problem, so I used parscale to scale the parameters, but I still get the error message. For example, with parscale=c(rep(1,n), 0.01, 1, 0.01):

return(optim(c(mu1,b,k,phi), neg2loglikelihood, method = "L-BFGS-B",
2004 Oct 31
2
Obtaining fitted model information
Dear list, I am brand new to R and using Dalgaard's (2002) book Introductory Statistics with R (thus, some of my terminology may be incorrect). I am fitting regression models and I want to use Hurvich and Tsai's AICC statistic to examine my regression models. This penalty can be expressed as: 2*npar * (n/(n-npar-1)). While you can obtain AIC, BIC, and logLik, I want to impose the AICC
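A sketch of computing AICC with the quoted penalty from a fitted lm object (the dataset and model are illustrative):

fit  <- lm(dist ~ speed, data = cars)
ll   <- logLik(fit)
npar <- attr(ll, "df")     # number of estimated parameters (includes the error variance)
n    <- nobs(fit)
aicc <- -2 * as.numeric(ll) + 2 * npar * (n / (n - npar - 1))
aicc
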
2008 Jul 05
3
Editing the "..." argument
Dear all, I'd like to tweak the "..." arguments that a user can pass to my function for fitting a model. More precisely, my objective function is (really) problematic to optimize using the "optim" function. Consequently, I'd like to add to the "control" argument of the latter function an "ndeps = rep(something, #par)" and/or "parscale =
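A sketch of one common pattern (the function name and default values are illustrative): capture ... in a list and merge defaults into its control element with modifyList(), so user-supplied settings win:

fit_model <- function(par, fn, ...) {
  dots <- list(...)
  ctrl <- modifyList(list(ndeps    = rep(1e-4, length(par)),   # my defaults
                          parscale = rep(1,    length(par))),
                     if (is.null(dots$control)) list() else dots$control)
  dots$control <- ctrl                                         # user-supplied values win
  do.call(optim, c(list(par = par, fn = fn), dots))
}
fit_model(c(1, 1), function(p) sum((p - c(2, 3))^2), method = "BFGS")
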
2003 May 20
3
a quick Q about memory limit in R
Hello there, I got this error when I tried to run "data.kr <- surf.gls(2, expcov, data, d=0.7);":

"Error: cannot allocate vector of size 382890 Kb
Execution halted"

My data is a 100x100 grid. The following is the summary of "data":

> summary(data)
      x               y               z
 Min.   : 1.00   Min.   : 1.00   Min.   :-1.0172
 1st Qu.: 26.00
2010 Sep 15
1
optim with BFGS--what may lead to this, a strange thing happened
Dear R Users, in a self-written function for calculating a maximum likelihood probability (please check the function code at the bottom of this message), one value, wden, suddenly jumps to zero. Details are as follows:

w[11]=2.14 lnw =2.37 2.90 3.76 ... regw =1.96 1.77 1.82 .... wden=0.182 0.178 0.179...
w[11]=2.14 lnw=2.37 2.90 3.76 ... regw =1.96 1.77 1.82 .... wden=0.182
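A likely cause of a density value collapsing to zero is floating-point underflow. A minimal illustration (not the poster's code) of why summing log densities is preferable to multiplying raw densities:

set.seed(1)
z <- rnorm(2000)
prod(dnorm(z))            # product of many densities < 1 underflows to exactly 0
sum(dnorm(z, log = TRUE)) # the log-likelihood stays finite and usable
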
2003 Sep 21
1
Zero inflated count models
Can someone show me how to specify zero-inflated Poisson and zero-inflated negative binomial models in R? I would like to replicate the example given in Long (1997: Regression Models for Categorical and Limited Dependent Variables) in Chapter 8.5 (pp. 242-247). TIA Dirk ************************************************* Dr. Dirk Enzmann Criminological Research Institute of Lower Saxony
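One current option is a sketch along these lines, assuming the pscl package is installed; its bioChemists data are the article-count example used by Long (1997):

library(pscl)   # assumed to be installed
zip  <- zeroinfl(art ~ fem + mar + kid5 + phd + ment, data = bioChemists, dist = "poisson")
zinb <- zeroinfl(art ~ fem + mar + kid5 + phd + ment, data = bioChemists, dist = "negbin")
summary(zinb)
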