similar to: logistic transformation using nlminb

Displaying 20 results from an estimated 700 matches similar to: "logistic transformation using nlminb"

2024 Aug 09
2
If loop
OK. The fact it's in a function is making things clearer. Are you trying to update the values of an object from within the function, and have them available outside the function? I don't speak functional programming articulately enough, but basically v <- 1 funA <- function() { v <- v+1 } funA() cat(v) # 1 You either return the v from the function, so funB <- function() {
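A minimal sketch of the pattern being described, return the updated value and reassign it at the call site (the function names are illustrative):

    v <- 1
    funA <- function(v) {
      v + 1                 # return the incremented value
    }
    v <- funA(v)            # reassign the result outside the function
    cat(v)                  # 2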
2024 Aug 09
3
If loop
"Or use <<- assignment I think. (I usually return, but return can only return one object and I think you want two or more" You can return any number of objects by putting them in a list and returning the list. Use of "<<-" is rarely a good idea in R. -- Bert On Fri, Aug 9, 2024 at 1:53?AM CALUM POLWART <polc1410 at gmail.com> wrote: > > OK. The fact
2024 Aug 09
1
If loop
Thanks. Hmm. The loop is doing what it is supposed to do. > try1<-function(joint12=FALSE,marg1=FALSE,marg2=FALSE, + cond12=FALSE,cond21=FALSE){ + # *************************************************** + # Testing if loop + # *************************************************** + if(joint12){ + {print ("joint12"); cat(joint12,"\n")} + {print
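A cleaned-up, runnable sketch of the quoted function, assuming the intent is simply to report which logical flags are TRUE and that the truncated branches follow the same pattern:

    try1 <- function(joint12 = FALSE, marg1 = FALSE, marg2 = FALSE,
                     cond12 = FALSE, cond21 = FALSE) {
      # Testing if branches on logical flags
      if (joint12) { print("joint12"); cat(joint12, "\n") }
      if (marg1)   { print("marg1");   cat(marg1, "\n") }
    }
    try1(joint12 = TRUE)     # prints "joint12" and TRUE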
2015 Jul 03
1
Are downstream dependencies rebuilt when a package is updated on CRAN?
I was wondering: are the downstream dependencies of a package rebuilt when a package is updated on CRAN? (I'm referring to the binary packages, of course.) The reason I ask is because there are cases where this can cause problems. Suppose that when pkgB is built, it calls pkgA::makeClosure(), which returns a closure that refers to a function in pkgA. Suppose this code is in pkgA 1.0:
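A toy sketch of the closure pattern the post describes (makeClosure and the captured helper are illustrative, not the real pkgA API):

    makeClosure <- function() {
      helper <- function(x) x + 1          # stands in for a function defined in pkgA
      function(x) helper(x) * 2            # the returned closure keeps a reference to helper
    }
    f <- makeClosure()
    f(3)                                   # 8; f still points at the helper it was built with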
2011 Sep 06
2
Generalizing call to function
Hello guys, I would like to ask for help understanding what is going on in "func2". My plan is to generalize "func1", so that "func2" is expected to give the same results as "func1". Executing "func1" returns... 0.25 with absolute error < 8.4e-05 But for "func2" I get... Error in dpois(1, 0.1, 23.3065168689948, 0.000429064542600244,
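One common cause of an error like the one quoted, shown as a hedged sketch: extra arguments forwarded positionally (for example through '...') end up in dpois(), which only accepts x, lambda and log.

    f <- function(lambda, ...) dpois(1, lambda, ...)   # '...' is forwarded to dpois
    f(0.1)                    # fine
    f(0.1, 23.3, 0.000429)    # error: unused argument (0.000429)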
2004 Aug 03
1
nlminb vs optim
Dear R-help group, I have to maximize a likelihood with 40 parameters and I want to compare the MLEs given by "nlminb" (Splus2000, on Windows) with those given by "optim" (R, on Unix). 1) On Splus, the algorithm "nlminb" seems to converge (the parameters stabilize); it stops after a number of iterations (around 400) with the message: "FUNCTION EVALUATION LIMIT
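For reference, a sketch of raising the limits behind this kind of "FUNCTION EVALUATION LIMIT" message in R's nlminb; a toy 40-parameter objective stands in for the real likelihood:

    obj <- function(p) sum((p - 1)^2)
    nlminb(rep(0, 40), obj,
           control = list(eval.max = 2000, iter.max = 1500))
    # the optim() counterpart is control = list(maxit = ...)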
2012 Jul 04
2
About nlminb function
Hello, I want to use the nlminb function, but my objective function is given as a character string. I can summarize the problem using the first example in the nlminb documentation. x <- rnbinom(100, mu = 10, size = 10) hdev <- function(par) -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE)) nlminb(c(9, 12), objective=hdev) With these instructions we obtain appropriate results. If I have
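A sketch of one way to handle an objective that is available only by name or as source text (assuming the function, or its source, is in scope):

    x <- rnbinom(100, mu = 10, size = 10)
    hdev <- function(par) -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
    obj.name <- "hdev"
    nlminb(c(9, 12), objective = match.fun(obj.name))   # look the function up by name
    # if only the source text is stored:
    obj.txt <- "function(par) -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))"
    nlminb(c(9, 12), objective = eval(parse(text = obj.txt)))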
2006 Jul 23
1
How to pass eval.max from lme() to nlminb?
Dear R community, I'm fitting a complex mixed-effects model that requires numerous iterations and function evaluations. I note that nlminb accepts a list of control parameters, including eval.max. Is there a way to change the default eval.max value for nlminb when it is being called from lme? Thanks for any thoughts, Andrew -- Andrew Robinson Department of Mathematics and Statistics
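A hedged sketch, assuming a reasonably recent nlme: lmeControl() exposes msMaxEval (and msMaxIter), which are handed on to the inner optimizer; check ?lmeControl for the options available in your version.

    library(nlme)
    fit <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont,
               control = lmeControl(msMaxEval = 500, msMaxIter = 200))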
2010 Dec 07
1
Using nlminb for maximum likelihood estimation
I'm trying to estimate the parameters of a GARCH(1,1) process. Here's my code: loglikelihood <- function(theta) { h=((r[1]-theta[1])^2) p=0 for (t in 2:length(r)) { h=c(h,theta[2]+theta[3]*((r[t-1]-theta[1])^2)+theta[4]*h[t-1]) p=c(p,dnorm(r[t],theta[1],sqrt(h[t]),log=TRUE)) } -sum(p) } Then I use nlminb to minimize the function loglikelihood: nlminb(
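A runnable sketch completing the quoted setup, assuming r is the return series and theta = (mu, omega, alpha, beta); the simulated returns, start values and bounds are placeholders:

    set.seed(1)
    r <- rnorm(500, 0, 0.01)                 # placeholder return series
    loglikelihood <- function(theta) {
      h <- (r[1] - theta[1])^2
      p <- 0
      for (t in 2:length(r)) {
        h[t] <- theta[2] + theta[3] * (r[t - 1] - theta[1])^2 + theta[4] * h[t - 1]
        p[t] <- dnorm(r[t], theta[1], sqrt(h[t]), log = TRUE)
      }
      -sum(p)
    }
    nlminb(c(0, 1e-5, 0.1, 0.8), loglikelihood,
           lower = c(-Inf, 1e-8, 0, 0), upper = c(Inf, Inf, 1, 1))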
2007 Dec 21
1
NaN as a parameter in NLMINB optimization
I am trying to optimize a likelihood function using NLMINB. After running without a problem for quite a few iterations (enough that my intermediate output extends further back than I can scroll), it tries a parameter vector of NaN values. This has happened with multiple Monte Carlo datasets and a few different (but very similar) likelihood functions. (They are complicated, but I can send them
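A hedged sketch of one common defence: wrap the objective so the optimizer receives a large finite penalty instead of NaN/Inf (loglik and start below are toy stand-ins for the poster's objects):

    loglik <- function(par) sum((par - 1)^2)         # toy stand-in
    start  <- c(0, 0)
    safe.obj <- function(par) {
      if (any(!is.finite(par))) return(1e10)         # penalise NaN/Inf parameter values
      val <- loglik(par)
      if (is.finite(val)) val else 1e10              # penalise NaN/Inf objective values
    }
    nlminb(start, safe.obj)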
2010 Jul 10
1
Not nice behaviour of nlminb (Windows 32 bit, version 2.11.1)
I won't add to the quite long discussion about the vagaries of nlminb, but will note that over a long period of software work in this optimization area I've found a number of programs and packages that do strange things when the objective is a function of a single parameter. Some methods quite explicitly throw an error when n<2. It seems nlminb does not, but that does not mean that
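For a one-parameter objective, a sketch of the safer route:

    f <- function(x) (x - 2)^2 + 1
    optimize(f, interval = c(-10, 10))     # purpose-built for n = 1
    nlminb(start = 0, objective = f)       # also runs, but see the caveats above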
2010 Mar 24
1
vcov.nlminb
Hello all, I am trying to get the variance-covariance (VCOV) matrix of the parameter estimates produced by the nlminb minimizing function, using vcov.nlminb, but it seems to have been expunged from the MASS package. The hessian from nlminb is also producing NaNs, although the estimates seem to be right, so I can't get the VCOV that way either. I also tried using the vcov function after minimizing
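A sketch of one common workaround, assuming the function that was minimised is a negative log-likelihood (negloglik and start are illustrative): estimate the Hessian numerically at the optimum and invert it.

    library(numDeriv)                                     # for hessian(); install if needed
    negloglik <- function(p) 0.5 * sum((p - c(1, 2))^2)   # toy stand-in
    start <- c(0, 0)
    fit <- nlminb(start, negloglik)
    H <- numDeriv::hessian(negloglik, fit$par)
    vcov.est <- solve(H)                                  # observed-information-based VCOV
    sqrt(diag(vcov.est))                                  # standard errors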
2012 Nov 22
1
Optimizing nested function with nlminb()
I am trying to optimize a custom likelihood with nlminb(). Arguments h and f are meant to be fixed. example.R: compute.hyper.log.likelyhood <- function(a, h, f) { a1 <- a[1] a2 <- a[2] l <- 0.0 for (j in 1:length(f)) { l <- l + lbeta(a1 + f[j], a2 + h - f[j]) - lbeta(a1, a2) } return(l) } compute.optimal.hyper.params <- function(start, limits, h_, f_) { result
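A minimal sketch of the usual mechanism: fixed arguments can be passed through nlminb's '...' straight to the objective. The objective and data below are illustrative, not the poster's, and the fixed arguments are deliberately renamed, since an argument literally named 'h' would partially match nlminb's 'hessian' formal.

    obj <- function(a, size, counts) {
      -sum(lbeta(a[1] + counts, a[2] + size - counts) - lbeta(a[1], a[2]))
    }
    size   <- 10
    counts <- c(2, 5, 7, 3)
    nlminb(start = c(1, 1), objective = obj,
           lower = c(1e-6, 1e-6), size = size, counts = counts)   # size and counts go through '...'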
2008 Jun 11
1
difference between nlm and nlminb
Hi, I was wondering if someone could give a brief, big picture overview of the difference between the two optimization functions nlm and nlminb. I'm not familiar with PORT routines, so I was hoping someone could give an explanation. Thanks, Angelo
2012 Nov 04
1
Struggling with nlminb...
Hello everyone, I am trying to estimate parameters by means of QMLE, using the nlminb optimizer, for a tree-structured GARCH model. I face two problems. First, the optimizer returns error[8] "false convergence" if I estimate the functions below. I had first estimated the model with nlm without any problems, but then I needed to add some constraints, so I chose nlminb.
2012 Oct 10
1
"optim" and "nlminb"
#optim package estimate<-optim(init.par,Linn,hessian=TRUE, method=c("L-BFGS-B"),control = list(trace=1,abstol=0.001),lower=c(0,0,0,0,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf,-Inf),upper=c(1,1,1,1,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf,Inf)) #nlminb package estimate<-nlminb(init.par,Linn,gr=NULL,hessian=TRUE,control =
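A runnable sketch of matching bounded calls, with a toy objective standing in for Linn; note that nlminb's gradient argument is named 'gradient', not 'gr', and its 'hessian' argument expects a function rather than TRUE/FALSE:

    Linn <- function(p) sum((p - c(0.3, 0.7, 2))^2)        # toy stand-in
    init.par <- c(0.5, 0.5, 0)
    est.optim  <- optim(init.par, Linn, method = "L-BFGS-B", hessian = TRUE,
                        lower = c(0, 0, -Inf), upper = c(1, 1, Inf))
    est.nlminb <- nlminb(init.par, Linn,
                         lower = c(0, 0, -Inf), upper = c(1, 1, Inf))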
2008 Apr 08
1
question about nlminb
Dear All, I wanted to post some more details about the query I sent to s-news last week. I have a vector with a constraint: its elements must add up to 1, but they are not necessarily positive, i.e. x[n] <- 1 - (x[1] + ... + x[n-1]). I perform the optimisation on the vector x such that x <- c(x, 1 - sum(x)). In other words, fn <- function(x){ x <- c(x, 1 -
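A runnable sketch of the reparameterisation being described: optimise over the first n-1 components and reconstruct the last one inside fn (the objective is a toy stand-in):

    fn <- function(xfree) {
      x <- c(xfree, 1 - sum(xfree))          # full vector now sums to 1
      sum((x - c(0.2, 0.3, 0.5))^2)          # illustrative objective
    }
    nlminb(start = c(0.4, 0.4), objective = fn)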
2008 Jul 25
0
nlminb--lower bounds for parameters are dependent on each other
Hello, I'm trying to solve two sets of equations (each set has four equations, and all of them share common parameters) with the nlminb procedure. I minimize one set and use its parameters as initial values for the other set, repeating this until the parameters become very close to each other. I have several parameters (say, param1, param2) whose constraints are given as inequalities and depend
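A hedged sketch of one standard workaround when the bound on one parameter depends on another (for example param2 >= param1): reparameterise so that nlminb only sees fixed box constraints. The objective is a toy stand-in.

    obj <- function(th) {
      p1 <- th[1]
      p2 <- th[1] + th[2]            # th[2] >= 0 enforces p2 >= p1
      (p1 - 1)^2 + (p2 - 3)^2
    }
    nlminb(start = c(0, 1), objective = obj, lower = c(-Inf, 0))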
2009 May 30
1
A problem about "nlminb"
Hello everyone! When I use "nlminb" to minimize a function with a variable of almost 200,000 dimensions, I get the following error. > nlminb(start=start0, msLE2, control = list(x.tol = .001)) Error in vector("double", length) : vector size specified is too large I had set the following options: options(expressions=60000) options(object.size=10^15) I have no idea about what
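The PORT code behind nlminb allocates a dense workspace that grows roughly with the square of the number of parameters, which is what overflows the vector limit here. A hedged sketch of the usual alternative for very large problems is a limited-memory method with an analytic gradient (the objective is a toy stand-in):

    n <- 200000
    start0 <- rep(0.1, n)
    msLE2 <- function(x) sum((x - 1)^2)        # toy stand-in for the real objective
    grad  <- function(x) 2 * (x - 1)           # analytic gradient keeps this feasible
    fit <- optim(start0, msLE2, grad, method = "L-BFGS-B",
                 control = list(maxit = 200))
    fit$convergence                            # 0 on success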
2010 Dec 15
1
Help about nlminb function
Hi Everyone, Can anyone help me resolve a problem that I'm having with nlminb? The problem is that it stops after just one iteration and returns the same values as the "start" ones. Thank you very much for your help. Sincerely. -- Kamel Gaanoun (+33) (0)6.76.04.65.77
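A hedged diagnostic sketch for this symptom: before blaming nlminb, check that the objective is finite at the start values and that it actually changes when they are perturbed (the objective and start here are illustrative):

    obj <- function(p) sum((p - c(1, 2))^2)     # toy stand-in
    start <- c(0, 0)
    obj(start)                                  # should be finite
    obj(start + 1e-4)                           # should differ from obj(start)
    fit <- nlminb(start, obj)
    fit$convergence                             # 0, with fit$message saying why it stopped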