similar to: optim & .C / Crashing on run

Displaying 20 results from an estimated 7000 matches similar to: "optim & .C / Crashing on run"

2023 Aug 13
4
Noisy objective functions
While working on 'random walk' applications, I got interested in optimizing noisy objective functions. As an (artificial) example, the following is the Rosenbrock function, where Gaussian noise of standard deviation `sd = 0.01` is added to the function value:

    fn <- function(x) (1 + rnorm(1, sd = 0.01)) * adagio::fnRosenbrock(x)

To smooth out the noise, define another
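(The excerpt truncates here.) The natural continuation is an averaged objective: evaluate the noisy function several times per point and hand the mean to the optimizer. A minimal sketch of that idea, assuming the `adagio` package from the example and an illustrative replication count `n`:

```r
library(adagio)

## Noisy Rosenbrock, as in the example above
fn <- function(x) (1 + rnorm(1, sd = 0.01)) * fnRosenbrock(x)

## Smoothed objective: average n independent evaluations at the same point
fn_smooth <- function(x, n = 25) mean(replicate(n, fn(x)))

optim(c(-1.2, 1), fn_smooth, method = "Nelder-Mead")
```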
2011 Nov 10
3
optim seems to be finding a local minimum
Hello! I am trying to create an R optimization routine for a task that's currently being done using Excel (lots of tables, formulas, and Solver). However, optim seems to be finding a local minimum. Example data, functions, and comparison with the solution found in Excel are below. I am not experienced in optimizations so thanks a lot for your advice! Dimitri ### 2 Inputs:
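A standard remedy for this situation is a multi-start: run optim() from many random starting points and keep the best result. A minimal sketch, with a hypothetical objective `obj` and sampling box `lo`/`hi` (none of these names come from the original post):

```r
multi_start <- function(obj, lo, hi, n_starts = 50) {
  best <- NULL
  for (i in seq_len(n_starts)) {
    start <- runif(length(lo), min = lo, max = hi)  # random start inside the box
    res <- optim(start, obj, method = "Nelder-Mead")
    if (is.null(best) || res$value < best$value) best <- res
  }
  best  # the best local minimum found across all starts
}
```

Call set.seed() beforehand if the runs need to be reproducible.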
2012 May 17
3
nls and if statements
Hi All, I have a situation where I want an 'if' variable to be parameterized. It's entirely possible that the way I'm trying to do this is wrong, especially given the error message I get that indicates I can't do this using an 'if' statement. Essentially, I have data where I think a relationship enters when a variable (here Pwd) is below some value (z). I don't
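For context, a bare `if` fails inside an nls() formula because it needs a single logical while the model frame supplies whole columns; the vectorized `ifelse()` usually unblocks this. A sketch with a parameterized breakpoint `z` (the data frame `dat` with columns `y` and `Pwd`, and the linear form, are assumptions, not the poster's model):

```r
fit <- nls(y ~ ifelse(Pwd < z, a + b * Pwd, a + b * z),  # effect flattens above z
           data  = dat,
           start = list(a = 0, b = 1, z = median(dat$Pwd)))
```

The kink at z makes nls's numeric gradients touchy; dedicated breakpoint tools such as the segmented package are often more robust.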
2009 Oct 10
2
Nelder-Mead with output of simplex vertices
Greetings! I want to follow the evolution of a Nelder-Mead function minimisation (a function of 2 variables). Hence each simplex will have 3 vertices. Therefore I would like to have a function which can output the coordinates of the 3 vertices after each new simplex is generated. However, there seems to be no way (which I can detect) of extracting this information from optim() (the
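optim() does not expose the simplex itself, but wrapping the objective records every point it evaluates, and the vertices are among those points. A minimal sketch with a toy 2-variable function:

```r
trace_env <- new.env()
trace_env$pts <- list()

fn_traced <- function(x) {
  trace_env$pts[[length(trace_env$pts) + 1]] <- x  # log every evaluation point
  (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2           # toy objective (Rosenbrock)
}

optim(c(-1.2, 1), fn_traced, method = "Nelder-Mead")
pts <- do.call(rbind, trace_env$pts)               # one row per evaluation
```

Full vertex bookkeeping needs a standalone Nelder-Mead implementation, as discussed in the 2010 threads below.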
2005 Nov 15
1
An optim() mystery.
I have a Master's student working on a project which involves estimating parameters of a certain model via maximum likelihood, with the maximization being done via optim(). A phenomenon has occurred which I am at a loss to explain. If we use certain pairs of starting values for optim(), it simply returns those values as the ``optimal'' values, although they are definitely not
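Not an explanation of that specific case, but one reproducible cause: if the objective returns the same value over the whole initial simplex, Nelder-Mead stops immediately and reports the starting values. A quick diagnostic is to probe the neighbourhood of the start, sketched here with a hypothetical negative log-likelihood `nll`:

```r
p0 <- c(0.5, 1.0)  # the suspicious starting values (placeholder)
f0 <- nll(p0)

## Perturb each parameter by ~10%, the scale of Nelder-Mead's first steps
for (i in seq_along(p0)) {
  p <- p0
  p[i] <- p[i] * 1.1
  cat(sprintf("param %d: delta f = %g\n", i, nll(p) - f0))
}
## Deltas of (near) zero suggest rescaling via control = list(parscale = ...)
```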
2010 Sep 04
3
How can I fix convergence=1 in optim
Hi R users, I am using the optim function to maximize a log likelihood function. My code is as follows: p<-optim(c(-0.2392925,0.4653128,-0.8332286, 0.0657, -0.0031, -0.00245, 3.366, 0.5885, -0.00008, 0.0786,-0.00292,-0.00081, 3.266, -0.3632, -0.000049, 0.1856, 0.00394, -0.00193, -0.889, 0.5379, -0.000063, 0.213, 0.00338, -0.00026, -0.8912, -0.3023, -0.000056), f,
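Convergence code 1 simply means the iteration limit was hit; with 27 parameters, Nelder-Mead's default maxit = 500 is far too small. A sketch of the usual fix (here `start` stands for the 27-value vector above; the fnscale line assumes `f` is the log-likelihood itself rather than its negative):

```r
p <- optim(start, f,
           control = list(maxit   = 50000,  # default is only 500 for Nelder-Mead
                          reltol  = 1e-10,  # tighter stopping tolerance
                          fnscale = -1))    # maximize f instead of minimizing
p$convergence  # 0 once it finishes within the limit
```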
2009 Nov 30
3
Question about output from optim
Dear R-users, I am trying to port to R something that I wrote in Matlab to perform model parameter optimization using the Nelder-Mead simplex method (fminsearch). I read the help on ?optim (which seems to be the way to go) as well as a bunch of posts on the topic, but I would like to make sure about something before I spend too much time trying to reproduce something that is not possible. The
2010 Feb 08
2
evolution of Nelder-Mead process
Dear list,   I am looking for an R-only implementation of a Nelder-Mead process that can find local maxima of a spatially distributed variable, e.g. height, on a spatial grid, and outputs the coordinates of the new point during each evaluation. I have found two previous threads about this topic, and was wondering if something similar has been implemented since those messages were posted.   Thank
2012 May 01
2
Define lower-upper bound for parameters in Optim using Nelder-Mead method
Dear UseRs, Is there a way to define the lower-upper bounds for parameters fitted by optim using the Nelder-Mead method? Thanks, Arnaud
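optim() only honours lower/upper with method = "L-BFGS-B" (it warns and switches method if you pass bounds to Nelder-Mead), so the derivative-free options are a parameter transformation or a bounded Nelder-Mead such as nmkb() from the dfoptim package. A sketch of both on a toy objective:

```r
fn <- function(x) sum((x - 0.7)^2)  # toy objective, minimum at (0.7, 0.7)

## Option 1: switch to the bounded quasi-Newton method
optim(c(0.5, 0.5), fn, method = "L-BFGS-B", lower = 0, upper = 1)

## Option 2: bounded Nelder-Mead from the dfoptim package
dfoptim::nmkb(c(0.5, 0.5), fn, lower = 0, upper = 1)
```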
2007 Jan 03
1
optim
Hi! I'm trying to figure out how to use optim... I get some really strange results, so I guess I got something wrong. I defined the following function which should be minimized:

    errorFunction <- function(localShifts, globalShift, fileName, experimentalPI, lambda) {
      lambda <- 1/sqrt(147)
      # error <- abs(errHuber(localShifts, globalShift,
      #
2017 Aug 06
1
Help with optim function in R, please?
Hi all, many thanks in advance for helping me. I tried to fit an Expectation Maximization algorithm for mixture data. I must use a numerical method to maximize my function. I built my code but I do not know how to make the optim function run over different values of the parameters. That is, for the E-step I need to get the value of the mixture weights based on the current (initial) values of
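A common pattern for the M-step is to hold the E-step weights fixed and pass them to optim() through its `...` argument, so each M-step optimizes the component parameters at the current weights. A minimal two-component Gaussian sketch (all names and the toy data are illustrative, not the poster's model):

```r
## Negative expected complete-data log-likelihood, weights w held fixed
m_step_obj <- function(theta, x, w) {
  -sum(w       * dnorm(x, mean = theta[1], sd = 1, log = TRUE) +
       (1 - w) * dnorm(x, mean = theta[2], sd = 1, log = TRUE))
}

x <- c(rnorm(50, 0), rnorm(50, 4))  # toy mixture data
w <- rep(0.5, length(x))            # responsibilities from the current E-step
fit <- optim(c(-1, 1), m_step_obj, x = x, w = w)  # x, w forwarded via ...
fit$par                             # updated component means
```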
2012 Oct 23
1
help using optim function
Hi, am very new to R and I've written an optim function, but can't get it to work:

    least.squares.fitter <- function(start.params, gr, low.constraints, high.constraints,
                                     model.one.stepper, data, scale, ploton = F) {
      result <- optim(par = start.params, method = c('Nelder-Mead'),
                      fn = least.squares.fit,
                      lower = low.constraints, upper = high.constraints,
                      data = data, scale = scale, ploton = ploton)
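(The excerpt truncates mid-function.) Two things to check in a call like this: optim() ignores lower/upper under Nelder-Mead (it warns and switches method), and fn's first argument must be the parameter vector, with everything else forwarded via `...`. A corrected sketch under those assumptions:

```r
least.squares.fitter <- function(start.params, low.constraints, high.constraints,
                                 data, scale, ploton = FALSE) {
  optim(par    = start.params,
        fn     = least.squares.fit,  # must take the parameter vector first
        method = "L-BFGS-B",         # the only optim method that honours bounds
        lower  = low.constraints,
        upper  = high.constraints,
        data   = data, scale = scale, ploton = ploton)  # forwarded to fn
}
```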
2001 Oct 03
1
tiny typo in optim/N-M documentation (PR#1109)
In the optim() documentation, the control parameter "maxit" says "There is no other stopping criteria." That should be "are", or "criterion" ... While I'm at it -- poking at the code a little bit, it looks as if the initial simplex is set from the initial point by displacing each parameter value by max(0.1,0.1*pmax(fabs(Bvec))), where Bvec is the
2000 Nov 30
3
Optimisation methods
I don't want to re-invent the wheel, and I'm trying to code up something that does a Nelder-Mead simplex method to minimise a non-linear objective function. (I'm porting something I originally wrote in matlab, using the optimisation toolbox function fmins). Is there already something available to do this included in R? Do people have suggestions on the best way to do this? Thanks,
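The short answer was already yes in 2000: optim() defaults to Nelder-Mead, so a fmins call maps onto it almost directly. A sketch of the correspondence (with a hypothetical objective `objfun` and start `x0`):

```r
## Matlab:  x = fmins('objfun', x0)
## R:
res <- optim(x0, objfun)  # Nelder-Mead is optim's default method
res$par    # the minimizer
res$value  # objective value at the minimum
```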
2020 Oct 28
2
R optim() function
Hi R-Help, I am using R to do functional outlier detection (using PCA to reduce to 2 dimensions - the functional boxplot methodology used in the Rainbow package), and using Hscv.diag function to calculate the bandwidth matrix where this line of code is run: result <- optim(diag(Hstart), scv.mat.temp, method = "Nelder-Mead", control = list(trace = as.numeric(verbose))) Within the
2010 Mar 05
2
Improved Nelder-Mead algorithm - a potential replacement for optim's Nelder-Mead
Hi, I have written an R translation of C.T. Kelley's Matlab version of the Nelder-Mead algorithm. This algorithm is discussed in detail in his book "Iterative methods for optimization" (SIAM 1999, Chapter 8). I have tested this relatively extensively on a number of smooth and non-smooth problems. It performs well, in general, and it almost always outperforms optim's
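This appears to be the implementation that later shipped as nmk() in the dfoptim package; a usage sketch, assuming the package is installed:

```r
library(dfoptim)

rosenbrock <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
nmk(c(-1.2, 1), rosenbrock)  # Kelley-style Nelder-Mead, no derivatives needed
```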
2006 Jun 12
1
r's optim vs. matlab's fminsearch
Hi, I'm having a problem converting a Matlab program into R. The R code works almost all the time, but about 4% of the time R's optim function gets stuck on a local minimum whereas matlab's fminsearch function does not (or at least fminsearch finds a better minimum than optim). My understanding is that both functions default to Nelder-Mead optimization, but what's different about
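One concrete difference worth checking: fminsearch stops on absolute parameter/function tolerances (TolX = TolFun = 1e-4 by default) with MaxIter = 200 per variable, while optim's Nelder-Mead stops on a relative function tolerance (reltol, about 1.5e-8) after at most 500 iterations and is never restarted. Tightening the controls and restarting from the returned point often closes such gaps; a sketch with hypothetical `fn` and `x0`:

```r
res1 <- optim(x0, fn, control = list(maxit = 5000, reltol = 1e-12))
res2 <- optim(res1$par, fn, control = list(maxit = 5000, reltol = 1e-12))
res2$value <= res1$value  # TRUE: a restart never hurts and often escapes a stall
```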
2017 Dec 31
1
Order of methods for optimx
Dear R-er, For a non-linear optimisation, I used optim() with the BFGS method, but it regularly stopped before reaching a true minimum. It was not a problem with the iteration limit, just a local minimum. I was sometimes able to reach a better minimum using several rounds of optim(). Then I moved to optimx() to do the different optim rounds automatically using "Nelder-Mead" and
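optimx() automates exactly this: its follow.on control feeds each method's answer to the next one in the list. A sketch (hypothetical objective `fn` and start `p0`):

```r
library(optimx)

res <- optimx(p0, fn, method = c("Nelder-Mead", "BFGS"),
              control = list(follow.on = TRUE,  # chain: NM's answer starts BFGS
                             maxit     = 2000))
summary(res, order = value)  # all methods' results, best first
```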
2006 Aug 09
2
optim error
Dear all, There have been one or two questions posted to the list regarding the optim error "non-finite finite-difference value [4]." The error apparently means that the 4th element of the gradient is non-finite. My question is what part(s) of my program should I fiddle with in an attempt to fix it? Starting values? Something in the log-likelihood itself? Perhaps the data
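That reading of the error is right: the finite-difference gradient for parameter 4 evaluated the objective at a point where it returned Inf/NaN, typically a log of a non-positive quantity or an out-of-range parameter such as a negative variance. Two common repairs, sketched with a hypothetical negative log-likelihood `nll`:

```r
## 1. Guard the objective so the optimizer never sees a non-finite value
safe_nll <- function(theta, ...) {
  v <- nll(theta, ...)
  if (is.finite(v)) v else 1e10  # large finite penalty instead of Inf/NaN
}

## 2. Reparameterize: optimize log(sigma) and use sigma <- exp(theta[4])
##    inside nll, so parameter 4 can never go negative
```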
2012 Aug 18
1
Parameter scaling problems with optim and Nelder-Mead method (bug?)
Dear all, I'm having some problems getting optim with method="Nelder-Mead" to work properly. It seems like there is no way of controlling the step size, and the step size seems to depend on the *difference* between the initial values, which makes no sense. Example:

    f = function(xy, mu1, mu2) {
      print(xy)
      dnorm(xy[1] - mu1) * dnorm(xy[2] - mu2)
    }
    f1 = function(xy) -f(xy, 0,
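The step is indeed tied to the starting values: per the simplex-initialization code quoted in the 2001 typo report above, Nelder-Mead displaces by roughly 0.1 * max(|par|) (at least 0.1) in parscale'd coordinates, so parscale is the supported knob for the effective step size. A sketch continuing the example (assuming mu1 = mu2 = 0 in the truncated f1):

```r
f1 <- function(xy) -dnorm(xy[1]) * dnorm(xy[2])  # assumed completion of the example

## parscale rescales the search: starting near 0, the initial displacement
## is about 0.1 * parscale in each coordinate
optim(c(0, 0), f1, method = "Nelder-Mead",
      control = list(parscale = c(10, 10), trace = 1))
```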