similar to: Improved Nelder-Mead algorithm - a potential replacement for optim's Nelder-Mead

Displaying 20 results from an estimated 2000 matches similar to: "Improved Nelder-Mead algorithm - a potential replacement for optim's Nelder-Mead"

2010 Feb 08
2
evolution of Nelder-Mead process
Dear list, I am looking for an R-only implementation of a Nelder-Mead process that can find local maxima of a spatially distributed variable, e.g. height, on a spatial grid, and output the coordinates of the new point during each evaluation. I have found two previous threads about this topic, and was wondering whether something similar has been implemented since those messages were posted. Thank
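A minimal sketch of one way to get the evaluated coordinates out of base optim() (the height.fun surface below is a hypothetical stand-in, not from the thread): wrap the objective so it records every point it is asked to evaluate.

    height.fun <- function(xy) -sum((xy - c(2, 3))^2)  # hypothetical stand-in surface
    visited <- list()
    logged.fun <- function(xy) {
      visited[[length(visited) + 1]] <<- xy  # record every evaluated point
      -height.fun(xy)                        # optim minimises, so negate the height
    }
    res  <- optim(c(0, 0), logged.fun, method = "Nelder-Mead")
    path <- do.call(rbind, visited)          # coordinates, one row per evaluation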
2012 May 01
2
Define lower-upper bound for parameters in Optim using Nelder-Mead method
Dear useRs, Is there a way to define lower and upper bounds for parameters fitted by optim using the Nelder-Mead method? Thanks, Arnaud
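Base optim() ignores lower/upper for Nelder-Mead (it warns and switches method), so a common workaround is a logistic reparameterisation; a sketch with a stand-in objective (dfoptim::nmkb is a packaged alternative for bound-constrained Nelder-Mead):

    fn <- function(p) (p[1] - 2)^2 + (p[2] + 1)^2       # stand-in objective
    lo <- c(0, -5); hi <- c(5, 0)
    to.box   <- function(q) lo + (hi - lo) * plogis(q)  # unbounded -> box
    from.box <- function(p) qlogis((p - lo) / (hi - lo))
    res <- optim(from.box(c(1, -1)), function(q) fn(to.box(q)),
                 method = "Nelder-Mead")
    to.box(res$par)                                     # solution on the original scale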
2009 Oct 10
2
Nelder-Mead with output of simplex vertices
Greetings! I want to follow the evolution of a Nelder-Mead function minimisation (a function of 2 variables). Hence each simplex will have 3 vertices, and I would like a function which can output the coordinates of the 3 vertices after each new simplex is generated. However, there seems to be no way that I can detect of extracting this information from optim() (the
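optim() keeps the simplex internal, so the vertices themselves cannot be extracted; the closest approximation is to log evaluations and flag improving points, as in this sketch with a stand-in objective:

    best <- Inf
    traced.fn <- function(p) {
      val <- sum((p - c(1, 2))^2)       # stand-in objective of 2 variables
      if (val < best) {                 # flag each improving point
        best <<- val
        cat("improved:", p, "value:", val, "\n")
      }
      val
    }
    invisible(optim(c(0, 0), traced.fn, method = "Nelder-Mead"))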
2005 Mar 18
1
Constrained Nelder-Mead
All, In looking at `optim', it doesn't appear possible to impose nonlinear constraints on Nelder-Mead. I am sufficiently motivated to try to code something in C from scratch and call it from R... Does anyone have good references to barrier and/or penalization methods for Nelder-Mead? I would ideally like some papers with pseudocode for method(s) that are in
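For the penalization route, a sketch of a simple quadratic penalty with a stand-in objective and constraint g(p) <= 0 (not from the thread):

    fn <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2  # stand-in objective
    g  <- function(p) p[1]^2 + p[2]^2 - 2          # stand-in constraint: g(p) <= 0
    penalised <- function(p, rho = 1e3)
      fn(p) + rho * max(0, g(p))^2                 # penalise constraint violation
    res <- optim(c(0, 0), penalised, method = "Nelder-Mead")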
2012 Aug 18
1
Parameter scaling problems with optim and Nelder-Mead method (bug?)
Dear all, I'm having some problems getting optim with method="Nelder-Mead" to work properly. It seems like there is no way of controlling the step size, and the step size seems to depend on the *difference* between the initial values, which makes no sense. Example:

    f = function(xy, mu1, mu2) {
      print(xy)
      dnorm(xy[1] - mu1) * dnorm(xy[2] - mu2)
    }
    f1 = function(xy) -f(xy, 0,
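The initial simplex in optim's Nelder-Mead is built from the (scaled) starting values, so control = list(parscale = ...) is the available knob; a sketch with a stand-in objective:

    fn <- function(xy) -dnorm(xy[1]) * dnorm((xy[2] - 1000) / 10)  # very different scales
    res <- optim(c(1, 900), fn, method = "Nelder-Mead",
                 control = list(parscale = c(1, 100), maxit = 2000))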
2017 Dec 31
1
Order of methods for optimx
Dear R-er, For a non-linear optimisation, I used optim() with the BFGS method, but it regularly stopped before reaching a true minimum. It was not a problem with the iteration limit, just a local minimum. I was sometimes able to reach a better minimum using several rounds of optim(). Then I moved to optimx() to run the different optim rounds automatically using "Nelder-Mead" and
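A sketch of running several methods in one optimx() call (assumes the optimx package; the objective is a stand-in):

    library(optimx)
    fn  <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2  # Rosenbrock test function
    res <- optimx(c(-1.2, 1), fn, method = c("Nelder-Mead", "BFGS"))
    print(res)  # one row per method; compare the 'value' column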
2011 Nov 10
3
optim seems to be finding a local minimum
Hello! I am trying to create an R optimization routine for a task that's currently being done using Excel (lots of tables, formulas, and Solver). However, optim seems to be finding a local minimum. Example data, functions, and a comparison with the solution found in Excel are below. I am not experienced with optimization, so thanks a lot for your advice! Dimitri ### 2 Inputs:
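A sketch of a simple multi-start loop, a common guard against local minima (the objective is a stand-in, not Dimitri's):

    fn <- function(p) sum(sin(p)) + sum(p^2) / 10            # stand-in multimodal objective
    starts <- replicate(20, runif(2, -10, 10), simplify = FALSE)
    fits <- lapply(starts, optim, fn = fn, method = "Nelder-Mead")
    best <- fits[[which.min(sapply(fits, `[[`, "value"))]]   # keep the best of 20 runs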
2007 Jun 22
2
fitCopula
I am using R 2.5.0 on Windows XP and trying to fit a copula. I see the following code works for some users, but my code crashes on the chol. Any suggestions?

> mycop <- tCopula(param=0.5, dim=8, dispstr="ex", df=5)
> x <- rcopula(mycop, 1000)
> myfit <- fitCopula(x, mycop, c(0.6, 10), optim.control=list(trace=1), method="Nelder-Mead")
2005 Nov 15
1
An optim() mystery.
I have a Master's student working on a project which involves estimating parameters of a certain model via maximum likelihood, with the maximization being done via optim(). A phenomenon has occurred which I am at a loss to explain. If we use certain pairs of starting values for optim(), it simply returns those values as the "optimal" values, although they are definitely not
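One common cause is an objective that is flat at the scale of the initial simplex steps, so no trial point ever improves on the start; a hedged diagnostic sketch with a stand-in objective:

    fn <- function(p) round(sum(p^2), 2)  # stand-in: constant near the start
    p0 <- c(0.01, 0.02)
    sapply(1:5, function(i) fn(p0 + runif(2, -1e-3, 1e-3)))  # identical values => flat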
2010 Sep 04
3
How can I fix convergence=1 in optim
Hi R users, I am using the optim function to maximize a log-likelihood function. My code is as follows:

    p <- optim(c(-0.2392925, 0.4653128, -0.8332286, 0.0657, -0.0031, -0.00245,
                 3.366, 0.5885, -0.00008, 0.0786, -0.00292, -0.00081,
                 3.266, -0.3632, -0.000049, 0.1856, 0.00394, -0.00193,
                 -0.889, 0.5379, -0.000063, 0.213, 0.00338, -0.00026,
                 -0.8912, -0.3023, -0.000056), f,
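convergence = 1 means optim hit its iteration limit (500 by default for Nelder-Mead); raising maxit is the usual first step. A sketch with a stand-in objective:

    fn  <- function(p) sum((p - seq_along(p))^2)  # stand-in for the negated log-likelihood
    res <- optim(rep(0, 27), fn, method = "Nelder-Mead",
                 control = list(maxit = 20000))
    res$convergence                               # 0 once it converges within the limit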
2007 Jan 03
1
optim
Hi! I'm trying to figure out how to use optim... I get some really strange results, so I guess I got something wrong. I defined the following function, which should be minimized:

    errorFunction <- function(localShifts, globalShift, fileName, experimentalPI, lambda) {
      lambda <- 1/sqrt(147)
      # error <- abs(errHuber(localShifts, globalShift,
      #
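optim() varies only the first argument of fn; fixed quantities are passed through '...'. A sketch reusing the poster's argument names but with a made-up objective body:

    errFun <- function(localShifts, experimentalPI, lambda)  # first argument is optimised
      sum((experimentalPI - localShifts)^2) + lambda * sum(abs(localShifts))
    res <- optim(par = c(0, 0, 0), fn = errFun, method = "Nelder-Mead",
                 experimentalPI = c(1, 2, 3), lambda = 1/sqrt(147))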
2008 Mar 11
1
messages from mle function
Dear useRs, I am using the mle function, but it gives me the following errors, which I don't understand. Perhaps someone can help me. Thank you for your attention. Bernardo.

> erizo <- read.csv("Datos_Stokes_1.csv", header = TRUE)
> head(erizo)
  EDAD TALLA
1    0   7.7
2    1  14.5
3    1  16.9
4    1  13.2
5    1  24.4
6    1  22.5
> TAN <-
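Without the error text one can only sketch the shape of a working stats4::mle call (a stand-in Gaussian model, not the poster's growth curve):

    library(stats4)
    x   <- rnorm(50, mean = 3, sd = 2)  # stand-in data
    nll <- function(mu, log.sigma)      # sigma on the log scale stays positive
      -sum(dnorm(x, mu, exp(log.sigma), log = TRUE))
    fit <- mle(nll, start = list(mu = 0, log.sigma = 0))
    summary(fit)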
2023 Aug 13
4
Noisy objective functions
While working on 'random walk' applications, I got interested in optimizing noisy objective functions. As an (artificial) example, the following is the Rosenbrock function with Gaussian noise of standard deviation `sd = 0.01` applied to the function value.

    fn <- function(x) (1 + rnorm(1, sd = 0.01)) * adagio::fnRosenbrock(x)

To smooth out the noise, define another
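One common smoother is to average replicated evaluations at each point; a sketch (assumes the adagio package, as in the post):

    fn <- function(x) (1 + rnorm(1, sd = 0.01)) * adagio::fnRosenbrock(x)
    fn.smooth <- function(x, n = 25) mean(replicate(n, fn(x)))  # average out the noise
    res <- optim(c(-1.2, 1), fn.smooth, method = "Nelder-Mead",
                 control = list(maxit = 5000))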
2012 Oct 23
1
help using optim function
Hi, I am very new to R and I've written an optim call, but can't get it to work:

    least.squares.fitter <- function(start.params, gr, low.constraints, high.constraints,
                                     model.one.stepper, data, scale, ploton = F) {
      result <- optim(par = start.params, method = c('Nelder-Mead'),
                      fn = least.squares.fit,
                      lower = low.constraints, upper = high.constraints,
                      data = data, scale = scale, ploton = ploton)
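Note that supplying lower/upper with method = "Nelder-Mead" makes optim() warn and switch to "L-BFGS-B", so one might as well request that method directly; a sketch with a stand-in objective:

    fn  <- function(p, data, scale) sum((data - scale * p)^2)  # stand-in objective
    res <- optim(par = c(1, 1), fn = fn, method = "L-BFGS-B",
                 lower = c(0, 0), upper = c(10, 10),
                 data = c(2, 4), scale = 2)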
2020 Oct 28
2
R optim() function
Hi R-Help, I am using R to do functional outlier detection (using PCA to reduce to 2 dimensions - the functional boxplot methodology used in the Rainbow package), and I am using the Hscv.diag function to calculate the bandwidth matrix, where this line of code is run:

    result <- optim(diag(Hstart), scv.mat.temp, method = "Nelder-Mead",
                    control = list(trace = as.numeric(verbose)))

Within the
2005 Jul 19
2
Michaelis-Menten equation
Dear R users: I encountered difficulties with the Michaelis-Menten equation. I found that when I use the right model definition, I get the wrong Km value, and I get the right Km value when I use the wrong model definition. The values of Vd and Vmax are correct in both models.

    #-----right model definition--------
    PKindex <- data.frame(time=c(0,1,2,4,6,8,10,12,16,20,24),
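For reference, a standard Michaelis-Menten fit with R's self-starting nls model (using the built-in Puromycin data, not the poster's PK model, which is truncated above):

    fit <- nls(rate ~ SSmicmen(conc, Vmax, Km),
               data = subset(Puromycin, state == "treated"))
    coef(fit)  # Vmax and Km estimates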
2009 Nov 30
3
Question about output from optim
Dear R-users, I am trying to port to R something that I wrote in Matlab to perform model parameter optimization using the Nelder-Mead simplex method (fminsearch). I read the help on ?optim (which seems to be the way to go) as well as a bunch of posts on the topic, but I would like to make sure about something before I spend too much time trying to reproduce something that is not possible. The
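The short answer to the portability question: optim()'s default method is Nelder-Mead, so a Matlab fminsearch call maps roughly onto this sketch (the objective is a stand-in):

    fn  <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2  # stand-in objective
    res <- optim(c(0, 0), fn)   # method = "Nelder-Mead" is the default
    res$par; res$value          # cf. fminsearch's [x, fval]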
2005 Mar 23
4
non-derivative based optimization and standard errors.
Hi All, I have a problem: my objective function is discontinuous in the parameters, and I need to use methods such as Nelder-Mead to get around this. My question is: how do I compute standard errors for a problem that does not have a gradient? Any literature on this is greatly appreciated. Jean
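For a (negative) log-likelihood, standard errors are usually taken from the inverse Hessian, and optim can return a finite-difference Hessian even for gradient-free methods, though a genuinely discontinuous objective may defeat it; a sketch with a stand-in:

    nll <- function(p) 0.5 * sum((p - c(1, 2))^2)  # stand-in negative log-likelihood
    res <- optim(c(0, 0), nll, method = "Nelder-Mead", hessian = TRUE)
    se  <- sqrt(diag(solve(res$hessian)))          # asymptotic standard errors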
2012 Nov 03
2
optim & .C / Crashing on run
Hello, I am attempting to use optim under the default Nelder-Mead algorithm for model fitting, minimizing a Chi^2 statistic whose value is determined by a .C call to an external shared library compiled from C and C++ code. My problem is that the R session crashes immediately upon starting the simplex run, before it takes a single step. This is strange, as the .C call itself works,
2006 Aug 09
2
optim error
Dear all, There have been one or two questions posted to the list regarding the optim error "non-finite finite-difference value [4]." The error apparently means that the 4th element of the gradient is non-finite. My question is what part(s) of my program should I fiddle with in an attempt to fix it? Starting values? Something in the log-likelihood itself? Perhaps the data
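The error arises when a finite-difference gradient step hits a non-finite objective value; guarding the log-likelihood so it returns a large finite value instead of NaN/Inf is a common workaround, sketched here with a stand-in model:

    x <- rnorm(20)                                 # stand-in data
    safe.nll <- function(p) {
      val <- -sum(dnorm(x, mean = p[1], sd = p[2], log = TRUE))
      if (is.finite(val)) val else 1e10            # e.g. sd <= 0 would give NaN
    }
    res <- optim(c(0, 1), safe.nll, method = "BFGS")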