Displaying 20 results from an estimated 4000 matches similar to: "Question about output from optim"
2006 Jun 12
1
R's optim vs. Matlab's fminsearch
Hi,
I'm having a problem converting a Matlab program into R. The R code works
almost all the time, but about 4% of the time R's optim function gets stuck
on a local minimum whereas Matlab's fminsearch function does not (or at
least fminsearch finds a better minimum than optim). My understanding is
that both functions default to Nelder-Mead optimization, but what's
different about
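One difference worth checking, offered as a guess rather than a diagnosis: Matlab's fminsearch stops on parameter and function tolerances of about 1e-4, whereas optim's Nelder-Mead uses a relative function tolerance near 1.5e-8 with a 500-iteration cap, so the two can stop in different places. A hedged sketch, with fn and p0 standing in for the poster's objective and start values, of tightening optim and restarting it from its own answer:
fn <- function(p) sum((p - c(1, -2))^2)             # placeholder objective, not the poster's
p0 <- c(10, 10)                                     # placeholder starting values
fit <- optim(p0, fn, method = "Nelder-Mead",
             control = list(maxit = 5000, reltol = 1e-12))
fit2 <- optim(fit$par, fn, method = "Nelder-Mead",  # restart from the previous answer;
              control = list(maxit = 5000,          # this often escapes a premature stop
                             reltol = 1e-12))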
2010 Mar 05
2
Improved Nelder-Mead algorithm - a potential replacement for optim's Nelder-Mead
Hi,
I have written an R translation of C.T. Kelley's Matlab version of the Nelder-Mead algorithm. This algorithm is discussed in detail in his book "Iterative methods for optimization" (SIAM 1999, Chapter 8). I have tested this relatively extensively on a number of smooth and non-smooth problems. It performs well, in general, and it almost always outperforms optim's
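As an aside for readers, a Kelley-based Nelder-Mead is available as nmk() in the dfoptim package, so a comparison of the kind described can be sketched roughly as follows (the extended Rosenbrock function is only a stand-in test problem):
rosbkext <- function(x) {                        # extended Rosenbrock test function
  n <- length(x)
  sum(100 * (x[-n]^2 - x[-1])^2 + (x[-n] - 1)^2)
}
set.seed(1)
p0 <- rnorm(10, sd = 2)
optim(p0, rosbkext, method = "Nelder-Mead",
      control = list(maxit = 5000))$value        # stock optim
dfoptim::nmk(p0, rosbkext)$value                 # Kelley-based implementation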
2010 Sep 04
3
How can I fix convergence=1 in optim
Hi R users,
I am using the optim function to maximize a log-likelihood function. My
code is as follows:
p <- optim(c(-0.2392925, 0.4653128, -0.8332286, 0.0657, -0.0031, -0.00245,
             3.366, 0.5885, -0.00008, 0.0786, -0.00292, -0.00081,
             3.266, -0.3632, -0.000049, 0.1856, 0.00394, -0.00193,
             -0.889, 0.5379, -0.000063, 0.213, 0.00338, -0.00026,
             -0.8912, -0.3023, -0.000056), f,
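For reference, convergence = 1 in optim's output means the iteration limit was hit (maxit defaults to 500 for Nelder-Mead), and optim minimises unless told otherwise, so a log-likelihood needs fnscale = -1 or a negated objective. A sketch with a toy log-likelihood standing in for the f above:
loglik <- function(theta) -sum((theta - c(1, 2, 3))^2)   # toy stand-in for f
p <- optim(c(-0.24, 0.47, -0.83), loglik, method = "Nelder-Mead",
           control = list(fnscale = -1,     # maximize rather than minimize
                          maxit = 20000,    # raise the iteration limit
                          reltol = 1e-10))
p$convergence                               # 0 now indicates success
With 27 parameters, BFGS (with an analytic gradient if available) or repeated Nelder-Mead restarts from the previous solution usually converge far sooner.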
2005 Nov 15
1
An optim() mystery.
I have a Master's student working on a project which involves
estimating parameters of a certain model via maximum likelihood,
with the maximization being done via optim().
A phenomenon has occurred which I am at a loss to explain.
If we use certain pairs of starting values for optim(), it
simply returns those values as the ``optimal'' values, although
they are definitely not
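One mechanism that reproduces this, offered as a guess rather than a diagnosis of the poster's model: if the likelihood is effectively constant at the scale of the initial simplex, Nelder-Mead sees no improvement and hands back the starting point. A quick check is to look at the evaluation count and probe the surface near the start:
fn <- function(p) round(sum((p - 2)^2))      # hypothetical objective that is locally flat
p0 <- c(0.3, -0.2)
fit <- optim(p0, fn, method = "Nelder-Mead")
fit$par                                      # identical to p0
fit$counts                                   # only a handful of evaluations
sapply(1:10, function(i) fn(p0 + runif(2, -0.05, 0.05)))   # all equal: a flat region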
2010 May 06
0
Release of optimbase, optimsimplex and neldermead packages
Dear R users,
I am pleased to announce the release of three new R packages: optimbase,
optimsimplex, and neldermead.
- optimbase provides a set of commands to manage an abstract optimization
method. The goal is to provide a building block for a large class of
specialized optimization methods. This package manages: the number of
variables, the minimum and maximum bounds, the number of non linear
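A heavily hedged usage sketch, assuming the neldermead package's fminsearch() wrapper follows Matlab's fminsearch(fun, x0) calling convention as its documentation describes:
library(neldermead)                               # assumed installed from CRAN
rosenbrock <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
sol <- fminsearch(rosenbrock, c(-1.2, 1))         # interface assumed Matlab-compatible
sol                                               # printing shows the optimum found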
2009 Oct 10
2
Nelder-Mead with output of simplex vertices
Greetings!
I want to follow the evolution of a Nelder-Mead function
minimisation (a function of 2 variables). Hence each simplex
will have 3 vertices.
Therefore I would like to have a function which can output
the coordinates of the 3 vertices after each new simplex
is generated. However, there seems to be no way (which I can
detect) of extracting this information from optim() (the
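optim() does not expose the simplex, but a workaround (a sketch, not part of optim's interface) is to wrap the objective so every evaluated point is recorded:
f <- function(p) (p[1] - 1)^2 + 10 * (p[2] + 2)^2   # example objective of 2 variables
pts <- list()
f_traced <- function(p) {
  pts[[length(pts) + 1]] <<- p                      # record every point optim tries
  f(p)
}
fit <- optim(c(5, 5), f_traced, method = "Nelder-Mead")
do.call(rbind, pts)                                 # one row per function evaluation
The first three rows correspond to the initial simplex; later vertices have to be inferred from the sequence of evaluations, since reflections and contractions are not labelled.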
2012 Nov 03
2
optim & .C / Crashing on run
Hello,
I am attempting to use optim under the default Nelder-Mead algorithm for
model fitting, minimizing a Chi^2 statistic whose value is determined by a
.C call to an external shared library compiled from C & C++ code.
My problem has been that the R session will immediately crash upon starting
the simplex run, without it taking a single step.
This is strange, as the .C call itself works,
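A common culprit with .C is a mismatch between the R arguments and the C signature, or the C code writing past what R allocated, so a defensive wrapper that coerces every argument and pre-allocates the result is worth trying. chisq_stat and the argument layout below are placeholders, not the poster's routine:
chisq_obj <- function(par, x, y) {
  stopifnot(is.loaded("chisq_stat"))       # fail early if the symbol is not available
  out <- .C("chisq_stat",
            par    = as.double(par),       # coerce explicitly: .C does not cast for you
            npar   = as.integer(length(par)),
            n      = as.integer(length(x)),
            x      = as.double(x),
            y      = as.double(y),
            result = double(1))            # allocated in R, filled by the C code
  out$result
}
# fit <- optim(start, chisq_obj, method = "Nelder-Mead", x = xdata, y = ydata)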
2012 May 01
2
Define lower-upper bound for parameters in Optim using Nelder-Mead method
Dear UseRs,
Is there a way to define the lower-upper bounds for parameters fitted by
optim using the Nelder-Mead method?
Thanks,
Arnaud
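optim()'s Nelder-Mead does not accept bounds (supplying lower/upper makes optim warn and switch to L-BFGS-B), so the usual options are to transform the parameters or to use a bounded simplex such as dfoptim::nmkb. A sketch of the transformation route, with a toy objective:
to_bounded <- function(z, lo, hi) lo + (hi - lo) / (1 + exp(-z))   # maps R onto (lo, hi)
fn <- function(x) sum((x - 0.7)^2)                                 # toy objective
lo <- c(0, 0); hi <- c(1, 1)
fit <- optim(c(0, 0), function(z) fn(to_bounded(z, lo, hi)),
             method = "Nelder-Mead")
to_bounded(fit$par, lo, hi)                                        # back on the original scale
# Alternatively: dfoptim::nmkb(c(0.5, 0.5), fn, lower = lo, upper = hi)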
2010 Feb 08
2
evolution of Nelder-Mead process
Dear list,
I am looking for an R-only implementation of a Nelder-Mead process that can find local maxima of a spatially distributed variable, e.g. height, on a spatial grid, and outputs the coordinates of the new point during each evaluation. I have found two previous threads about this topic, and was wondering if something similar has been implemented since those messages were posted.
Thank
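A rough base-R sketch of the idea (not an existing package): look the gridded variable up at the nearest cell, record every coordinate evaluated, and let optim() maximise via fnscale = -1. The grid and surface below are invented for illustration:
xg <- seq(0, 10, by = 0.1); yg <- seq(0, 10, by = 0.1)        # hypothetical grid
height <- outer(xg, yg, function(x, y) exp(-((x - 6)^2 + (y - 3)^2) / 4))
visited <- list()
height_at <- function(p) {
  i <- which.min(abs(xg - p[1])); j <- which.min(abs(yg - p[2]))
  visited[[length(visited) + 1]] <<- c(p, height[i, j])       # log each evaluation
  height[i, j]
}
fit <- optim(c(4, 5), height_at, method = "Nelder-Mead",
             control = list(fnscale = -1))                    # maximise, not minimise
fit$par                                                       # near the local maximum (6, 3)
do.call(rbind, visited)                                       # coordinates at every step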
2007 Jan 03
1
optim
Hi!
I'm trying to figure out how to use optim... I get some really strange results, so I guess I got something wrong.
I defined the following function which should be minimized:
errorFunction <- function(localShifts, globalShift, fileName, experimentalPI, lambda)
{
  lambda <- 1/sqrt(147)
  # error <- abs(errHuber(localShifts, globalShift,
  #
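One thing to check, as a general point since the call is cut off above: optim() varies only the first argument of the function it is given, and every other argument must be passed through optim's '...' by name; also note that the lambda assignment inside the body overrides the lambda argument. A sketch with a made-up error function in place of the one above:
errorFunction <- function(localShifts, globalShift, experimentalPI, lambda) {
  # made-up body: penalised squared error, only for illustration
  sum((experimentalPI - (localShifts + globalShift))^2) + lambda * sum(abs(localShifts))
}
experimentalPI <- rnorm(5)
fit <- optim(rep(0, 5), errorFunction, method = "Nelder-Mead",
             globalShift = 0.3, experimentalPI = experimentalPI,
             lambda = 1 / sqrt(147))             # fixed arguments go through '...'
fit$par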
2012 Oct 23
1
help using optim function
Hi, I am very new to R and I've written a function that calls optim, but I can't
get it to work
least.squares.fitter <- function(start.params, gr, low.constraints, high.constraints,
                                 model.one.stepper, data, scale, ploton = F)
{
  result <- optim(par = start.params, method = c('Nelder-Mead'), fn = least.squares.fit,
                  lower = low.constraints, upper = high.constraints,
                  data = data, scale = scale, ploton = ploton)
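Two things stand out, offered as general observations since the message is truncated: optim() warns and switches to L-BFGS-B whenever lower/upper are supplied with method "Nelder-Mead", and the extra arguments (data, scale, ploton) reach the objective only if it accepts them. A sketch with a stand-in least.squares.fit:
least.squares.fit <- function(params, data, scale, ploton = FALSE) {   # stand-in objective
  sum((data$y - params[1] - params[2] * data$x)^2) / scale
}
dat <- data.frame(x = 1:20, y = 2 + 3 * (1:20) + rnorm(20))
fit <- optim(par = c(0, 1), fn = least.squares.fit, method = "L-BFGS-B",
             lower = c(-10, -10), upper = c(10, 10),        # bounds require L-BFGS-B
             data = dat, scale = 1)
fit$par                                                     # roughly c(2, 3)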
2020 Oct 28
2
R optim() function
Hi R-Help,
I am using R to do functional outlier detection (using PCA to reduce to 2 dimensions - the functional boxplot methodology used in the Rainbow package), and using Hscv.diag function to calculate the bandwidth matrix where this line of code is run:
result <- optim(diag(Hstart), scv.mat.temp, method = "Nelder-Mead", control = list(trace = as.numeric(verbose)))
Within the
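Since the question is cut off, a general note on reading what that optim() call returns; the components are the same for every method:
fit <- optim(c(1, 1), function(p) sum(p^2), method = "Nelder-Mead",
             control = list(trace = 1))        # trace prints progress, as in the call above
fit$par          # parameters at the minimum found
fit$value        # objective value there
fit$counts       # number of function (and gradient) evaluations
fit$convergence  # 0 = converged, 1 = maxit reached, 10 = degenerate simplex
fit$message      # extra method-specific information, often NULL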
2006 Aug 09
2
optim error
Dear all,
There have been one or two questions posted to the list regarding the optim
error "non-finite finite-difference value [4]." The error apparently means
that the 4th element of the gradient is non-finite. My question is what
part(s) of my program should I fiddle with in an attempt to fix it?
Starting values? Something in the log-likelihood itself? Perhaps the data
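The usual suspects are likelihood terms that underflow to -Inf or NaN and parameters stepping outside their valid range during the finite-difference gradient, so the likelihood computation and the parameterisation are the first things to fiddle with. A sketch of two common guards, with a toy normal log-likelihood in place of the poster's:
negll <- function(theta, x) {
  if (theta[2] <= 0) return(1e10)                       # finite penalty instead of NaN
  -sum(dnorm(x, mean = theta[1], sd = theta[2], log = TRUE))  # log = TRUE avoids underflow
}
x <- rnorm(200, mean = 5, sd = 2)
optim(c(0, 1), negll, x = x, method = "BFGS")$par       # roughly c(5, 2)
# Reparameterising (optimise log(sd)) removes the positivity constraint entirely:
negll2 <- function(theta, x) -sum(dnorm(x, theta[1], exp(theta[2]), log = TRUE))
exp(optim(c(0, 0), negll2, x = x, method = "BFGS")$par[2])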
2017 Dec 31
1
Order of methods for optimx
Dear R-er,
For a non-linear optimisation, I used optim() with the BFGS method, but it
regularly stopped before reaching a true minimum. It was not a problem
with the iteration limit, just a local minimum. I was sometimes able to
reach a better minimum using several rounds of optim().
Then I moved to optimx() to do the different optim rounds automatically
using "Nelder-Mead" and
2000 Nov 30
3
Optimisation methods
I don't want to re-invent the wheel, and I'm trying to code up something
that does a Nelder-Mead simplex method to minimise a non-linear objective
function. (I'm porting something I originally wrote in Matlab, using the
optimisation toolbox function fmins).
Is there already something available to do this included in R?
Do people have suggestions on the best way to do this?
Thanks,
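optim() in base R already provides this: method = "Nelder-Mead" is the counterpart of fmins/fminsearch. A minimal sketch with a toy objective:
fn <- function(p) (p[1] - 3)^2 + (p[2] + 1)^4     # any non-linear objective
optim(c(0, 0), fn, method = "Nelder-Mead")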
2009 Dec 10
1
obtain intermediate estimate using optim
Hi,
Currently I am trying to solve a minimization problem using optim with method Nelder-Mead. However, Nelder-Mead needs many iterations until it finally converges. I have set the control arguments trace and REPORT so that I can see the value of the function at each iteration. I do see that I have set the convergence criteria too strictly, in the sense that the function value does not change much. However,
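For Nelder-Mead the stopping rule is the relative tolerance reltol (about 1.5e-8 by default) together with maxit, so an intermediate estimate can be had by loosening reltol or capping maxit and, if needed, restarting from that answer; note that REPORT only applies to BFGS, L-BFGS-B and SANN. A sketch with a toy objective:
fn <- function(p) sum((p - c(2, -1))^2) + 0.01 * sum(abs(p))   # toy objective
rough <- optim(c(10, 10), fn, method = "Nelder-Mead",
               control = list(reltol = 1e-3, maxit = 100))     # stop early
final <- optim(rough$par, fn, method = "Nelder-Mead")          # refine from there
rough$par
final$par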
2012 Apr 24
1
Use of optim to fit two curves at the same time ?
Dear list,
Here is a small example of code that uses optim and optimize in order to fit
two functions.
Is it possible to fit two functions (like those two, for example) at the
same time using optim ... or another function in R?
Thanks
Arnaud
######################################################################
## function 1
x1 <- 1:100
y1 <- 5.468 * x1 + 3 # + rnorm(100, 0, 10)
dfxy <-
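Since the listing is cut off, here is a hedged sketch of the usual way to fit two curves jointly with optim(): give both curves shared parameters and minimise the sum of both residual sums of squares (the form of the second curve below is invented for illustration):
x1 <- 1:100
y1 <- 5.468 * x1 + 3 + rnorm(100, 0, 10)            # curve 1: a * x + b
x2 <- 1:100
y2 <- 5.468 * sqrt(x2) + 3 + rnorm(100, 0, 2)       # curve 2: a * sqrt(x) + b (same a, b)
sse_both <- function(p)
  sum((y1 - (p[1] * x1 + p[2]))^2) + sum((y2 - (p[1] * sqrt(x2) + p[2]))^2)
optim(c(1, 1), sse_both, method = "Nelder-Mead")$par   # close to c(5.468, 3)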
2012 Aug 18
1
Parameter scaling problems with optim and Nelder-Mead method (bug?)
Dear all,
I'm having some problems getting optim with method="Nelder-Mead" to work
properly. It seems like there is no way of controlling the step size,
and the step size seems to depend on the *difference* between the
initial values, which makes no sense. Example:
f <- function(xy, mu1, mu2) {
  print(xy)
  dnorm(xy[1] - mu1) * dnorm(xy[2] - mu2)
}
f1 <- function(xy) -f(xy, 0,
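For context, and not part of the original message: optim's Nelder-Mead builds its first simplex from the starting point, using a step of roughly 10% of the largest starting value in absolute terms (0.1 if they are all zero), so the step size cannot be set directly; the parscale control entry is the intended knob for rescaling it per parameter. A sketch with a toy problem whose two parameters live on very different scales:
f2 <- function(xy) (xy[1] - 1)^2 + ((xy[2] - 1000) / 1000)^2   # toy objective
optim(c(1, 1), f2, method = "Nelder-Mead")$par                 # default scaling
optim(c(1, 1), f2, method = "Nelder-Mead",
      control = list(parscale = c(1, 1000)))$par               # comparable unit steps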
2011 Aug 31
3
How to modify the dot-dot-dot argument using level names instead of position
Dear R-users,
In the R internals manual, it is said that one can extract the
elements of the dot-dot-dot argument using the special symbols ..1 or
..2. It seems to work just fine but I was wondering if there is a way
one can extract or modify the content of the dot-dot-dot argument
using a level name instead of its position?
For instance, assuming that list(...) returns:
$a
[1] 1 2 3 4 5
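For completeness, a small sketch of handling '...' by name rather than by position: capture it with list(...), index or modify by name, and forward the result with do.call():
grab_by_name <- function(...) {
  dots <- list(...)
  dots[["a"]]                           # by name, instead of ..1, ..2, ...
}
grab_by_name(a = 1:5, b = letters[1:3])

modify_and_forward <- function(f, ...) {
  dots <- list(...)
  dots[["a"]] <- dots[["a"]] * 10       # change one named element of '...'
  do.call(f, dots)                      # pass the modified list on
}
modify_and_forward(function(a, b) c(sum(a), nchar(b)), a = 1:5, b = "hello")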