Displaying 20 results from an estimated 8000 matches similar to: "supplying gradient to constrOptim()"
2009 Nov 18
1
bug in '...' of constrOptim (PR#14071)
Dear all,
There appears to be a bug in how constrOptim handles ... arguments that
are supposed to be passed to optim, according to the documentation. This
means you can't get the hessian to be returned, for example (so this is
a real problem, and not just a question of mistaken documentation).
Looking at the code, it appears that a call to the user-defined f
includes the ..., when the ...
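A minimal sketch (mine, not from the bug report) with a toy quadratic objective; if hessian = TRUE cannot be passed through '...', one workaround is to compute the Hessian at the solution afterwards with optimHess():
## Hypothetical illustration, not code from the bug report.
f  <- function(x) sum((x - c(1, 2))^2)   # simple quadratic objective
ui <- diag(2)                            # constraints: x1 >= 0, x2 >= 0
ci <- c(0, 0)
res <- constrOptim(c(0.5, 0.5), f, grad = NULL, ui = ui, ci = ci)
## Workaround when 'hessian = TRUE' cannot be passed through '...':
H <- optimHess(res$par, f)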
2023 Mar 31
1
Query: Could documentation include modernized references?
>>>>> Duncan Murdoch
>>>>> on Sun, 26 Mar 2023 12:41:03 -0400 writes:
> On 26/03/2023 11:54 a.m., J C Nash wrote:
>> A tangential email discussion with Simon U. has
>> highlighted a long-standing matter that some tools in the
>> base R distribution are outdated, but that so many
>> examples and other tools may use
2011 Jul 13
2
Very slow optim()
Dear list,
I am using the optim() function to estimate ~55 parameters by MLE, but it is very slow to converge (~25 min), whereas I can do the same in ~1 sec using ADMB and ~10 sec using the MS Excel Solver.
Are there any tricks to speed up?
Are there better optimization functions?
Thanks
Toshihide "Hamachan" Hamazaki, PhD
Alaska Department of Fish and Game:
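One common speed-up, suggested here as a sketch rather than taken from the thread, is to supply an analytic gradient and a gradient-based method; the data and model below are made up:
## Hypothetical example: least-squares fit with an analytic gradient.
set.seed(1)
X <- matrix(rnorm(1000 * 55), 1000, 55)
y <- X %*% runif(55) + rnorm(1000)
nll  <- function(b) sum((y - X %*% b)^2)                     # objective
grad <- function(b) as.vector(-2 * t(X) %*% (y - X %*% b))   # its gradient
fit <- optim(rep(0, 55), nll, gr = grad, method = "BFGS",
             control = list(maxit = 500))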
2017 Dec 31
1
Order of methods for optimx
Dear R-er,
For a non-linear optimisation, I used optim() with the BFGS method, but it
regularly stopped before reaching a true minimum. It was not a problem
with the iteration limit, just a local minimum. I was sometimes able to
reach a better minimum using several rounds of optim().
Then I moved to optimx() to do the different optim rounds automatically
using "Nelder-Mead" and
2006 Feb 28
3
any more direct-search optimization method in R
Hello list,
I am dealing with a noisy function (gradient and Hessian not available) with
simple boundary constraints (x_i > 0). I've tried constrOptim() with Nelder-Mead
to minimize it, but it is way too slow and the returned results are not
satisfactory. Simulated annealing is hard to tune and always crashes R
in my case. I wonder if there are any packages or functions that can do
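One common alternative for noisy, derivative-free problems with positivity constraints (my suggestion, not from the thread) is to optimise on the log scale so the constraint is handled by the transformation; noisy_f below is a stand-in objective:
## Hypothetical sketch: enforce x_i > 0 via a log transform, then use Nelder-Mead.
noisy_f <- function(x) sum((x - 2)^2) + rnorm(1, sd = 0.01)  # stand-in objective
obj_log <- function(p) noisy_f(exp(p))     # p unconstrained, x = exp(p) > 0
res <- optim(log(c(1, 1, 1)), obj_log, method = "Nelder-Mead")
x_hat <- exp(res$par)                      # back-transform to the original scale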
2011 Aug 29
3
gradient function in OPTIMX
Dear R users
When I use optim() with BFGS, I get a significant result without an error
message. However, when I use optimx() with BFGS (or spg), I get the
following error message.
----------------------------------------------------------------------------------------------------
> optimx(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS",
>
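A frequent cause of such errors is an analytic gradient that disagrees with the objective. A small check, assuming the numDeriv package; obj.fy and gr.fy here are placeholders for the poster's functions:
## Hypothetical check; obj.fy and gr.fy stand in for the poster's functions.
library(numDeriv)
obj.fy <- function(th) sum(th^2)      # placeholder objective
gr.fy  <- function(th) 2 * th         # placeholder analytic gradient
theta0 <- c(1, -2, 3)
max(abs(gr.fy(theta0) - numDeriv::grad(obj.fy, theta0)))  # should be ~0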
2004 Jul 14
0
Re: [R] constrOptim and function with additional parameters? (PR#7088)
I've moved this from r-help to r-bugs. If you reply, please be
careful that replies go to the right place: r-bugs if your comment is
specifically about the bug (and it contains the PR# in the subject
that will be added when this is cc'd to r-devel), r-devel if general
discussion, not both.
On Wed, 14 Jul 2004 10:01:45 -0400, "Roger D. Peng" <rpeng@jhsph.edu>
wrote :
2006 Aug 09
2
optim error
Dear all,
There have been one or two questions posted to the list regarding the optim
error "non-finite finite-difference value [4]." The error apparently means
that the 4th element of the gradient is non-finite. My question is what
part(s) of my program should I fiddle with in an attempt to fix it?
Starting values? Something in the log-likelihood itself? Perhaps the data
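A common guard (a sketch, not from the thread) is to return a large finite value whenever the log-likelihood would be NaN or Inf, so finite-difference gradients stay finite; negll below is a stand-in:
## Hypothetical guard around a log-likelihood; 'negll' stands in for the
## poster's function.
negll <- function(par, x) {
  val <- -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
  if (!is.finite(val)) val <- 1e10   # penalize instead of returning NaN/Inf
  val
}
set.seed(1); x <- rnorm(100, 5, 2)
optim(c(0, 1), negll, x = x, method = "BFGS")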
2004 Jul 14
0
Re: [R] constrOptim and function with additional parameters? (PR#7089)
Okay, looking at the docs, it's not a bug, since the "..."
argument is not actually documented as "other arguments passed to f or
grad". However, that *is* how it's documented in `optim', so one can
see how this might cause some confusion.
Now, it's not clear to me which other arguments need to be passed to
`optim' except perhaps `hessian'. Am
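In current R the constrOptim() help page documents '...' as named arguments passed to f and grad; a minimal sketch (mine, not from the thread) of passing a data argument that way, with a hypothetical 'target' argument:
## Hypothetical sketch: extra data arguments travel through '...' to f and grad.
f  <- function(x, target) sum((x - target)^2)
gr <- function(x, target) 2 * (x - target)
res <- constrOptim(c(1, 1), f, gr, ui = diag(2), ci = c(0, 0),
                   target = c(2, 3))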
2019 May 06
2
R optim(method="L-BFGS-B"): unexpected behavior when working with parent environments
Optim's Nelder-Mead works correctly for this example.
> optim(par=10, fn=fn, method="Nelder-Mead")
x=10, ret=100.02 (memory)
x=11, ret=121 (calculate)
x=9, ret=81 (calculate)
x=8, ret=64 (calculate)
x=6, ret=36 (calculate)
x=4, ret=16 (calculate)
x=0, ret=0 (calculate)
x=-4, ret=16 (calculate)
x=-4, ret=16 (memory)
x=2, ret=4 (calculate)
x=-2, ret=4 (calculate)
x=1, ret=1
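A minimal sketch (my reconstruction, not the poster's code) of an objective that caches its last evaluation in an enclosing environment, which is the kind of fn such a trace comes from:
## Hypothetical reconstruction of a memoizing objective like the one traced above.
make_fn <- function() {
  xx  <- NULL   # last argument seen
  ret <- NULL   # last value returned
  function(x) {
    if (!is.null(xx) && isTRUE(all.equal(x, xx))) {
      cat(sprintf("x=%g, ret=%g (memory)\n", x, ret))
    } else {
      xx  <<- x
      ret <<- x^2
      cat(sprintf("x=%g, ret=%g (calculate)\n", x, ret))
    }
    ret
  }
}
fn <- make_fn()
optim(par = 10, fn = fn, method = "Nelder-Mead")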
2008 Apr 23
1
BB - a new package for solving nonlinear systems of equations and for optimization with simple constraints
Hi,
We (Paul Gilbert and I) have just released a new R package on CRAN called
"BB" (stands for Barzilai-Borwein) that provides functionality for solving
large-scale (and small-scale) nonlinear systems of equations. Until now, R
didn't have any functionality for solving nonlinear systems. We hope that
this package fills that need.
We also have an implementation of the
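For reference, a minimal usage sketch (mine, assuming the BB package is installed) of solving a small nonlinear system with BBsolve():
## Hypothetical sketch, assuming the 'BB' package is installed from CRAN.
library(BB)
f <- function(x) c(x[1]^2 + x[2]^2 - 2,          # two equations in two unknowns
                   exp(x[1] - 1) + x[2]^3 - 2)
sol <- BBsolve(par = c(2, 0.5), fn = f)          # a root lies near (1, 1)
sol$par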
2008 Mar 05
6
box-constrained
An embedded text encoded in an unknown character set has been scrubbed...
Name: not available
URL: https://stat.ethz.ch/pipermail/r-help/attachments/20080305/80536e8c/attachment.pl
2009 May 15
1
About the efficiency of R optimization function
Hi all!
The objective function I want to minimize contains about 10 to 20 variables,
maybe more in the future. I have never solved such problems in R, so I have no
idea about the efficiency of R's optimization functions. I know loops
in R are quite slow, so I am not sure whether this shortcoming affects the
speed of R's optimization functions.
I would be very grateful if anyone could
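The per-iteration cost is usually dominated by the user's objective function, so vectorising it matters more than the optimizer itself; a small made-up comparison:
## Hypothetical comparison: the same residual sum of squares written two ways.
obj_loop <- function(b, x, y) {       # slow: explicit loop over observations
  s <- 0
  for (i in seq_along(y)) s <- s + (y[i] - b[1] - b[2] * x[i])^2
  s
}
obj_vec <- function(b, x, y) sum((y - b[1] - b[2] * x)^2)  # vectorized
set.seed(1); x <- rnorm(1e4); y <- 2 + 3 * x + rnorm(1e4)
optim(c(0, 0), obj_vec, x = x, y = y)   # the vectorized form is much faster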
2009 Apr 23
6
Stuck using constrOptim
I am trying to use constrOptim() to minimize a sum of squared deviations. I put
the objective function in as sum((x %*% Y - Z)^2), so I'm trying to find
values of x that minimize the sum of squared deviations between the
product x %*% Y and Z.
Anyway, I have no problem using this when x is a 3x1 test variable; it
works great with the constraints and everything. When I actually use it on
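A minimal sketch (mine, with made-up dimensions) of wrapping that objective so constrOptim() sees a function of the parameter vector only, with x >= 0 as an example constraint:
## Hypothetical sketch with made-up dimensions: minimize sum((x %*% Y - Z)^2)
## over a length-3 vector x, subject to x >= 0.
set.seed(1)
Y <- matrix(rnorm(3 * 5), 3, 5)
Z <- rnorm(5)
obj <- function(x) sum((x %*% Y - Z)^2)
res <- constrOptim(theta = c(1, 1, 1), f = obj, grad = NULL,
                   ui = diag(3), ci = rep(0, 3))   # enforces x_i >= 0
res$par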
2009 Sep 11
1
constrOptim parameters
Dear R wizards: I am playing (and struggling) with the example for the
constrOptim() function. A simple example: let's say I want to constrain my
variables to be between -1 and 1. I believe I want a whole lot of
constraints, where ci is -1 and ui is either -1 or 1. That is, I have 2*N
constraints. Should the following work?
N=10
x= rep(1:N)
ci= rep(-1, 2*N)
ui= c(rep(1, N), rep(-1, N))
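For the box -1 <= x_i <= 1, ui needs to be a 2N x N constraint matrix (one row per constraint ui %*% x - ci >= 0) rather than a length-2N vector; a sketch (mine) of the usual setup with a toy objective:
## Hypothetical sketch: box constraints -1 <= x_i <= 1 for constrOptim().
N  <- 10
ui <- rbind(diag(N), -diag(N))   # x_i >= -1  and  -x_i >= -1 (i.e. x_i <= 1)
ci <- rep(-1, 2 * N)
obj <- function(x) sum((x - 0.5)^2)          # toy objective
res <- constrOptim(rep(0, N), obj, grad = NULL, ui = ui, ci = ci)
res$par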
2007 Jan 03
1
optim
Hi!
I'm trying to figure out how to use optim... I get some really strange results, so I guess I got something wrong.
I defined the following function which should be minimized:
errorFunction <- function(localShifts,globalShift,fileName,experimentalPI,lambda)
{
lambda <- 1/sqrt(147)
# error <- abs(errHuber(localShifts,globalShift,
#
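optim() varies only the first argument of fn; the remaining arguments have to be supplied through '...'. A minimal sketch of that pattern (mine, with placeholder names echoing the post):
## Hypothetical pattern: optimize over 'localShifts' only, pass the rest via '...'.
errorFunction <- function(localShifts, globalShift, lambda) {
  sum((localShifts - globalShift)^2) + lambda * sum(abs(localShifts))  # placeholder
}
res <- optim(par = rep(0, 5), fn = errorFunction,
             globalShift = 1, lambda = 1 / sqrt(147))
res$par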
2019 May 03
2
R optim(method="L-BFGS-B"): unexpected behavior when working with parent environments
Yes, I think you are right. I was at first confused by the fact that after the optim() call,
> environment(fn)$xx
[1] 10
> environment(fn)$ret
[1] 100.02
so not 9.999, but this could come from x being assigned the final value without calling fn.
-pd
> On 3 May 2019, at 11:58 , Duncan Murdoch <murdoch.duncan at gmail.com> wrote:
>
> Your results below make it look like a
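For reference, a small self-contained sketch (mine) of how such cached state lives in the function's enclosing environment and can be inspected with environment():
## Hypothetical sketch: a closure's cached state can be read off its environment.
make_fn <- function() {
  xx <- NULL; ret <- NULL
  function(x) { xx <<- x; ret <<- x^2; ret }
}
fn <- make_fn()
invisible(fn(10))
environment(fn)$xx    # 10
environment(fn)$ret   # 100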
2010 Sep 04
3
How can I fix convergence=1 in optim
Hi R users,
I am using the optim() function to maximize a log likelihood function. My
code is as follows:
p<-optim(c(-0.2392925,0.4653128,-0.8332286, 0.0657, -0.0031, -0.00245,
3.366, 0.5885, -0.00008,
0.0786,-0.00292,-0.00081, 3.266, -0.3632, -0.000049, 0.1856,
0.00394, -0.00193, -0.889, 0.5379, -0.000063,
0.213, 0.00338, -0.00026, -0.8912, -0.3023, -0.000056), f,
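convergence = 1 means the iteration limit was reached, so the usual first step is to raise control$maxit (and to set fnscale = -1 when maximizing); a sketch with a stand-in likelihood:
## Hypothetical sketch: maximize a log-likelihood and raise the iteration limit.
set.seed(1); x <- rnorm(200, mean = 1, sd = 2)
loglik <- function(par) sum(dnorm(x, par[1], exp(par[2]), log = TRUE))
p <- optim(c(0, 0), loglik,
           control = list(fnscale = -1, maxit = 5000))  # fnscale = -1 => maximize
p$convergence   # 0 indicates successful convergence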
2007 Apr 05
2
Likelihood returning inf values to optim(L-BFGS-B) other options?
Dear R-help list,
I am working on an optimization with R by evaluating a likelihood
function that contains lots of Gamma calculations (BGNBD: Hardie Fader
Lee 2005 Management Science). Since I am forced to implement lower
bounds for the four parameters included in the model, I chose the
optim() function with L-BFGS-B as the method. But the likelihood often
returns Inf values, which L-BFGS-B
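Two standard remedies (sketched here with a stand-in model, not the poster's BG/NBD likelihood) are to keep the Gamma terms on the log scale via lgamma() and to substitute a large finite penalty for non-finite values before L-BFGS-B sees them:
## Hypothetical sketch: log-scale Gamma terms plus a finite fallback value,
## using a simple negative binomial likelihood as a stand-in model.
negll <- function(par, x) {
  r <- par[1]; alpha <- par[2]
  ll <- sum(lgamma(r + x) - lgamma(r) - lgamma(x + 1) +
            r * log(alpha / (alpha + 1)) + x * log(1 / (alpha + 1)))
  if (!is.finite(ll)) return(1e10)   # keep L-BFGS-B away from Inf/NaN
  -ll
}
set.seed(1); x <- rnbinom(500, size = 2, prob = 0.4)
optim(c(1, 1), negll, x = x, method = "L-BFGS-B",
      lower = c(1e-6, 1e-6))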
2007 Sep 05
3
'singular gradient matrix' when using nls() and how to make the program skip nls() and run on
Dear friends.
I use nls() and encounter the following puzzling problem:
I have a function f(a,b,c,x), a data vector x, and a vector y of
realized values of f.
Case1
I tried to estimate c with (a=0.3, b=0.5) fixed:
nls(y~f(a,b,c,x), control=list(maxiter = 100000, minFactor=0.5
^2048),start=list(c=0.5)).
The error message is: "number of iterations exceeded maximum of
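For the "run on" part, the usual idiom is to wrap the nls() call in tryCatch() so a failed fit is skipped and the loop continues; a sketch with a toy model (not the poster's f):
## Hypothetical sketch: keep going when nls() fails for some data sets.
set.seed(1)
datasets <- replicate(5, {
  x <- runif(30); list(x = x, y = exp(1.5 * x) + rnorm(30, sd = 0.1))
}, simplify = FALSE)
fits <- lapply(datasets, function(d) {
  tryCatch(nls(y ~ exp(k * x), data = d, start = list(k = 0.5)),
           error = function(e) NULL)     # skip this data set and run on
})
sum(sapply(fits, is.null))               # how many fits failed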