Displaying 20 results from an estimated 10000 matches similar to: "messing with ..."
2006 Jun 23
1
How to use mle or similar with integrate?
Hi
I have the following formula (I hope it is clear - if not, I can try to
do better next time):
h(x, a, b) = integral over alpha from 0 to pi/2 of
             [ integral over x from D/sin(alpha) to Inf of f(x, a, b) dx ] dalpha
and I want to do an mle with it.
I know how to use mle() and I also know about integrate(). My problem is
how to pass the parameter values a and b to the
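One way to wire this up is a minimal sketch, not the poster's actual model: f, D, the data obs and the way h() enters the likelihood are all placeholders. The point is only that integrate() forwards extra named arguments to the integrand, so a and b can be passed straight down, and the double integral can then sit inside a negative log-likelihood for stats4::mle():

f <- function(x, a, b) dgamma(x, shape = a, rate = b)   # hypothetical integrand
D <- 1                                                  # hypothetical constant

h <- function(a, b) {
  inner <- function(alpha) integrate(f, lower = D / sin(alpha), upper = Inf,
                                     a = a, b = b)$value   # a, b forwarded via ...
  integrate(Vectorize(inner), lower = 0, upper = pi / 2)$value
}

library(stats4)
obs <- rgamma(50, shape = 2, rate = 1)                  # toy data
nll <- function(a = 2, b = 1)                           # illustrative likelihood only
  -sum(dgamma(obs, shape = a, rate = b, log = TRUE)) + length(obs) * log(h(a, b))
fit <- mle(nll, method = "L-BFGS-B", lower = c(a = 0.1, b = 0.1))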
2005 Sep 06
2
fitting distributions with R
Dear all
I've got the dataset
data: 2743; 4678; 21427; 6194; 10286; 1505; 12811; 2161; 6853; 2625; 14542; 694; 11491;
14924; 28640; 17097; 2136; 5308; 3477; 91301; 11488; 3860; 64114; 14334
I know from other testing that it should be possible to fit the data with the
exponential distribution. I tried to get parameter estimates for the
exponential distribution with R, but as the values
of the parameter
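For reference, fitting an exponential to exactly these 24 values is a one-liner with MASS::fitdistr (a sketch of one possible route; the snippet does not say which function the poster used), and the maximum-likelihood estimate of the rate is simply 1/mean(x):

library(MASS)
x <- c(2743, 4678, 21427, 6194, 10286, 1505, 12811, 2161, 6853, 2625, 14542, 694,
       11491, 14924, 28640, 17097, 2136, 5308, 3477, 91301, 11488, 3860, 64114, 14334)
fitdistr(x, "exponential")   # ML estimate of the rate, with standard error
1 / mean(x)                  # the same estimate in closed form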
2008 Jul 05
3
Editing the "..." argument
Dear all,
I'd like to tweak the ... arguments that a user can pass to my
function for fitting a model. More precisely, my objective function is
(really) problematic to optimize using the "optim" function.
Consequently, I'd like to add to the "control" argument of the latter
function an "ndeps = rep(something, #par)" and/or "parscale =
2010 Mar 01
2
Advice wanted on using optim with both continuous and discrete par arguments...
Dear R users,
I have a problem for which my objective function depends on both discrete and continuous arguments.
The problem is that the number of combinations for the (multivariate) discrete arguments can become overwhelming (when it is univariate this is not an issue), so searching over the continuous arguments for every possible combination of the discrete arguments may not be feasible. Guided
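When the discrete part is still small enough to enumerate, one brute-force sketch (the objective and the grids below are made up for illustration) is to run a continuous optim() for every row of an expand.grid() over the discrete arguments and keep the best result:

obj <- function(cont, disc) sum((cont - disc)^2)        # hypothetical objective
disc_grid <- expand.grid(d1 = 1:3, d2 = c(0, 5))        # all discrete combinations

fits <- lapply(seq_len(nrow(disc_grid)), function(i) {
  d <- as.numeric(disc_grid[i, ])
  optim(par = c(0, 0), fn = obj, disc = d)
})
best <- which.min(sapply(fits, `[[`, "value"))
list(discrete = disc_grid[best, ], continuous = fits[[best]]$par)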
2013 Apr 09
5
Error when using fitdist function in R
Hello everyone,
I was trying to fit a distribution to a numerical field called
Tolls. The sample size is 999 rows.
Basically I assigned the Toll data to a new variable k by doing:
k <- dtest$Toll
After that, I tried to fit a gamma distribution by doing:
fitG <- fitdist(k, "gamma")
Then the following messages showed (oh, and I checked for empty rows before
doing this):
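Without the actual error message it is hard to say more, but two workarounds that often get fitdist() past problems with a gamma fit are sketched below (the simulated toll values only stand in for dtest$Toll):

library(fitdistrplus)
k <- rgamma(999, shape = 1.2, rate = 0.002)   # placeholder for dtest$Toll

## (a) rescale large values so the default starting point is reasonable
fitG1 <- fitdist(k / mean(k), "gamma")

## (b) or supply explicit method-of-moments starting values
start <- list(shape = mean(k)^2 / var(k), rate = mean(k) / var(k))
fitG2 <- fitdist(k, "gamma", start = start)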
2011 May 25
1
L-BFGS-B and parscale in optim()
Hi,
When using method L-BFGS-B along with a parscale argument, should the
lower and upper bounds provided be on the scaled or unscaled values?
Thanks.
Cheers,
--
Seb
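One way to settle this empirically, rather than by reading the source, is a tiny discriminating test (a sketch; it asserts nothing, it just reports which interpretation optim() actually uses):

f <- function(p) (p - 50)^2
optim(par = 1, fn = f, method = "L-BFGS-B",
      lower = 0, upper = 10,
      control = list(parscale = 100))$par
## a result near 10 -> the bounds constrain the original, unscaled parameter
## a result near 50 -> the bounds apply to par/parscale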
2012 Nov 15
1
hessian fails for box-constrained problems when close to boundary?
Hi
I am trying to recover the hessian of a problem optimised with
box constraints. The problem is that in some cases my estimates are very
close to the boundary, which makes optim(..., hessian=TRUE) or
optimHess() fail, as they do not respect the box constraints and hence
evaluate the function in the infeasible parameter space.
As a simple example (my problem is more complex though,
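One hedged workaround is to remove the boundary altogether by re-parameterising (here a parameter bounded below by 0 is optimised on the log scale) and then asking stats::optimHess for the hessian on the unconstrained scale, mapping back with the delta method if the original scale is needed; nll below is only a toy whose minimum sits very close to 0:

nll <- function(theta) -dgamma(theta, shape = 1.01, rate = 50, log = TRUE)

nll_log <- function(eta) nll(exp(eta))     # eta = log(theta): no boundary left
fit <- optim(log(0.1), nll_log, method = "BFGS")
H <- optimHess(fit$par, nll_log)           # finite differences stay feasible
exp(fit$par)                               # estimate back on the original scale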
2004 Sep 13
2
Problem with mle in stats4 (R 1.9.1)
Hi!
This is a repost of an earlier message (with a clearer example
demonstrating the problem I ran into). If you run the mle example in
stats4
library(stats4)
x <- 0:10
y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
## negative log-likelihood for a Poisson model with mean ymax / (1 + x/xhalf)
ll <- function(ymax = 15, xhalf = 6)
  -sum(stats::dpois(y, lambda = ymax/(1 + x/xhalf), log = TRUE))
(fit <- mle(ll))   # starting values are taken from the defaults of ll
plot(profile(fit),
2008 Mar 13
3
Use of ellipses ... in argument list of optim(), integrate(), etc.
Hi,
I have noticed that there is a change in the placement of the ellipsis ("...")
argument in R versions 2.6.1 and later. In versions 2.5.1 and earlier, the "..."
was always at the end of the argument list, but in 2.6.1 it is placed after the main
arguments and before the method control arguments. This results in the user
having to specify the exact (complete) names of the control arguments, i.e.
partial matching is
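The practical consequence is easy to see with two toy headers (not the real optim()/integrate() signatures): formals placed after ... are never partially matched, so their full names must be typed out.

f_old <- function(x, method = "default", ...) method   # ... at the end
f_new <- function(x, ..., method = "default") method   # ... before 'method'

f_old(1, meth = "abc")     # partial matching still works: returns "abc"
f_new(1, meth = "abc")     # 'meth' is swallowed by ...: returns "default"
f_new(1, method = "abc")   # the full name is now required: returns "abc"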
2008 Mar 23
2
scaling problems in "optim"
Dear R users,
I am trying to figure out the control parameters in "optim", especially
"fnscale" and "parscale".
The R documentation says:
------------------------------------------------------
fnscale
An overall scaling to be applied to the value of fn and gr during
optimization. If negative, turns the problem into a maximization problem.
Optimization is performed on
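A small illustration of the two controls on a toy objective (maximum at (2, 3000)): fnscale = -1 flips optim() into a maximiser, and parscale tells it that the second parameter lives on a scale roughly 1000 times larger than the first.

loglik <- function(p) -(p[1] - 2)^2 - (p[2] / 1000 - 3)^2

optim(par = c(0, 1), fn = loglik, method = "BFGS",
      control = list(fnscale = -1,            # maximise instead of minimise
                     parscale = c(1, 1000)))$par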
2011 Sep 27
2
Error in optim function.
I'm trying to calculate the maximum likelihood estimate for a binomial
distribution. Here is my code:
y <- c(2, 4, 2, 4, 5, 3)
n <- length(y)
binomial.ll <- function (pi, y, n) { ## define log-likelihood
output <- y*log(pi)+(n-y)*(log(1-pi))
return(output)
}
binomial.mle <- optim(0.01, ## starting value
binomial.ll,
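Two things usually need changing in a setup like this (a hedged sketch, with 10 trials per observation as a pure placeholder since the post does not state the trial count): optim() expects a single number back, so the log-likelihood has to be summed and negated, and the number of trials should come from the experiment rather than from length(y).

y <- c(2, 4, 2, 4, 5, 3)
trials <- 10                                   # hypothetical trials per observation

binomial.nll <- function(p, y, trials)
  -sum(y * log(p) + (trials - y) * log(1 - p)) # scalar negative log-likelihood

binomial.mle <- optim(0.5, binomial.nll, y = y, trials = trials,
                      method = "Brent", lower = 1e-6, upper = 1 - 1e-6)
binomial.mle$par                               # close to mean(y) / trials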
2003 Jul 16
2
numerical differentiation in R? (for optim "SANN" parscale)
Dear R users,
I am running a maximum likelihood model with optim. I chose the
simulated annealing method (method="SANN").
SANN is not performing badly, but I guess it would be much more effective
if I could set the `parscale' parameter.
The help says:
`parscale' A vector of scaling values for the parameters.
Optimization is performed on `par/parscale' and these
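One hedged recipe (the objective and the starting values below are stand-ins): take a rough central-difference gradient at the starting point and use the inverse gradient magnitudes as parscale, so a unit step in each scaled coordinate changes the objective by a comparable amount; SANN's default proposal kernel then moves all parameters on sensible scales.

nll   <- function(p) (p[1] - 2)^2 + (p[2] / 1000 - 3)^2   # toy objective
start <- c(0, 1)

num_grad <- function(f, p, eps = 1e-4)          # simple central differences
  sapply(seq_along(p), function(i) {
    h <- numeric(length(p)); h[i] <- eps * max(1, abs(p[i]))
    (f(p + h) - f(p - h)) / (2 * h[i])
  })

pscale <- 1 / pmax(abs(num_grad(nll, start)), 1e-8)

fit <- optim(start, nll, method = "SANN",
             control = list(parscale = pscale, maxit = 20000))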
2008 Apr 05
2
How to improve the "OPTIM" results
Dear R users,
I used to "OPTIM" to minimize the obj. function below. Even though I used
the true parameter values as initial values, the results are not very good.
How could I improve my results? Any suggestion will be greatly appreciated.
Regards,
Kathryn Lord
#------------------------------------------------------------------------------------------
x = c(0.35938587,
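A few knobs that often tighten up an optim() fit, sketched on a stand-in objective (the original one is cut off in the snippet): a second, gradient-based pass from the first solution, a tighter relative tolerance, and a larger iteration budget.

nll   <- function(p) (p[1] - 1)^2 + 100 * (p[2] - p[1]^2)^2   # Rosenbrock-style toy
start <- c(-1.2, 1)

fit1 <- optim(start, nll)                            # default Nelder-Mead
fit2 <- optim(fit1$par, nll, method = "BFGS",        # polish the first answer
              control = list(reltol = 1e-12, maxit = 5000))
rbind(first_pass = fit1$par, polished = fit2$par)    # both should approach c(1, 1)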
2006 May 01
1
Problem with optim()
I am having a problem with optim() using the "L-BFGS-B" method. When I
set the lower limit for the third parameter equal to zero I get an
error message:
> low.lim.3 <- 0
> phi_opt <- optim(phi_, model_lik, NULL, method = "L-BFGS-B", lower=c(0.2, -100, low.lim.3, 0), upper= c(10, 100, 10, 10), control = list(maxit = 1000, parscale = c(0.2, u1, 0.002, 0.002), trace =
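A hedged guess at the usual culprit: if the objective is not finite when the third parameter is exactly zero (a log or a division somewhere inside model_lik), L-BFGS-B can evaluate on the bound and stop with a non-finite-value error. A common workaround is a small positive lower limit instead of a hard zero; the toy objective below only stands in for the real model_lik.

model_lik <- function(phi) sum((phi - c(1, 0, 0.5, 0.5))^2) - log(phi[3])

phi_opt <- optim(c(1, 0, 0.5, 0.5), model_lik, method = "L-BFGS-B",
                 lower = c(0.2, -100, 1e-8, 0),      # 1e-8 rather than an exact 0
                 upper = c(10, 100, 10, 10),
                 control = list(maxit = 1000))
phi_opt$par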
2002 Apr 22
3
glm() function not finding the maximum
Hello,
I have found a problem with using the glm function with a gamma
family.
I have a vector of data, assumed to be generated by a gamma distribution.
The parameters of this gamma distribution are estimated in two ways: (i)
using the glm() function, and (ii) "by hand", using the optim() function.
I find that the -2*log-likelihood at the maximum found by (i) is substantially
larger than that
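The two routes do not target the same thing, which may explain part of the gap: glm() with a Gamma family fits the mean by IWLS and estimates the dispersion (the inverse shape) by moments, not by maximum likelihood, while a direct optim() fit maximises the full gamma likelihood. A hedged comparison on simulated data (not the poster's):

set.seed(1)
y <- rgamma(200, shape = 3, rate = 2)

## (i) glm: intercept-only Gamma model with a log link
fit_glm <- glm(y ~ 1, family = Gamma(link = "log"))
mu  <- exp(coef(fit_glm)[1])                    # fitted mean
shp <- 1 / summary(fit_glm)$dispersion          # moment-based shape estimate

## (ii) direct maximum likelihood with optim (log scale keeps parameters positive)
nll <- function(p) -sum(dgamma(y, shape = exp(p[1]), rate = exp(p[2]), log = TRUE))
fit_ml <- optim(c(0, 0), nll)

c(glm_minus2ll = -2 * sum(dgamma(y, shape = shp, rate = shp / mu, log = TRUE)),
  ml_minus2ll  =  2 * fit_ml$value)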
2003 Feb 28
2
optim
Dear all,
I have a function MYFUN which depends on 3 positive parameters TETA[1],
TETA[2], and TETA[3]; x belongs to [0,1].
I integrate the function over [0,0.1], [0.1,0.2] and
[0.2,0.3] and want to choose the three parameters so that
these three integrals are as close to, resp., 2300, 4600 and 5800 as
possible. As I have three equations with three unknowns, I expect an
exact fit, i.e., the SS
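A sketch of one way to set this up (MYFUN below is a made-up placeholder for the real function): compute the three integrals with integrate(), form the sum of squared deviations from the targets, and let optim() work on log-parameters so that TETA stays positive.

MYFUN <- function(x, teta) teta[1] * x^teta[2] * exp(-teta[3] * x)   # placeholder

targets <- c(2300, 4600, 5800)
breaks  <- c(0, 0.1, 0.2, 0.3)

ss <- function(log_teta) {
  teta <- exp(log_teta)                       # enforce positivity
  ints <- sapply(1:3, function(i)
    integrate(MYFUN, breaks[i], breaks[i + 1], teta = teta)$value)
  sum((ints - targets)^2)
}

fit <- optim(c(0, 0, 0), ss, control = list(maxit = 5000))
exp(fit$par)                                  # estimated TETA on the original scale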
2010 Jul 05
3
selection of optim parameters
Hi all,
I am trying to rebuild the results of a study using a different data
set. I'm using about 450 observations. The code I've written seems to
work well, but I have some trouble minimizing the negative of the
log-likelihood function using 5 free parameters.
As starting values I am using the results of the paper I am rebuilding.
The system.time of the calculation of the function is
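When a 5-parameter negative log-likelihood is slow and sensitive to the starting point, a cheap sanity check is a small multi-start around the published estimates (a hedged sketch; nll and paper_est below are placeholders, not values from the study being rebuilt):

nll <- function(p) sum((p - c(0.1, 1, -2, 5, 0.5))^2)     # stand-in objective
paper_est <- c(0.2, 1.1, -1.8, 4.5, 0.4)                  # published starting values

set.seed(42)
starts <- rbind(paper_est,
                t(replicate(9, paper_est * runif(5, 0.8, 1.2))))
fits <- lapply(seq_len(nrow(starts)),
               function(i) optim(starts[i, ], nll, method = "BFGS"))
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
best$par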
2001 Apr 09
4
fastest R platform
Hello, everyone! I picked up R several months ago and have adopted it
as my choice for statistical programming. Coming from a Java
background, I can honestly say that R is not only free, it is better
than S-plus: the lexical scope in R makes it very simple to simulate
Java's object model. For this, I encourage everyone to read the article:
Robert Gentleman and Ross Ihaka (2000) "Lexical
2006 Feb 02
2
how to use mle?
>Y
[,1] [,2] [,3]
[1,] 0 1 0
[2,] 0 1 0
[3,] 0 0 1
[4,] 1 0 0
[5,] 0 0 1
[6,] 0 0 1
[7,] 1 0 0
[8,] 1 0 0
[9,] 0 0 1
[10,] 1 0 0
>X
pri82 pan82
1 0 0
2 0 0
3 1 0
4 1 0
5 0 1
6 0 0
7 1 0
8 1 0
9 0 0
10