similar to: maxLik package

Displaying 20 results from an estimated 6000 matches similar to: "maxLik package"

2009 May 16
1
maxLik package
Hi all; I have recently been using the 'maxLik' function to maximize the G2StNV178 function with the gradient function gradlik. To do this I wrote the following program, but I get an error when the gradient is called; maxLik never enters the gradlik function (the definition of the gradient function). I guess my mistake is in line ********, where the vector ‘h’ is
2009 Mar 02
1
initial gradient and vmmin not finite
Dear R-helpers, I have a problem with initial values; could you please tell me how to solve it? Thank you, June > p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2))) Error in maxRoutine(fn = logLik, grad = grad, hess = hess, start = start, : NA in the initial gradient > p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2),method="BFGS")) Error in optim(start, func, gr =
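A common cause of the "NA in the initial gradient" error is a log-likelihood that is not finite at the starting values. A minimal check, assuming fr is the log-likelihood used in the call above, is to evaluate it and its numerical gradient at the start vector before calling maxLik:

    library(maxLik)
    start <- c(0, 0, 0, 1, 0, -25, -0.2)
    fr(start)                                 # should be a finite number
    numericGradient(fr, t0 = start)           # should contain no NA/NaN/Inf
    p <- summary(maxLik(fr, start = start, method = "BFGS"))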
2010 May 10
2
Robust SE & Heteroskedasticity-consistent estimation
Hi, I'm using maxLik with the functions specified (L, its gradient & Hessian). Now I would like to determine robust standard errors for my estimators. I tried to use vcovHC, hccm or robcov, for example, but when I use one of them on my maxLik result I get the following error message: Error in terms.default(object) : no terms component. Is there some attribute
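vcovHC(), hccm() and robcov() expect regression-style objects with a terms component, which a maxLik fit does not carry. One workaround, sketched here rather than taken from the original thread, is to build the sandwich estimator directly from the Hessian and an observation-level gradient matrix, where fit stands for the maxLik result; this assumes the log-likelihood was supplied to maxLik observation by observation, and uses the estfun() generic from the sandwich package:

    library(maxLik)
    library(sandwich)                    # provides the estfun() generic
    H <- hessian(fit)                    # k x k Hessian at the optimum
    G <- estfun(fit)                     # n x k matrix of per-observation gradients
    bread <- solve(-H)
    meat  <- crossprod(G)                # sum over i of g_i %*% t(g_i)
    vc    <- bread %*% meat %*% bread    # robust (sandwich) covariance matrix
    sqrt(diag(vc))                       # robust standard errors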
2010 Mar 12
2
Question regarding to maxNR
Hi R-users, Recently I have been using the maxNR function to find a maximizer. I get an error that appears as follows: Error in maxNRCompute(fn = fn, grad = grad, hess = hess, start = start, : NA in the initial gradient My code is mu=2 s=1 n=300 library(maxLik) set.seed(1004) x<-rcauchy(n,mu,s) loglik<-function(mu) { log(prod(dcauchy(x,mu,s))) } maxNR(loglik,start=median(x))$estimate Does anyone know how
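The product of 300 Cauchy densities easily underflows to zero, in which case log(prod(...)) is -Inf at the start value and the numerical gradient comes out as NA. A sketch of the usual fix is to sum the log-densities instead:

    library(maxLik)
    mu <- 2; s <- 1; n <- 300
    set.seed(1004)
    x <- rcauchy(n, mu, s)
    loglik <- function(mu) sum(dcauchy(x, mu, s, log = TRUE))   # no underflow
    maxNR(loglik, start = median(x))$estimate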
2009 Apr 03
2
Geometric Brownian Motion Process with Jumps
Hi, I have been using maxLik to do some MLE of a Geometric Brownian Motion process and everything has been going fine, but now I have tried to do it with jumps. I have created a vector of jumps and then added this into my log-likelihood equation; now I am getting the message: NA in the initial gradient My code is here # n<-length(combinedlr) j<-c(1,2,3,4,5,6,7,8,9,10)
2011 May 03
3
help with the maxBHHH routine
Hello R community, I have been using R's inbuilt maximum likelihood functions, for the different methods (NR, BFGS, etc.). I have figured out how to use all of them except the maxBHHH function. This one is different from the others as it requires an observation-level gradient. I am using the following syntax: maxBHHH(logLik,grad=nuGradient,finalHessian="BHHH",start=prm,iterlim=2)
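For maxBHHH both the log-likelihood and the gradient must be supplied at the observation level: the log-likelihood as a vector of per-observation contributions and the gradient as an n x k matrix with one row per observation. A toy normal-sample sketch, reusing the names from the post but not the original model:

    library(maxLik)
    set.seed(1)
    y <- rnorm(100, mean = 1, sd = 2)
    logLik <- function(theta) {              # returns n contributions, not their sum
      dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE)
    }
    nuGradient <- function(theta) {          # n x 2 matrix of per-observation gradients
      mu <- theta[1]; s <- exp(theta[2])
      cbind((y - mu) / s^2,                  # d logLik_i / d mu
            (y - mu)^2 / s^2 - 1)            # d logLik_i / d log(s)
    }
    prm <- c(0, 0)
    maxBHHH(logLik, grad = nuGradient, start = prm, iterlim = 2)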
2020 Oct 08
0
[External] Re: unable to access index for repository...
Oh hi Arne, You may recall we visited this before. I do not believe the problem is algorithm specific. The algorithms I use most often are BFGS and BHHH (or maxBFGS and maxBHHH). For simple econometric models such as probit, Tobit, and even sample selection models, old and new versions of R work equally well (I write my own programs and do not use the ones from AER or sampleSelection).
2020 Oct 09
1
[External] Re: unable to access index for repository...
>>>>> Steven Yen >>>>> on Fri, 9 Oct 2020 05:39:48 +0800 writes: > Oh hi Arne, You may recall we visited this before. I > do not believe the problem is algorithm specific. The > algorithms I use most often are BFGS and BHHH (or > maxBFGS and maxBHHH). For simple econometric models such > as probit, Tobit, and even
2020 Oct 08
2
[External] Re: unable to access index for repository...
Hi Steven, Which optimisation algorithms in maxLik work better under R-3.0.3 than under the current version of R? /Arne On Thu, 8 Oct 2020 at 21:05, Steven Yen <styen at ntu.edu.tw> wrote: > > Hmm. You raised an interesting point. Actually I am not having problems with aod per se; it is just a supporting package I need while using old R. The essential package I need, maxLik, simply
2010 Sep 14
2
Can I monitor the iterative/convergence process while using Optim or MaxLik?
Hi R-helpers, Is it possible to have the estimates from each step/iteration shown on screen, so that I can monitor the process while using optim or maxLik? Thanks for your help. Maomao
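Both optimisers can report their progress: optim() via control = list(trace = ...) and maxLik() via its print.level argument (printLevel in newer versions). A sketch, assuming f is the objective being maximised and start the vector of starting values:

    # optim: trace > 0 prints progress, REPORT controls how often (BFGS / L-BFGS-B)
    optim(par = start, fn = f, method = "BFGS",
          control = list(fnscale = -1, trace = 1, REPORT = 1))

    # maxLik: larger print.level values print more detail per iteration
    library(maxLik)
    maxLik(logLik = f, start = start, method = "BFGS", print.level = 2)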
2009 Apr 10
1
Re: MLE Issues
Hi, I have been having issues with an ML estimator for a jump diffusion process, but now I am getting a little error I didn't notice before when I try to create a vector > #GBMPJ MLE Combined Ph 1 LR > # > n<-length(combinedlrph1) > j<-c(1,2,3,4,5,6,7,8,9,10) Error in c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) : unused argument(s) (3, 4, 5, 6, 7, 8, 9, 10) >
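The "unused argument(s)" error from c() usually means base c() has been masked by a user-defined object called c created earlier in the session, for example a two-argument function. A sketch of how to check and recover:

    c <- function(a, b) a + b       # an accidental redefinition like this masks base::c
    ## c(1, 2, 3, 4)                # would now fail: unused argument(s) (3, 4)
    rm(c)                           # remove the masking object
    j <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)   # base::c works again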
2010 Aug 28
1
maxNR in maxLik package never stops
Greetings, I use the maxNR function from the maxLik package to find the REML estimates of the parameters of the variance components in heteroskedastic linear regression models. I assume that in the model there is additive/multiplicative/mixed heteroskedasticity and I need to estimate the respective parameters of the additive/multiplicative/mixed variance components. For my research purposes I make a
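If maxNR really is cycling rather than converging, its stopping behaviour can be tightened explicitly. A sketch of the relevant arguments, with purely illustrative values and with loglik and start standing in for the model described above:

    library(maxLik)
    fit <- maxNR(loglik, start = start,
                 iterlim = 500,       # hard cap on the number of Newton-Raphson steps
                 gradtol = 1e-6,      # stop once the gradient is this small
                 tol     = 1e-8,      # stop once successive values barely change
                 print.level = 2)     # watch where the iterations get stuck
    returnMessage(fit)                # reports why the algorithm terminated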
2010 Mar 22
1
maxNR - Error in p(a, b) : element 1 is empty; the part of the args list of '*' being evaluated was: (b, t)
Hello everyone... We were trying to implement the Newton-Raphson method in R and estimate the parameters a and b of a function, F, but we can't seem to implement this the right way. I hope you can show me the right way to do this. I think what we want R to do is to read the data from the website and then perform maxNR on the function, F. Btw the version of R being used is "RGui for
2010 Oct 01
1
Place constrictions on parameters when using Optim and MaxLik
Hi R users, I am trying to restrict the range of two of the parameters in a maximization problem. Both parameters should be between -1 and 1. As far as I know, if I choose the estimation method ="L-BFGS-B" under optim, I can restrict the parameter space. However, "L-BFGS-B" always requires finite values of the loglik function and cannot get around the problem if an
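One way around the finite-value requirement of "L-BFGS-B" is to reparameterise: optimise unconstrained parameters and map the restricted ones into (-1, 1) with tanh inside the objective, then transform the estimates back. A sketch, assuming loglik(theta) is the original log-likelihood, its first two elements are the restricted parameters, and start_u holds unconstrained starting values:

    loglik_u <- function(u) {
      theta <- u
      theta[1:2] <- tanh(u[1:2])       # maps the whole real line into (-1, 1)
      loglik(theta)
    }
    fit <- optim(par = start_u, fn = loglik_u, method = "BFGS",
                 control = list(fnscale = -1))
    est <- fit$par
    est[1:2] <- tanh(est[1:2])         # back-transform to the constrained scale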
2011 Apr 10
0
maxLik package.
Dear Sir/Madam, I have an enquiry about the maxLik package in R. The package has the usage maxLik(logLik, grad, hess, start, method, iterlim, print.level). When I use this with print.level equal to 3 I get estimates of the parameters at each iteration, but I do not know how to retrieve that information afterwards. Is there any way to access the information within
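The iteration-by-iteration estimates that print.level = 3 displays are not, by default, stored in the object that maxLik returns. One base-R workaround (a sketch, with logLik and start standing in for the actual model) is to wrap the log-likelihood so it records the parameter vector every time it is called:

    trace_env <- new.env()
    trace_env$path <- list()
    logLik_traced <- function(theta) {
      trace_env$path[[length(trace_env$path) + 1]] <- theta   # record this evaluation
      logLik(theta)                                           # original log-likelihood
    }
    fit  <- maxLik(logLik_traced, start = start)
    path <- do.call(rbind, trace_env$path)   # one row per log-likelihood evaluation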
2007 Jan 12
1
incorrect result of deriv (PR#9449)
Full_Name: Joerg Polzehl Version: 2.3.1 OS: x86_64, linux-gnu Submission from: (NULL) (62.141.176.22) I observed an incorrect behavior of function deriv when evaluating arguments of dnorm deriv(~dnorm(z,0,s),"z") expression({ .value <- dnorm(z, 0, s) .grad <- array(0, c(length(.value), 1), list(NULL, c("z"))) .grad[, "z"] <- -(z * dnorm(z))
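For reference, the correct derivative is d/dz dnorm(z, 0, s) = -(z / s^2) * dnorm(z, 0, s); the expression reported above drops the scale s. A quick numerical check of the discrepancy, with illustrative values:

    z <- 1.3; s <- 2
    analytic <- -(z / s^2) * dnorm(z, 0, s)
    reported <- -(z * dnorm(z))                              # what deriv() produced
    numeric  <- (dnorm(z + 1e-6, 0, s) - dnorm(z - 1e-6, 0, s)) / 2e-6
    c(analytic = analytic, reported = reported, numeric = numeric)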
2010 Apr 06
2
Extracting formulae from expression() / deriv()
I am attempting to extract the derivative/gradient from this expression df1p <- deriv(f1, "P") > df1p expression({ .value <- s - c - a * P .grad <- array(0, c(length(.value), 1L), list(NULL, c("P"))) .grad[, "P"] <- -a attr(.value, "gradient") <- .grad .value }) So in this case I want to extract the "-a".
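Two possibilities, sketched under the assumption that f1 is the formula ~ s - c - a * P and with illustrative values for the variables: evaluate the deriv() result and read its "gradient" attribute to get numeric values, or use D() instead of deriv() if only the symbolic derivative -a is wanted:

    # numeric gradient: evaluate the expression with the data in scope
    s <- 10; c <- 2; a <- 0.5; P <- 1:5     # 'c' is a data variable here, not the function
    val <- eval(df1p)
    attr(val, "gradient")                   # matrix whose "P" column is -a throughout

    # symbolic derivative: D() returns the expression itself
    D(quote(s - c - a * P), "P")            # prints -a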
2009 Oct 29
4
deriv() to take vector of expressions as 1st arg?
The deriv() function takes an 'expression' as its first argument. I was wondering if this function can take an array or a vector of expressions as its first argument. As an aside, I saw how to give a vector as the second argument. I would like to have something like: deriv(c(~x^2+y^3, ~x^5+y^6), c("x","y")) the documentation for this function talks about being able to
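deriv() only accepts a single expression, but its second argument can already be a vector of variable names; for several expressions one can loop, e.g. with lapply. A sketch:

    exprs  <- list(~ x^2 + y^3, ~ x^5 + y^6)
    derivs <- lapply(exprs, deriv, namevec = c("x", "y"))

    x <- 2; y <- 3
    lapply(derivs, function(e) attr(eval(e), "gradient"))   # gradient of each expression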
2020 Oct 08
0
[External] Re: unable to access index for repository...
Hmm. You raised an interesting point. Actually I am not having problems with aod per se; it is just a supporting package I need while using old R. The essential package I need, maxLik, simply works better under R-3.0.3, for reasons I do not understand; specifically, the numerical gradients of the likelihood function are not evaluated as accurately in newer versions of R in my experience, which is why
2008 Mar 27
1
A faster way to compute finite-difference gradient of a scalar function of a large number of variables
Hi All, I would like to compute a simple finite-difference approximation to the gradient of a scalar function of a large number of variables (on the order of 1000). Although a one-time computation using the following function grad() is fast and simple enough, the overhead of repeatedly evaluating the gradient in iterative schemes is quite significant. I was wondering whether there are
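For reference, a plain forward-difference gradient needs n + 1 function evaluations per call (2n for central differences), so with n around 1000 the cost is dominated by the objective function itself. A minimal sketch (not the poster's grad()) that at least reuses the base evaluation f(x):

    fd_grad <- function(f, x, eps = 1e-7) {
      f0 <- f(x)                           # reused for every coordinate
      g  <- numeric(length(x))
      for (i in seq_along(x)) {
        xi    <- x
        h     <- eps * max(1, abs(x[i]))   # step scaled to the size of the coordinate
        xi[i] <- x[i] + h
        g[i]  <- (f(xi) - f0) / h
      }
      g
    }

    # example: gradient of sum(x^2); the exact answer is 2 * x
    x <- rnorm(1000)
    max(abs(fd_grad(function(z) sum(z^2), x) - 2 * x))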