Displaying 20 results from an estimated 4000 matches similar to: "maximum likelihood estimation of 5 parameters"
2007 Jan 11
1
maximum likelihood, 1st and 2nd derivative
Hi guys again, it seems I haven't been doing the maximum likelihood
estimation correctly. I quote below; can someone please explain what it
means that the 2nd and 3rd derivatives of the function equal zero, and how
to compute that in R.
"We have our initial estimated, subjective parameters for the gamma mixture
and we have our likelihood that is the mixture of negative
2005 May 20
1
using src/Makevars file
Hi all,
Thanks to all who offered advice on using F95 in R.
Now I'm trying to compile a test package using gfortran, Linux 2.4.21 and
R 2.1.0.
I was able to successfully compile and use a test F95 routine by setting my
environment variables as follows in bash:
export PATH=~/bin/:$PATH
export F77=gfortran
export LD_LIBRARY_PATH=~/bin/irun/lib
export GFORTRAN_STDIN_UNIT=-1
Now I'm
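For reference, a minimal src/Makevars for a package with Fortran 95 sources might look like the sketch below; the flags are illustrative, with PKG_FCFLAGS applying to free-form Fortran compiled with FC and PKG_FFLAGS to fixed-form code compiled with F77.

# src/Makevars (illustrative flags only)
PKG_FCFLAGS = -O2 -Wall
PKG_FFLAGS  = -O2 -Wall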
2008 Jan 07
7
Can R solve this optimization problem?
Dear All,
I am trying to solve the following maximization problem with R:
find x(t) (continuous) that maximizes the
integral of x(t) with t from 0 to 1,
subject to the constraints
dx/dt = u,
|u| <= 1,
x(0) = x(1) = 0.
The analytical solution can be obtained easily, but I am trying to
understand whether R is able to solve numerically problems like this
one. I have tried to find an
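One way to check this numerically in R is to discretize the problem and solve it as a linear program. The sketch below assumes the lpSolve package and an illustrative grid of 100 intervals; the analytic optimum x(t) = min(t, 1 - t) gives an integral of 1/4, which the LP reproduces.

library(lpSolve)                     # assumed available

N  <- 100                            # grid intervals on [0, 1]
dt <- 1 / N
n  <- N - 1                          # free nodes x_1, ..., x_{N-1}; x_0 = x_N = 0

obj <- rep(dt, n)                    # approximate integral of x(t)

# |x_i - x_{i-1}| <= dt for each interval, written as two inequalities;
# lp() also keeps every x_i >= 0, which is not binding at the optimum
A <- matrix(0, nrow = 2 * N, ncol = n)
for (i in 1:N) {
  if (i >= 2) { A[2*i - 1, i - 1] <- -1; A[2*i, i - 1] <-  1 }
  if (i <= n) { A[2*i - 1, i]     <-  1; A[2*i, i]     <- -1 }
}
sol <- lp("max", obj, A, rep("<=", 2 * N), rep(dt, 2 * N))
sol$objval                           # approaches 1/4
x <- c(0, sol$solution, 0)           # the discretized optimal trajectory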
2010 Dec 07
1
Using nlminb for maximum likelihood estimation
I'm trying to estimate the parameters for GARCH(1,1) process.
Here's my code:
loglikelihood <- function(theta) {
  # theta = (mu, omega, alpha, beta); the return series r is taken from the
  # calling environment, as in the original post
  h <- (r[1] - theta[1])^2     # initial conditional variance
  p <- 0                       # first observation contributes 0 to the sum
  for (t in 2:length(r)) {
    h <- c(h, theta[2] + theta[3] * (r[t - 1] - theta[1])^2 + theta[4] * h[t - 1])
    p <- c(p, dnorm(r[t], theta[1], sqrt(h[t]), log = TRUE))
  }
  -sum(p)                      # negative log-likelihood, to be minimised
}
Then I use nlminb to minimize the function loglikelihood:
nlminb(
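The call that is cut off above might look like the following sketch; the starting values and box constraints are illustrative, not taken from the post.

fit <- nlminb(start = c(mean(r), 0.01, 0.1, 0.8),   # mu, omega, alpha, beta
              objective = loglikelihood,
              lower = c(-Inf, 1e-8, 0, 0),
              upper = c( Inf,  Inf, 1, 1))
fit$par          # parameter estimates
fit$objective    # negative log-likelihood at the optimum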
2008 Jun 16
1
Error in maximum likelihood estimation.
Dear UseRs,
I wrote the following function to use MLE.
---------------------------------------------
mlog <- function(theta, nx = 1, nz = 1, dt){
  beta   <- matrix(theta[1:(nx+1)], ncol = 1)          # intercept plus x coefficients
  delta  <- matrix(theta[(nx+2):(nx+nz+1)], ncol = 1)  # z coefficients
  sigma2 <- theta[nx+nz+2]                             # variance parameter
  gamma  <- theta[nx+nz+3]
  y <- as.matrix(dt[, 1], ncol = 1)                    # response: first column of dt
  x <- as.matrix(data.frame(1,
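The error message itself is cut off above, but a frequent cause in hand-rolled likelihoods of this form is the optimizer stepping into sigma2 <= 0. A hypothetical, simplified sketch of the usual fix (optimizing log(sigma2) so the variance stays positive; the delta and gamma terms are omitted here) is:

negll <- function(theta, y, X) {
  beta   <- theta[1:ncol(X)]
  sigma2 <- exp(theta[ncol(X) + 1])        # always strictly positive
  -sum(dnorm(y, X %*% beta, sqrt(sigma2), log = TRUE))
}
# optim(start, negll, y = y, X = X, method = "BFGS"), with y and X built as in mlog()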
2007 Apr 09
1
R:Maximum likelihood estimation using BHHH and BFGS
Dear R users,
I am new to R. I would like to find maximum likelihood estimators for psi
and alpha based on the following log-likelihood function; c is consumption
data comprising 148 entries:
fn<-function(c,psi,alpha)
{
s1<-sum(for(i in 1:n){(c[i]-(psi^(-1/alpha)*(lag(c[i],-1))))^2*
(lag(c[i],-1)^((-2)*(alpha+1))
)});
s2<- sum(for(m in 1:n){log(lag(c[m],-1)^(((2)*alpha)+2))});
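As written, the two sums above evaluate to zero, because a for() loop returns NULL inside sum(). A hedged sketch of the same two sums in vectorized form (the assembly of the full log-likelihood is cut off above, so only the pieces are shown; the argument is renamed cons to avoid masking base::c):

fn <- function(par, cons) {
  psi   <- par[1]
  alpha <- par[2]
  c0 <- cons[-1]                 # c_i
  c1 <- cons[-length(cons)]      # the lagged series c_{i-1}
  s1 <- sum((c0 - psi^(-1/alpha) * c1)^2 * c1^(-2 * (alpha + 1)))
  s2 <- sum(log(c1^(2 * alpha + 2)))
  c(s1 = s1, s2 = s2)            # combine into the log-likelihood as in the post
}
# the assembled log-likelihood can then be maximized with
# optim(..., method = "BFGS"); the maxLik package also offers a BHHH maximizer.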
2008 Aug 12
2
Maximum likelihood estimation
Hello,
I have been struggling for some time now to estimate an AR(1) process for a commodity price time series. I did it in STATA but cannot get a result in R.
The equation I want to estimate is: p(t)=a+b*p(t-1)+error
Using STATA I get 0.92 for a, and 0.73 for b.
Code that I use in R is:
p<-matrix(data$p) # price at time t
lp<-cbind(1,data$lp) # price at time t-1
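A minimal sketch of the AR(1) fit, assuming data$p holds the price series: the lagged regression below gives a and b directly; note that arima() reports the series mean rather than the intercept a, so the two parameterizations have to be converted before comparing with the Stata output.

p   <- data$p
ols <- lm(p[-1] ~ p[-length(p)])          # p_t = a + b * p_{t-1} + e_t
coef(ols)                                 # a and b

ar1 <- arima(p, order = c(1, 0, 0))       # same model via ML
b   <- coef(ar1)["ar1"]
a   <- coef(ar1)["intercept"] * (1 - b)   # "intercept" here is the mean of p
c(a, b)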
2007 Oct 29
1
How to test combined effects?
Suppose I have a mixed-effects model where yij is the jth sample for
the ith subject:
yij = beta0 + beta1*age + beta2*age^2 + beta3*age^3 + beta4*IQ +
beta5*IQ^2 + beta6*age*IQ + beta7*age^2*IQ + beta8*age^3*IQ
+ random intercept_i + e_ij
In R how can I get an F test against the null hypothesis of
beta6=beta7=beta8=0? In SAS I can run something like contrast age*IQ
1,
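In R, one way to test beta6 = beta7 = beta8 = 0 is to fit the model with and without the three interaction terms by maximum likelihood and compare them. A sketch using nlme, assuming a data frame dat with columns y, age, IQ and subject (this gives a likelihood-ratio test; anova() on the single full fit additionally reports term-by-term F tests):

library(nlme)
full <- lme(y ~ age + I(age^2) + I(age^3) + IQ + I(IQ^2) +
                age:IQ + I(age^2):IQ + I(age^3):IQ,
            random = ~ 1 | subject, data = dat, method = "ML")
reduced <- lme(y ~ age + I(age^2) + I(age^3) + IQ + I(IQ^2),
               random = ~ 1 | subject, data = dat, method = "ML")
anova(full, reduced)    # tests the three interaction terms jointly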
2009 May 08
1
creation of a matrix
Hi all,
I have a relatively large amount (several thousand rows, but a small
number of unique objects) of data in a format like this:
1 text_string
1 text_string
1 text_string
2 text_string
2 text_string
3 text_string
3 text_string
3 text_string
3 text_string
3 text_string
.
.
.
n text_string
I want to create an n x p matrix, n objects (=40) and p unique text
strings. Nij is number of occurrences
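Assuming the two columns have been read into a data frame, table() builds the n x p count matrix directly; the file name and column names below are illustrative.

d <- read.table("objects.txt", col.names = c("id", "txt"),
                stringsAsFactors = FALSE)
N <- table(d$id, d$txt)        # N[i, j] = occurrences of string j for object i
N <- unclass(N)                # plain n x p matrix of counts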
2008 Jun 18
1
Maximum Likelihood Estimation
Using R, I would like to estimate the coefficients α and β in the gamma function f(cost_ij) = (cost_ij^α)*exp(β*cost_ij). I have its logarithmic diminishing line data (Logarithmic Diminishing Line Data Table) and have installed R's maximum likelihood estimation package; however, I am unsure which method to apply in order to calculate the estimates (i.e., Newton-Raphson
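Since the error model is not specified, a minimal sketch (with illustrative vectors cost and f.obs standing in for the data table) is to note that log f = α*log(cost) + β*cost is linear in the two coefficients, so a no-intercept least-squares fit recovers them and can serve as the starting point for a Newton-type ML fit.

set.seed(1)
cost  <- c(1, 2, 4, 8, 16, 32)                        # illustrative data
f.obs <- cost^0.5 * exp(-0.1 * cost) * exp(rnorm(6, sd = 0.05))

fit <- lm(log(f.obs) ~ 0 + log(cost) + cost)          # no intercept
coef(fit)                                             # estimates of alpha and beta
# these can then be refined with a Newton-type optimizer such as
# optim(coef(fit), negloglik, method = "BFGS") once a likelihood is chosen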
2010 Dec 09
1
survival: ridge log-likelihood workaround
Dear all,
I need to calculate a likelihood ratio test for ridge regression. In February I reported a bug where coxph returns the unpenalized log-likelihood for the final beta estimates of a ridge coxph regression. In high-dimensional settings ridge regression models usually fail for lower values of lambda. As a result, in such settings the ridge regressions have higher values of lambda (e.g.
2003 Jan 17
2
Negative Binomial modelling
I have some data which I am trying to fit with a negative binomial
distribution. I have found the glm.nb function from MASS.
I have reason to believe that the mean parameter mu depends on
certain factors, and that the shape parameter theta depends on
others.
If, say, the factors are P and Q, it might be that
mu ~ P:Q and theta ~ P
(where mu ~ P:Q means that mu is a function of the pair (P,Q))
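glm.nb() lets mu depend on the factors through its formula but estimates a single theta. A minimal sketch (assuming a data frame d with y, P and Q), plus one crude way to see whether theta varies with P by refitting within each level:

library(MASS)
fit <- glm.nb(y ~ P:Q, data = d)     # mu depends on the (P, Q) combination
fit$theta                            # the single, common shape estimate

# crude check of whether theta differs across levels of P:
sapply(split(d, d$P), function(sub) glm.nb(y ~ Q, data = sub)$theta)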
2011 Sep 19
2
Poisson-Gamma computation (parameters and likelihood)
Good afternoon/morning readers. This is the first time I am trying to run
some Bayesian computation in R, and am experiencing a few problems.
I am working on a Poisson model for cancer rates which has a conjugate Gamma
prior.
1) The first question is precisely how I work out the parameters.
# Suppose I assign values to theta with seq()
theta <- seq(0, 1, len = 500)
# Then I try out the
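For the conjugate update itself no grid is strictly needed: with a Gamma(a, b) prior (shape a, rate b), counts y and exposures n, the posterior for the rate is Gamma(a + sum(y), b + sum(n)). A small sketch with illustrative numbers, not taken from the post:

a <- 2;  b <- 1000                          # illustrative prior shape and rate
y <- c(3, 1, 4);  n <- c(1200, 800, 1500)   # made-up counts and person-years

a.post <- a + sum(y)
b.post <- b + sum(n)
a.post / b.post                             # posterior mean of the rate

theta <- seq(0, 0.01, len = 500)            # a grid is only needed for plotting
plot(theta, dgamma(theta, shape = a.post, rate = b.post), type = "l")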
2005 Nov 15
1
An optim() mystery.
I have a Master's student working on a project which involves
estimating parameters of a certain model via maximum likelihood,
with the maximization being done via optim().
A phenomenon has occurred which I am at a loss to explain.
If we use certain pairs of starting values for optim(), it
simply returns those values as the ``optimal'' values, although
they are definitely not
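Without the model it is hard to say more, but the checks below (run on a deliberately badly scaled toy function, since the real objective is not shown) are usually the first step when optim() hands back the starting values: inspect the convergence code and evaluation counts, and supply control$parscale when the parameters live on very different scales.

f <- function(p) (p[1] - 1)^2 + 1e8 * (p[2] - 2)^2    # toy, badly scaled
start <- c(0, 0)

out1 <- optim(start, f)                               # default settings
out2 <- optim(start, f, control = list(parscale = c(1, 1e-4), maxit = 5000))

c(out1$convergence, out2$convergence)   # 0 means the optimizer reports success
out1$counts                             # very small counts suggest a premature stop
rbind(out1$par, out2$par)               # compare how close each run gets to (1, 2)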
2007 Jun 15
0
Question with nlm
Hi,
I would really appreciate it if I could get some help here. I'm using nlm to minimize my negative log-likelihood function. What I did is as follows:
My log likelihood function (it returns negative log likelihood) with 'gradient' attribute defined inside as follows:
# ==========Method definition======================
logLikFunc3 <- function(sigma, object, totalTime) {
y <-
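The general pattern (shown here on a self-contained toy normal likelihood, not on logLikFunc3, whose body is cut off above) is to return the negative log-likelihood with the analytic gradient attached as the "gradient" attribute, which nlm() then uses:

set.seed(1)
y <- rnorm(200, mean = 3, sd = 2)

negll <- function(par) {
  mu <- par[1]; sigma <- exp(par[2])     # optimize log(sigma) to keep it positive
  r  <- y - mu
  val <- -sum(dnorm(y, mu, sigma, log = TRUE))
  attr(val, "gradient") <- c(-sum(r) / sigma^2,               # d(-logL)/d mu
                             length(y) - sum(r^2) / sigma^2)  # d(-logL)/d log(sigma)
  val
}
nlm(negll, p = c(0, 0))$estimate         # approximately c(3, log(2))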
2011 May 12
2
DCC-GARCH model and AR(1)-GARCH(1,1) regression model
Hello,
I have a rather complex problem... I will have to explain everything in
detail because I cannot solve it by myself... I just ran out of ideas. So
here is what I want to do:
I take quotes of two indices - S&P500 and DJ. And my first aim is to
estimate coefficients of the DCC-GARCH model for them. This is how I do it:
library(tseries)
p1 = get.hist.quote(instrument =
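A sketch of the first stage only (the instrument symbols and dates are illustrative, and the DCC stage itself needs a dedicated package such as rmgarch, which is not shown here): download the two series with tseries, convert to log returns, and fit univariate GARCH(1,1) models.

library(tseries)
p1 <- get.hist.quote(instrument = "^gspc", start = "2005-01-01",
                     end = "2010-12-31", quote = "Close")
p2 <- get.hist.quote(instrument = "^dji",  start = "2005-01-01",
                     end = "2010-12-31", quote = "Close")

r1 <- diff(log(as.numeric(p1)))          # daily log returns
r2 <- diff(log(as.numeric(p2)))

g1 <- garch(r1, order = c(1, 1))         # omega, alpha1, beta1 for each index
g2 <- garch(r2, order = c(1, 1))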
2007 Dec 12
0
IRT Likelihood problem
I have the following item response theory (IRT) likelihood that I want
to maximize w.r.t. theta (student ability).
L(\theta) = \prod(p(x))
Where p(x) is the 3-parameter logistic model when items are scored
dichotomously (x_{ij} = 0 or 1) and p(x) is Muraki's generalized partial
credit model when items are scored polytomously (x_{ij} = 0 \ldots J).
Now, I wrote the following two functions
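For the dichotomous items, a self-contained sketch of maximizing L(theta) for a single examinee under the 3PL model (item parameters and responses below are illustrative; the generalized partial credit part is not shown):

p3pl <- function(theta, a, b, g) g + (1 - g) / (1 + exp(-a * (theta - b)))

a <- c(1.2, 0.8, 1.5)        # discrimination
b <- c(-0.5, 0.0, 1.0)       # difficulty
g <- c(0.20, 0.25, 0.20)     # guessing
x <- c(1, 1, 0)              # one examinee's scored responses

loglik <- function(theta) {
  p <- p3pl(theta, a, b, g)
  sum(x * log(p) + (1 - x) * log(1 - p))
}
optimize(loglik, interval = c(-4, 4), maximum = TRUE)$maximum   # theta-hat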
2011 Mar 28
1
maximum likelihood accuracy - comparison with Stata
Hi everyone,
I am looking to do some manual maximum likelihood estimation in R. I
have done a lot of work in Stata and so I have been using output
comparisons to get a handle on what is happening.
I estimated a simple linear model in R with lm() and also my own
maximum likelihood program. I then compared the output with Stata.
Two things jumped out at me.
Firstly, in Stata my coefficient
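A self-contained sketch of the kind of comparison described above, OLS via lm() against a hand-rolled normal likelihood via optim(): the point estimates should agree closely, while small differences in standard errors usually come from the n versus n - k divisor for sigma^2 and from Hessian-based versus exact variance formulas.

set.seed(1)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)

ols <- lm(y ~ x)

negll <- function(par) {                 # par = (b0, b1, log sigma)
  -sum(dnorm(y, par[1] + par[2] * x, exp(par[3]), log = TRUE))
}
mle <- optim(c(0, 0, 0), negll, method = "BFGS", hessian = TRUE)

rbind(coef(ols), mle$par[1:2])           # point estimates
sqrt(diag(solve(mle$hessian)))[1:2]      # ML standard errors (Hessian based)
sqrt(diag(vcov(ols)))                    # OLS standard errors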
2009 Jul 19
1
trouble using optim for maximisation of a 2-parameter function
Hello, I am having trouble using "optim".
I want to maximise a function with respect to its parameters [kind of like:
univariate maximum likelihood estimation, but I wrote the likelihood function
myself because of data issues]
When I try to optimize a function for only one parameter there is no
problem:
llik.expo<-function(x,lam){(length(x)*log(lam))-(length(x)*log(1-exp(-1*lam*
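The two-parameter likelihood is cut off above, so the sketch below uses a stand-in Weibull likelihood to show the general pattern: optim() minimizes by default, so either return the negative log-likelihood or set control = list(fnscale = -1), and extra data arguments are passed through the ... argument.

set.seed(1)
x <- rweibull(500, shape = 1.5, scale = 2)   # stand-in data

llik <- function(par, x) {
  # parameters on the log scale so shape and scale stay positive
  sum(dweibull(x, shape = exp(par[1]), scale = exp(par[2]), log = TRUE))
}
fit <- optim(c(0, 0), llik, x = x, control = list(fnscale = -1))
exp(fit$par)                                 # roughly c(1.5, 2)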
2012 Sep 07
2
metafor package: study level variation
Hello. A quick question about incorporating variation due to study in the metafor package. I'm working with a particular data set for meta-analysis where some studies have multiple measurements. Others do not. So, let's say the effect I'm looking at is response to two different kinds of drug treatment - let's call their effect sizes T1 and T2. Some studies have multiple
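A minimal sketch with rma.mv(), assuming a data frame dat with effect sizes yi, sampling variances vi, a study identifier, a within-study effect-size identifier es.id, and a moderator coding the treatment type (all of these column names are assumptions); the nested random effect keeps multiple estimates from the same study from being treated as independent.

library(metafor)
fit <- rma.mv(yi, vi, mods = ~ treatment,
              random = ~ 1 | study / es.id, data = dat)
summary(fit)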