Displaying 20 results from an estimated 2000 matches similar to: "Optimize multiple variable sets"
2010 Nov 03
3
optim works on command-line but not inside a function
Dear all,
I am trying to optimize a logistic function using optim, inside the
following functions:
#Estimating a and b from thetas and outcomes by ML
IRT.estimate.abFromThetaX <- function(t, X, inits, lw = c(-Inf, -Inf),
                                      up = rep(Inf, 2)) {
  optRes <- optim(inits, method = "L-BFGS-B", fn = IRT.llZetaLambdaCorrNan,
                  gr = IRT.gradZL,
                  lower = lw, upper = up, t = t, X = X)
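For reference, a self-contained sketch of the same pattern, with a hypothetical two-parameter logistic loglikelihood standing in for IRT.llZetaLambdaCorrNan (no gradient, for brevity); this does run inside a wrapper function:

# hypothetical 2PL-style negative loglikelihood: a = par[1], b = par[2]
negll <- function(par, t, X) {
  p <- plogis(par[1] * (t - par[2]))
  -sum(X * log(p) + (1 - X) * log(1 - p))
}
estimate.ab <- function(t, X, inits = c(1, 0)) {
  optim(inits, negll, method = "L-BFGS-B",
        lower = c(0.01, -Inf), upper = c(Inf, Inf), t = t, X = X)
}
set.seed(1)
t <- rnorm(100); X <- rbinom(100, 1, plogis(1.5 * (t - 0.3)))
estimate.ab(t, X)$par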
2011 Apr 08
1
Estimates at each iteration
Dear Sir/Madam,
I am trying to maximise a complicated loglikelihood function via the EM
algorithm, which consists of two steps, the E-step and the M-step. In each
iteration I need to maximize the expected loglikelihood obtained from the
E-step, then use those parameter estimates to update the E-step, and repeat
until convergence has been met. So I need to know if I
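A generic, hypothetical example (a two-component normal mixture with unit variances, not the poster's model): the point is the trace matrix, which records the estimates from every EM iteration so they can be inspected afterwards.

em.mix <- function(x, mu = c(-1, 1), pi1 = 0.5, max.iter = 50, tol = 1e-8) {
  trace <- matrix(NA, max.iter, 3,
                  dimnames = list(NULL, c("pi1", "mu1", "mu2")))
  for (i in 1:max.iter) {
    # E-step: responsibilities of component 1
    d1 <- pi1 * dnorm(x, mu[1]); d2 <- (1 - pi1) * dnorm(x, mu[2])
    g <- d1 / (d1 + d2)
    # M-step: maximize the expected loglikelihood
    new <- c(mean(g), sum(g * x) / sum(g), sum((1 - g) * x) / sum(1 - g))
    trace[i, ] <- new
    if (max(abs(new - c(pi1, mu))) < tol) break
    pi1 <- new[1]; mu <- new[2:3]
  }
  list(estimate = new, trace = trace[1:i, , drop = FALSE])
}
x <- c(rnorm(100, -2), rnorm(100, 2))
em.mix(x)$trace   # one row of estimates per iteration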
2010 Dec 22
2
Fitting a Triangular Distribution to Bivariate Data
Hello,
I have some xy data which clearly shows a non-monotonic, peaked
triangular trend. You can get an idea of what it looks like with:
x<-1:20
y<-c(2*x[1:10]+1,-2*x[11:20]+42)
I've tried fitting a quadratic, but it just doesn't capture the data
structure with the break point adequately. Is there any way to fit a
triangular or 'tent' function to my data in R?
Some sample code
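One possibility, as a minimal sketch: treat the break point bp as a parameter in nls() (the 'segmented' package is a more robust alternative). Noise is added because nls() fails on an exact zero-residual fit:

x <- 1:20
y <- c(2 * x[1:10] + 1, -2 * x[11:20] + 42) + rnorm(20, sd = 0.5)
# slope b1 before the break, slope b2 after it, break point bp
fit <- nls(y ~ a + b1 * pmin(x, bp) + b2 * pmax(x - bp, 0),
           start = list(a = 1, b1 = 2, b2 = -2, bp = 10))
coef(fit)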
2005 Mar 28
2
Generating list of vector coordinates
Hi.
Can anyone suggest a simple way to obtain in R a list of vector
coordinates of the following form? The code below is Mathematica.
In[5]:=
Flatten[Table[{i,j,k},{i,3},{j,4},{k,5}], 2]
Out[5]=
{{1,1,1},{1,1,2},{1,1,3},{1,1,4},{1,1,5},{1,2,1},{1,2,2},{1,2,3},{1,2,4},{1,2,5},{1,3,1},{1,3,2},{1,3,3},{1,3,4},{1,3,5},{1,4,1},{1,4,2},{1,4,3},{1,4,
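In R, expand.grid() produces the same coordinates; it varies its first argument fastest, so list k first and reverse the columns to match Mathematica's ordering:

# rows come out as (1,1,1), (1,1,2), (1,1,3), ...
coords <- as.matrix(expand.grid(k = 1:5, j = 1:4, i = 1:3))[, 3:1]
head(coords)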
2010 Dec 07
1
Using nlminb for maximum likelihood estimation
I'm trying to estimate the parameters of a GARCH(1,1) process.
Here's my code:
loglikelihood <- function(theta) {
  h <- (r[1] - theta[1])^2
  p <- 0
  for (t in 2:length(r)) {
    h <- c(h, theta[2] + theta[3] * (r[t-1] - theta[1])^2 + theta[4] * h[t-1])
    p <- c(p, dnorm(r[t], theta[1], sqrt(h[t]), log = TRUE))
  }
  -sum(p)
}
Then I use nlminb to minimize the function loglikelihood:
nlminb(
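A hedged completion of that call (start values are guesses; the loglikelihood above reads r from the global environment, as in the poster's code): box constraints keep the variance recursion positive.

r <- rnorm(500, sd = 0.01)   # placeholder return series
fit <- nlminb(start = c(mean(r), var(r) * 0.05, 0.05, 0.9),
              objective = loglikelihood,
              lower = c(-Inf, 1e-10, 0, 0), upper = c(Inf, Inf, 1, 1))
fit$par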
2009 Sep 04
2
transforming a badly organized data base into a list of data frames
Dear R-ers!
I have a badly organized data base in Excel. Once I read it into R it
looks like this (all variables become factors because of many spaces
and other characters in Excel):
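The data itself isn't shown, so as a generic cleanup sketch for that situation: re-convert factor columns that are really numeric, treating empty and blank cells as NA.

clean <- function(df) {
  df[] <- lapply(df, function(col) {
    if (is.factor(col)) {
      col <- trimws(as.character(col))   # factor -> character, strip spaces
      col[col == ""] <- NA
      type.convert(col, as.is = TRUE)    # back to numeric where possible
    } else col
  })
  df
}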
2011 May 12
1
Maximization of a loglikelihood function with double sums
Dear R experts,
Attached you can find the expression of a loglikelihood function which I
would like to maximize in R.
So far, I have done maximization with the combined use of the
mathematical programming language AMPL (www.ampl.com) and the solver
SNOPT (http://www.sbsi-sol-optimize.com/manuals/SNOPT%20Manual.pdf).
With these tools, maximization is carried out in a few seconds. I wonder
if that
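The attached expression isn't reproduced in the archive, so purely as an illustration of the usual R approach: write the double sum with outer() rather than nested loops, since optim() will evaluate the objective many times (the exponential kernel here is hypothetical).

negll <- function(theta, x, y) {
  # double sum over i and j, vectorized via outer()
  -sum(log(outer(x, y, function(xi, yj) dexp(abs(xi - yj), rate = theta))))
}
optim(1, negll, x = runif(50), y = runif(40),
      method = "Brent", lower = 0.01, upper = 100)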
2011 Jun 08
2
Results of CFA with Lavaan
I've just found the lavaan package, and I really appreciate it, as it
seems to succeed with models that were failing in sem::sem. I need
some clarification about the output, however, and I was hoping the list
could help me.
I'll go with the standard example from the help documentation, as my
problem is much larger but no more complicated than that.
My question is, why is there one latent
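For reference, the standard CFA example from the lavaan help pages:

library(lavaan)
model <- ' visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9 '
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)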
2012 Nov 30
2
NA return to NLM routine
Hello,
I am trying to understand a small quirk I came across in R. The
following code results in an error:
k <- c(2, 1, 1, 5, 5)
f <- c(1, 1, 1, 3, 2)
loglikelihood <- function(theta, k, f) {
  if (theta < 1 && theta > 0)
    return(-sum(log(choose(k, f)) + f * log(theta) + (k - f) * log(1 - theta)))
  return(NA)
}
nlm(loglikelihood, 0.5, k, f)
Running this code results in:
Error
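One common fix: return a large finite penalty instead of NA, so nlm's finite-difference gradient never receives a missing function value.

loglikelihood2 <- function(theta, k, f) {
  if (theta <= 0 || theta >= 1) return(1e10)   # finite penalty, not NA
  -sum(log(choose(k, f)) + f * log(theta) + (k - f) * log(1 - theta))
}
nlm(loglikelihood2, 0.5, k = k, f = f)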
2007 May 24
3
Problem with numerical integration and optimization with BFGS
Hi R users,
I have a couple of questions about some problems that I am facing with
regard to numerical integration and optimization of likelihood
functions. Let me provide a little background information: I am trying
to do maximum likelihood estimation of an econometric model that I have
developed recently. I estimate the parameters of the model using the
monthly US unemployment rate series
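A hedged illustration of the general pattern (not the poster's model): each observation's density requires a call to integrate(), and the result feeds BFGS; log-parameterizing the scale keeps the unconstrained search valid.

negll <- function(par, y) {
  -sum(sapply(y, function(yi)
    log(integrate(function(u) dnorm(yi - u) * dnorm(u, sd = exp(par)),
                  lower = -Inf, upper = Inf)$value)))
}
set.seed(1)
optim(0, negll, y = rnorm(30, sd = 1.5), method = "BFGS")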
2011 Apr 15
3
GLM output for deviance and loglikelihood
It has always been my understanding that deviance for GLMs is defined by
D = -2*(loglikelihood(model) - loglikelihood(saturated model))
and that in practice it is usually calculated as
D = -2*loglikelihood(model)
as is done in the code for 'polr' by Brian Ripley (in the package
'MASS'), where the -loglikelihood is minimised using optim;
res <-
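A quick check of the distinction: for a Poisson glm, deviance() and -2*logLik() differ by the saturated-model term.

set.seed(1)
x <- rnorm(30); y <- rpois(30, exp(0.4 + 0.3 * x))
fit <- glm(y ~ x, family = poisson)
deviance(fit)                  # -2*(logLik(model) - logLik(saturated))
-2 * as.numeric(logLik(fit))   # larger, by the saturated-model constant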
2008 Apr 18
2
rzinb (VGAM) and dnbinom in optim
Dear R-help gurus (and T.Yee, the VGAM maintainer) -
I've been banging my head against the keyboard for too long now; hopefully someone can pick up on the errors of my ways...
I am trying to use optim to fit a zero-inflated negative binomial distribution. No matter what I try I can't get optim to recognize my initial parameters. I think the problem is that dnbinom allows either
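A hedged sketch of the usual resolution: dnbinom() has both (size, prob) and (size, mu) parameterizations, so mu must be passed by name, and log/logit transforms keep optim's unconstrained search inside the valid parameter space.

negll <- function(par, y) {
  p0 <- plogis(par[1]); mu <- exp(par[2]); size <- exp(par[3])
  ll <- ifelse(y == 0,
               log(p0 + (1 - p0) * dnbinom(0, size = size, mu = mu)),
               log(1 - p0) + dnbinom(y, size = size, mu = mu, log = TRUE))
  -sum(ll)
}
y <- c(rep(0, 40), rpois(60, 3))   # placeholder zero-inflated counts
optim(c(0, log(mean(y[y > 0])), 0), negll, y = y)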
2001 Aug 01
1
glm() with non-integer responses
A question about the inner workings of glm() and dpois():
Suppose I call
glm(y ~ x, family=poisson, weights = w)
where y contains NON-INTEGER (but still nonnegative) values.
(a) Does glm() still correctly maximise
the weighted Poisson loglikelihood?
(i.e. the function given by the same formal expression as the
weighted loglikelihood of independent Poisson variables Y_i
except that the
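A small demonstration: with non-integer y, family = poisson emits warnings (from dpois()), while family = quasipoisson solves the same score equations, and therefore gives the same coefficients, without them.

y <- c(0.4, 1.7, 2.3, 3.9, 5.2); x <- 1:5; w <- c(1, 2, 1, 2, 1)
coef(glm(y ~ x, family = poisson,      weights = w))  # warns
coef(glm(y ~ x, family = quasipoisson, weights = w))  # identical estimates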
2006 Jan 09
1
trouble with extraction/interpretation of variance structure parameters from a model built using gnls and varConstPower
I have been using gnls with the weights argument (and varConstPower) to
specify a variance structure for curve fits. In attempting to extract the
parameters for the variance model I am seeing results I don't understand.
When I simply display the model (or use "summary" on the model), I get what
seem like reasonable values for both "power" and "const". When I
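One way to pull the variance-function parameters out of such a fit (assuming the fitted model is called fit and was built with weights = varConstPower()): unconstrained = FALSE returns them on the natural scale, matching what summary() prints.

vp <- coef(fit$modelStruct$varStruct, unconstrained = FALSE)
vp["const"]; vp["power"]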
2003 May 25
2
assign() won't work
Hey everyone, I've been searching the mailing lists and I can't find a real discussion of my problem. Here it is:
I have created a loop that fits various time series models to my data. I labelled each of the outputs using assign() and paste(), i.e. assign(paste("group","subgroup",i), arima(...)). That works great, but here's what I need...
I want to
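The usual alternative to assign()/paste() is to keep the fits in a named list, which is easy to index later (a sketch with simulated series standing in for the poster's data):

series.list <- replicate(3, arima.sim(list(ar = 0.5), 100), simplify = FALSE)
fits <- list()
for (i in seq_along(series.list)) {
  fits[[paste("group", "subgroup", i, sep = ".")]] <-
    arima(series.list[[i]], order = c(1, 0, 0))
}
fits[["group.subgroup.2"]]   # retrieve one fit by name
sapply(fits, AIC)            # or operate on all of them at once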
2007 Mar 15
1
expm() within the Matrix package
Hi
Could anybody give me a bit of advice on some code I'm having trouble with?
I've been trying to calculate the loglikelihood of a function iterated over
a set of survival times, and I seem to be running into difficulty when I use
the function expm(). Here's an example of what I am trying to do:
y<-c(5,10) #vector of 2 survival times
p<-Matrix(c(1,0),1,2) #1x2 matrix
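A hedged sketch of the pattern (the generator matrix Q here is hypothetical): Matrix::expm() gives P(t) = exp(Qt), and the loglikelihood sums over the survival times in y.

library(Matrix)
Q <- Matrix(c(-0.3, 0.3,
               0.0, 0.0), 2, 2, byrow = TRUE)   # state 2 absorbing
p <- Matrix(c(1, 0), 1, 2)                      # start in state 1
y <- c(5, 10)
# probability of having reached state 2 by each time t
ll <- sum(log(sapply(y, function(t) (p %*% expm(Q * t))[1, 2])))
ll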
2010 Jul 08
2
Using nlm or optim
Hello,
I am trying to use nlm to estimate the parameters that minimize the
following function:
Predict <- function(M, c, z) {
  v <- c * M^z
  return(v)
}
M is a variable and c and z are parameters to be estimated.
I then write the negative loglikelihood function assuming normal errors:
nll <- function(M, V, c, z, s) {
  n <- length(Mean)
  logl <- -.5 * n * log(2 * pi) - .5 * n * log(s) -
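A hedged completion (assumptions: the stray `Mean` in the snippet should be the argument M, and s is the error variance): optim() and nlm() expect a single parameter vector as the first argument, which is the usual stumbling block with a signature like nll(M, V, c, z, s).

nll2 <- function(par, M, V) {
  c. <- par[1]; z <- par[2]; s <- exp(par[3])   # log-variance keeps s > 0
  -sum(dnorm(V, mean = c. * M^z, sd = sqrt(s), log = TRUE))
}
set.seed(1)
M <- runif(50, 1, 10)                  # placeholder data
V <- 2 * M^0.75 + rnorm(50, sd = 0.3)
optim(c(1, 1, 0), nll2, M = M, V = V)$par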
2008 Sep 09
1
Genmod in SAS vs. glm in R
Hello,
I get different results from these two programs for a simple binomial GLM
problem.
From Genmod in SAS: LogLikelihood = -4.75, coeff(intercept) = -3.59,
coeff(x) = 0.95
From glm in R: LogLikelihood = -0.94, coeff(intercept) = -3.99, coeff(x) = 1.36
Can anyone tell me what I did wrong?
Here are the code and results,
1) SAS Genmod:
% r: # of failure
% k: size of a risk set
data
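A hedged note on the R side: for grouped binomial data the glm() response is cbind(successes, failures); reversing the columns, or modelling failures where SAS models events, changes the coefficients, and the two programs also drop different constants from the reported loglikelihood.

k <- c(10, 12, 15, 9); r <- c(1, 3, 6, 5); x <- 1:4   # placeholder data
fit <- glm(cbind(k - r, r) ~ x, family = binomial)    # successes, failures
coef(fit); logLik(fit)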
2008 Mar 19
1
betabinomial model
Hi,
can anyone help me fit a beta-binomial model to the following dataset, where
each ID is a cluster in itself? If I use package aod's betabin() function, it
gives an estimate of zero for phi (the correlation coefficient), and if I fix
it to the ANOVA-type estimate obtained from icc() (in package aod), then it
says the system is exactly singular. And when I try to fit my loglikelihood by
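A hedged aod::betabin() sketch (d is a placeholder data frame): with one observation per cluster the overdispersion parameter phi is weakly identified, which is consistent with the degenerate estimates described above.

library(aod)
d <- data.frame(y = c(1, 3, 2, 5, 0), n = c(4, 6, 5, 8, 3))
fit <- betabin(cbind(y, n - y) ~ 1, ~ 1, data = d)   # each row its own cluster
summary(fit)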
2010 Jul 20
1
survreg $loglik
Dear R-experts:
I am using survreg() to estimate the parameters of a Weibull density having
right-censored observations. Some observations are weighted. To do that I
regress the weighted observations against a column of ones.
When I enter the data as 37 weighted observations, the parameter estimates
are exactly the same as when I enter the data as the corresponding 70
unweighted observations.
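A hedged sketch of that comparison (placeholder data): an intercept-only Weibull survreg() with integer case weights against the same data with rows expanded; the coefficients should agree, and the two $loglik values can then be compared directly.

library(survival)
d <- data.frame(time = c(5, 8, 8, 12), status = c(1, 1, 0, 1),
                w = c(2, 1, 3, 1))
fit.w <- survreg(Surv(time, status) ~ 1, data = d, weights = w,
                 dist = "weibull")
d.x   <- d[rep(seq_len(nrow(d)), d$w), ]        # one row per weighted case
fit.x <- survreg(Surv(time, status) ~ 1, data = d.x, dist = "weibull")
fit.w$loglik; fit.x$loglik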