similar to: help on glmmML

Displaying 20 results from an estimated 1000 matches similar to: "help on glmmML"

2007 Aug 12
0
question on glmmML compared to NLMIXED
Hello! Can anyone help me? I am using the posterior.mode from the result of glmmML. It appears to be different from the BLUE estimate of the RANDOM statement in PROC NLMIXED in SAS. Why is that? Thank you, Ronen
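A minimal sketch of pulling these quantities out of a glmmML fit, using a placeholder data frame d with response y, covariate x and grouping factor id; the component name posterior.modes is taken from recent glmmML versions and may differ in older ones, and differences from NLMIXED's empirical Bayes estimates may simply reflect the different likelihood approximations used, which I have not verified in detail.

    library(glmmML)
    fit <- glmmML(y ~ x, family = binomial, data = d, cluster = id)
    fit$posterior.modes   # modes of the conditional distribution of the random intercepts
    str(fit)              # lists all components, in case the name differs in your version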
2006 Aug 21
1
New version of glmmML
A new version, 0.65-1, of glmmML is now on CRAN. It is a major rewrite of the inner structures, so frequent updates (bug fixes) may be expected for some time. News: * The Laplace and adaptive Gauss-Hermite approximations to the log likelihood function are fully implemented. The Laplace method is made the default. It should give results you can compare to the results from 'lmer' (for the
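A sketch of the comparison suggested in the announcement, on a placeholder data frame d with grouping factor id; the method argument name is taken from the glmmML help page, and glmer() stands in for the lmer() call of that era, since current lme4 fits binomial models through glmer().

    library(glmmML)
    library(lme4)
    f1 <- glmmML(y ~ x, family = binomial, data = d, cluster = id, method = "Laplace")
    f2 <- glmer(y ~ x + (1 | id), family = binomial, data = d)  # Laplace is the default here too
    cbind(f1$coefficients, fixef(f2))   # fixed effects should be broadly comparable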
2006 Jul 12
0
glmmML updated
I have uploaded a new version (0.30-2) of glmmML to CRAN today. This is a rather extensive upgrade, mostly internal. Adaptive Gauss-Hermite quadrature (GHQ) is now used for the evaluation of the integrals in the log likelihood function. The user can choose the number of points (default is 16); I _think_ that choosing 1 point will result in a Laplace approximation. The integrals in the score and
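A sketch of varying the number of quadrature points on a placeholder data frame d; the argument names method and n.points come from the current glmmML help page and may differ in the 0.30-2 version being announced.

    library(glmmML)
    fit16 <- glmmML(y ~ x, family = binomial, data = d, cluster = id,
                    method = "ghq", n.points = 16)
    fit1  <- glmmML(y ~ x, family = binomial, data = d, cluster = id,
                    method = "ghq", n.points = 1)   # one point: essentially a Laplace-type fit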
2007 May 08
3
ordered logistic regression with random effects. Howto?
I'd like to estimate an ordinal logistic regression with a random effect for a grouping variable. I do not find a pre-packaged algorithm for this. I've found methods glmmML (package: glmmML) and lmer (package: lme4) both work fine with dichotomous dependent variables. I'd like a model similar to polr (package: MASS) or lrm (package: Design) that allows random effects. I was
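For reference, a cumulative-link (proportional odds) model with a random intercept can now be fitted with the ordinal package, which appeared after this post; a sketch using that package's bundled wine data follows, as one answer to the question above.

    library(ordinal)
    fit <- clmm(rating ~ temp + contact + (1 | judge), data = wine)
    summary(fit)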
2005 Oct 12
0
Mixed model for negative binomial distribution (glmm.ADMB)
Dear R-list, I thought that I would let some of you know of a free R package, glmm.ADMB, that can handle mixed models for overdispersed and zero-inflated count data (negative binomial and Poisson). It was built using the AD Model Builder software (Otter Research) for random-effects modeling, is free, runs in R, and is available at: http://otter-rsch.com/admbre/examples/glmmadmb/glmmADMB.html I
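A sketch of a zero-inflated negative binomial mixed model using the later CRAN incarnation of this package (glmmADMB, function glmmadmb); the data frame d, response y, covariate x and grouping factor site are placeholders, and the original glmm.ADMB interface announced above may have differed.

    library(glmmADMB)
    fit <- glmmadmb(y ~ x + (1 | site), data = d,
                    family = "nbinom", zeroInflation = TRUE)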
2004 Jun 14
1
glmmML package
I'm trying to use the glmmML package on a Windows machine. When I try to install the package, I get the message: > {pkg <- select.list(sort(.packages(all.available = TRUE))) + if(nchar(pkg)) library(pkg, character.only=TRUE)} Error in dyn.load(x, as.logical(local), as.logical(now)) : unable to load shared library
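One common cause of this kind of dyn.load failure is a binary package built for a different R version, or a partially written install; a hedged first step is simply to reinstall from CRAN and reload.

    install.packages("glmmML")   # pick a CRAN mirror that carries Windows binaries
    library(glmmML)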
2006 Mar 08
1
Want to fit random intercept in logistic regression (testing lmer and glmmML)
Greetings. Here is sample code, with some comments. It shows how I can simulate data and estimate a glm with binomial family when there is no individual-level random error, but when I add random error to the linear predictor, I have a difficult time getting reasonable estimates of the model parameters or the variance component. There are no clusters here, just individual-level responses, so
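A minimal sketch of that kind of simulation, with invented names; note that with a single observation per cluster the binomial variance component is only weakly identified, which is consistent with the difficulty described.

    set.seed(1)
    N <- 1000
    x <- rnorm(N)
    b <- rnorm(N, sd = 1)                        # individual-level random intercepts
    y <- rbinom(N, size = 1, prob = plogis(-1 + 0.5 * x + b))
    d <- data.frame(y, x, id = factor(1:N))

    glm(y ~ x, family = binomial, data = d)      # ignores the random term
    library(glmmML)
    glmmML(y ~ x, family = binomial, data = d, cluster = id)
    library(lme4)
    glmer(y ~ x + (1 | id), family = binomial, data = d)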
2010 Jan 23
1
(nlme, lme, glmmML, or glmmPQL)mixed effect models with large spatial data sets
Hi, I have a spatial data set with many observations (~50,000) and would like to keep as much data as possible. There is spatial dependence, so I am attempting a mixed model in R with a spherical variogram defining the correlation as a function of distance between points. I have tried nlme, lme, glmmML, and glmmPQL. In all cases the matrix needed (which seems to be of size (N^2)/2 - N) is too large for my
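A sketch of the kind of call being described, with invented variable names; in nlme the spatial correlation structure applies within the levels of the random-effect grouping, so if (nearly) all observations fall in one group the within-group correlation matrix is N x N, which is consistent with the memory problem described.

    library(nlme)
    fit <- lme(response ~ elevation,
               random = ~ 1 | region,
               correlation = corSpher(form = ~ easting + northing),
               data = spdat)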
2006 Jun 28
0
New version of glmmML (p-values!)
A new version of 'glmmML' (0.28-4) has been uploaded to CRAN. The most important new feature is the possibility to get a p-value for the test of the hypothesis that the variance of the random effects is zero, which has been on the wishlist of many R users these days! Note two things: (i) glmmML only handles random intercepts for binomial and Poisson models; (ii) the p-value is calculated through bootstrapping
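A sketch of requesting the bootstrap test, assuming (from the current help page) that the boot argument gives the number of bootstrap replicates; the data frame d, response y, covariate x and cluster id are placeholders.

    library(glmmML)
    fit <- glmmML(y ~ x, family = binomial, data = d, cluster = id, boot = 500)
    summary(fit)   # should report a bootstrap p-value for H0: sigma = 0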
2011 Jun 22
2
error using glmmML()
Dear all, This question is basic but I am stumped. After running the code below, I receive the message: "non-integer #successes in a binomial glm!" model1 <- glmmML(y~Brood.Size*Density+Date.Placed+Species+Placed.Emerging+Year+rate.of.parperplot, data = data, cluster= data$Patch, family=binomial(link="logit")) My response variable is sex ratio, and I have learned quickly not
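That message usually means the response is a proportion rather than integer counts. With glm() the standard fix is a two-column matrix of counts (the column names below are hypothetical); whether glmmML accepts the same two-column form I am not certain, so check its help page before relying on the second call.

    ## plain glm, no random effect
    m0 <- glm(cbind(n.female, n.male) ~ Brood.Size * Density, data = data,
              family = binomial(link = "logit"))
    ## the analogous glmmML call, if the two-column response is supported
    m1 <- glmmML(cbind(n.female, n.male) ~ Brood.Size * Density, data = data,
                 cluster = Patch, family = binomial(link = "logit"))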
2006 Aug 21
0
R-packages posting guide (was: Re: [R-pkgs] New version of glmmML)
Maybe an R-packages posting guide with an example and an automatic append of a one or two line summary at the end of each article posted - as already done on r-help. On 8/21/06, Martin Maechler <maechler at stat.math.ethz.ch> wrote: > Hi Göran, > > >>>>> "GB" == Göran Broström <goran.brostrom at gmail.com> > >>>>> on Mon, 21 Aug
2006 Aug 22
1
a generic Adaptive Gauss Quadrature function in R?
Hi there, I am using SAS Proc NLMIXED to maximize a likelihood with multivariate normal random effects. An example is the two-part random effects model for repeated measures semi-continuous data with a cluster at 0. I use the "model y ~ general(loglike)" statement in Proc NLMIXED, so I can specify a general log likelihood function constructed by SAS programming statements. Then the
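For the R side of this, the statmod package supplies Gauss-Hermite nodes and weights; a sketch of plain (non-adaptive) quadrature for the expectation of a function of a N(0, sigma^2) random effect, which is the building block an adaptive version would re-centre at the conditional mode.

    library(statmod)
    gh    <- gauss.quad(20, kind = "hermite")   # nodes and weights for weight function exp(-x^2)
    sigma <- 1.3
    f     <- function(b) plogis(0.5 + b)        # example integrand
    ## E[f(b)] with b ~ N(0, sigma^2), via the change of variable b = sqrt(2)*sigma*x
    sum(gh$weights * f(sqrt(2) * sigma * gh$nodes)) / sqrt(pi)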
2007 Jul 18
1
filter out observation by condition
hello, I have a longitudinal data set:

idn  mort30  newinfec
  1       0         1
  1       0         1
  1       0         1
  1       0         1
  2       1         1
  2       1         1
  2       1         1
  3       0         0
  3       0         0
  3       0         0
  3       0         0
  3
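The condition being asked about is cut off in the preview above; purely as an illustration, a group-level filter such as "keep only subjects whose mort30 is never 1" can be written with ave().

    keep <- ave(dat$mort30, dat$idn, FUN = max) == 0
    dat.sub <- dat[keep, ]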
2008 Nov 19
1
F-Tests in generalized linear mixed models (GLMM)
Hi! I would like to perform an F-test over more than one variable within a generalized mixed model with Gamma distribution and log link function. For this purpose, I use the package mgcv. Similar tests may be done using the function "anova", as for example in the case of a normally distributed response. However, if I do so, the error message "error in eval(expr, envir, enclos) :
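A sketch of the kind of model and comparison being described, with invented names, using mgcv's random-effect smooths; anova() on two nested gam fits can report an F-type test of the dropped terms.

    library(mgcv)
    m1 <- gam(y ~ x1 + x2 + s(group, bs = "re"),
              family = Gamma(link = "log"), data = d)
    m0 <- gam(y ~ x1 + s(group, bs = "re"),
              family = Gamma(link = "log"), data = d)
    anova(m0, m1, test = "F")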
2009 Jan 23
1
predict function problem for glmmPQL
Hi all, I am using cross-validation to validate a generalized linear mixed effects model fitted using glmmPQL. I found that the predict function has a problem and I wonder if anyone has encountered the same problem? glmm1 = glmmPQL(y~aX+b,random=~1|sample,data=traindata) predict(glmm1,newdata=testdata,level=1,type="response") gives me all "NA"s. It works for level=0 (the
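Restating the calls from the post (the binomial family is my assumption, since it is not shown in the preview); one thing worth checking is whether every level of 'sample' in testdata also occurs in traindata, because there is no estimated random intercept for an unseen group.

    library(MASS)
    glmm1 <- glmmPQL(y ~ aX + b, random = ~ 1 | sample,
                     family = binomial, data = traindata)
    predict(glmm1, newdata = testdata, level = 0, type = "response")  # population level
    predict(glmm1, newdata = testdata, level = 1, type = "response")  # group level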
2004 Nov 09
1
Some questions to GLMM
Hello all R-users, I am relatively new to the R environment and also to GLMMs, so please don't be irritated if some questions don't make sense. I am using R 2.0.0 on Windows 2000. I investigated the occurrence of insects (count) in different parts of different plants (plantid) and also recorded some characteristics of the plant parts (e.g. thickness). It is an unbalanced design with 21
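A sketch of the kind of model described, using the variable names from the post (count, plantid, thickness) and an invented data frame plants; glmmML fits the random-intercept Poisson model, and glmer is the current lme4 equivalent.

    library(glmmML)
    fit1 <- glmmML(count ~ thickness, family = poisson, data = plants, cluster = plantid)
    library(lme4)
    fit2 <- glmer(count ~ thickness + (1 | plantid), family = poisson, data = plants)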
2004 Nov 01
1
GLMM
Hello, I have a problem concerning estimation of GLMMs. I used methods from 3 different packages (see program). I would expect similar results for glmm and glmmML. The results differ in the estimated standard errors, however. I compared the results to MASS, 4th ed., p. 297. The results from glmmML resemble the given result for 'Numerical integration', but the glmm output differs. For the
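The MASS (4th ed.) analysis referred to uses the bacteria data; a sketch of the glmmPQL versus glmmML part of such a comparison follows, with the caveat that matching the poster's exact code is an assumption.

    library(MASS); library(glmmML)
    bact <- within(bacteria, y <- as.numeric(y == "y"))   # 0/1 response
    fit.pql <- glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
                       family = binomial, data = bact)
    fit.ml  <- glmmML(y ~ trt + I(week > 2), cluster = ID,
                      family = binomial, data = bact)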