similar to: lme - incorporating measurement error with estimated V-C matrix

Displaying 20 results from an estimated 700 matches similar to: "lme - incorporating measurement error with estimated V-C matrix"

2003 Oct 23
1
Variance-covariance matrix for beta hat and b hat from lme
Dear all, Given an LME model (following the notation of Pinheiro and Bates 2000) y_i = X_i*beta + Z_i*b_i + e_i, is it possible to extract the variance-covariance matrix of the estimated beta hat and b_i hat from the fitted lme object? The reason for needing this is that I want interval predictions for the predicted values (at level = 0:1). The "predict.lme" seems to
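For the fixed-effects part of this question, a minimal sketch (using nlme and its built-in Orthodont data as an illustrative model, not the poster's actual fit) of what can be pulled straight from the fitted object:

library(nlme)
fm <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
vcov(fm)        # approximate variance-covariance matrix of the fixed effects (beta hat)
intervals(fm)   # approximate confidence intervals for fixed effects and variance components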
2007 Mar 09
1
help with zicounts
Dear UseRs: I have simulated data from a zero-inflated Poisson model, and would like to use a package like zicounts to test my code for fitting the model. My question is: can I use zicounts directly with the following simulated data? Create a sample of n=1000 observations from a ZIP model with no intercept and a single covariate x_{i} which is N(0,1). The logit part is logit(p_{i})=x_{i}*beta
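A sketch of the simulation described above, with assumed (illustrative) coefficient values; a possible check against pscl::zeroinfl is shown commented out, as one alternative ZIP fitter:

set.seed(1)
n       <- 1000
x       <- rnorm(n)                   # single covariate, N(0,1)
b_zero  <- 0.5                        # assumed coefficient in the logit (zero) part, no intercept
b_count <- 1.0                        # assumed coefficient in the Poisson part, no intercept
p       <- plogis(x * b_zero)         # P(structural zero)
lambda  <- exp(x * b_count)           # Poisson mean
y       <- ifelse(rbinom(n, 1, p) == 1, 0, rpois(n, lambda))
## e.g. library(pscl); zeroinfl(y ~ x - 1 | x - 1, dist = "poisson")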
2004 Apr 05
3
2 lme questions
Greetings, 1) Is there a nice way of extracting the variance estimates from an lme fit? They don't seem to be part of the lme object. 2) In a series of simulations, I am finding that with ML fitting one of my random effect variances is sometimes being estimated as essentially zero with massive CI instead of the finite value it should have, whilst using REML I get the expected value. I guess
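On question 1), a minimal sketch (again with nlme's built-in Orthodont data as a stand-in model) of where the variance estimates live:

library(nlme)
fm <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
VarCorr(fm)                              # random-effect and residual variances/SDs
fm$sigma^2                               # residual variance
as.numeric(VarCorr(fm)[, "Variance"])    # the variances as a plain numeric vector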
2010 May 18
1
Maximization of quadratic forms
Dear R Help, I am trying to fit a nonlinear model for a mean function $\mu(Data_i, \beta)$ for a fixed covariance matrix, where $\beta$ and $\mu$ are low-dimensional. More specifically, for fixed variance-covariance matrices $\Sigma_{z=0}$ and $\Sigma_{z=1}$ (according to a binary covariate $Z$), I am trying to minimize: $\sum_{i=1}^{n} (Y_i-\mu(Data_i,\beta))' \Sigma_{z=z_i}^{-1} (Y_i-
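A generic sketch of minimizing such a sum of quadratic forms over beta with optim(), with the two covariance matrices held fixed; the data, mean function, and parameter values below are all illustrative, not the poster's:

set.seed(1)
n <- 50; d <- 2
x <- rnorm(n)
z <- rbinom(n, 1, 0.5)
mu_fun    <- function(x_i, beta) c(beta[1] + beta[2] * x_i, beta[3] * exp(beta[4] * x_i))
beta_true <- c(1, 0.5, 2, -0.3)
Y <- t(sapply(x, mu_fun, beta = beta_true)) + matrix(rnorm(n * d, sd = 0.2), n, d)
Sig0_inv <- solve(matrix(c(1, 0.3, 0.3, 1), 2))   # assumed fixed Sigma_{z=0}, inverted
Sig1_inv <- solve(matrix(c(2, 0.5, 0.5, 2), 2))   # assumed fixed Sigma_{z=1}, inverted

obj <- function(beta) {
  total <- 0
  for (i in seq_len(n)) {
    r    <- Y[i, ] - mu_fun(x[i], beta)
    Sinv <- if (z[i] == 0) Sig0_inv else Sig1_inv
    total <- total + drop(t(r) %*% Sinv %*% r)   # (Y_i - mu)' Sigma^{-1} (Y_i - mu)
  }
  total
}
optim(c(0, 0, 1, 0), obj, method = "BFGS")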
2010 Feb 05
3
metafor package: effect sizes are not fully independent
In a classical meta-analysis model y_i = X_i * beta_i + e_i, the data {y_i} are assumed to be independent effect sizes. However, I'm encountering the following two scenarios: (1) Each source has multiple effect sizes, so the {y_i} are not fully independent of each other. (2) Each source has multiple effect sizes, and each effect size from a source can be categorized as one of the levels of a factor
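One common way to handle both scenarios in metafor is rma.mv() with random effects nested within source, adding the factor as a moderator; the toy data frame below (names and numbers included) is entirely illustrative:

library(metafor)
set.seed(1)
dat <- data.frame(
  study = rep(1:4, each = 3),                 # source identifier
  es_id = 1:12,                               # effect-size identifier within source
  type  = factor(rep(c("a", "b", "c"), 4)),   # factor level of each effect size
  yi    = rnorm(12, 0.3, 0.2),                # effect sizes
  vi    = runif(12, 0.01, 0.05)               # sampling variances
)
res1 <- rma.mv(yi, vi, random = ~ 1 | study/es_id, data = dat)               # scenario (1)
res2 <- rma.mv(yi, vi, mods = ~ type, random = ~ 1 | study/es_id, data = dat) # scenario (2)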
2007 Apr 12
1
LME: internal workings of QR factorization
Hi: I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem, conditional upon the variance parameters. The paper is fairly clear, except that some important level of detail is missing. For
2007 Apr 12
0
LME: internal workings of QR factorization --repost
Hi: I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with the idea of embedding the technique in my own C-level code. The basic idea is to rewrite the joint density in a form that mimics a single least-squares problem, conditional upon the variance parameters. The paper is fairly clear, except that some important level of detail is missing. For
2011 Jul 19
1
notation question
Dear list, I am currently writing up some of my R models in a more formal sense for a paper, and I am having trouble with the notation. Although this isn't really an 'R' question, it should help me to understand a bit better what I am actually doing when fitting my models! Using the analysis of co-variance example from MASS (fourth edition, p 142), what is the correct notation for the
2009 Jun 03
1
Using constrOptim() function
I have a function myFunction(beta,x) where beta is a vector of coefficients and x is a data frame (think of it as a matrix). I want to optimize the function myFunction() by ONLY changing beta, i.e. x stays constant, with 4 constraints. I have the following code (with a separate source file for the function): rm(list=ls()) source('mySourceFile')
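A sketch of passing a fixed data argument through constrOptim() while optimizing only over beta; the objective, the toy data, and the four box constraints here are all illustrative, not the poster's:

myFunction <- function(beta, x) {
  ## e.g. a residual sum of squares in beta for fixed data x
  sum((x$y - as.matrix(x[, c("x1", "x2")]) %*% beta)^2)
}
set.seed(1)
x   <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
x$y <- 1 * x$x1 + 2 * x$x2 + rnorm(50, sd = 0.1)
## constraints in the form ui %*% beta - ci >= 0, e.g. 0 <= beta_1, beta_2 <= 5
ui    <- rbind(c(1, 0), c(0, 1), c(-1, 0), c(0, -1))
ci    <- c(0, 0, -5, -5)
start <- c(0.5, 0.5)                           # must be strictly feasible
fit <- constrOptim(theta = start, f = myFunction, grad = NULL,
                   ui = ui, ci = ci, x = x)    # extra args (x) are passed on to f
fit$par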
2011 Apr 22
1
How to generate normal mixture random variables with given covariance function
Dear All, Suppose Z_i, i=1,...,m are marginally identically distributed as a two-component normal mixture p0*N(0,1) + (1-p0)*N(mu_i, 1), where the mu_i are identically distributed according to a mixture, and I have generated the Z_i one by one. Now suppose these m random variables are jointly m-dimensional normal with correlation matrix M = (m_ij). How do I proceed next, or how do I start correctly? Question:
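One reading of this question is "how do I generate variables with the stated mixture marginals and a given dependence structure". A Gaussian-copula-style sketch follows (all parameter values assumed); note that the correlation matrix M is imposed exactly on the latent normals, so the correlations of the resulting mixture variables will only approximate M:

library(MASS)                               # for mvrnorm
set.seed(1)
m  <- 5
p0 <- 0.8
mu <- rnorm(m, 2, 1)                        # assumed component means mu_i
M  <- 0.5^abs(outer(1:m, 1:m, "-"))         # assumed correlation matrix

pmix <- function(z, i) p0 * pnorm(z) + (1 - p0) * pnorm(z, mean = mu[i])   # mixture CDF
qmix <- function(u, i) uniroot(function(z) pmix(z, i) - u, lower = -20, upper = 40)$root

W <- mvrnorm(1000, mu = rep(0, m), Sigma = M)   # correlated latent normals
U <- pnorm(W)                                   # uniforms
Z <- sapply(1:m, function(i) sapply(U[, i], qmix, i = i))   # mixture marginals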
2018 Feb 16
2
[FORGED] Re: SE for all levels (including reference) of a factor after a GLM
On 16/02/18 15:28, Bert Gunter wrote: > This is really a statistical issue. What do you think the Intercept term > represents? See ?contrasts. > > Cheers, > Bert > > > > Bert Gunter > > "The trouble with having an open mind is that people keep coming along and > sticking things into it." > -- Opus (aka Berkeley Breathed in his "Bloom
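For the underlying question (an SE for every level of the factor, including the reference level), two standard routes are the no-intercept parameterisation and predict(..., se.fit = TRUE); a sketch on toy data (names and probabilities assumed):

set.seed(1)
d   <- data.frame(g = factor(rep(c("A", "B", "C"), each = 30)))
d$y <- rbinom(nrow(d), 1, c(A = 0.2, B = 0.5, C = 0.7)[as.character(d$g)])
fit <- glm(y ~ g, family = binomial, data = d)
## per-level estimates and SEs on the link scale, via the no-intercept parameterisation
summary(glm(y ~ 0 + g, family = binomial, data = d))$coefficients
## or via predict() at each level of the factor
predict(fit, newdata = data.frame(g = levels(d$g)), se.fit = TRUE)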
2013 Feb 27
0
A program running for a too long time
Dear all, The attached code is supposed to minimize a numerical integral subject to a nonlinear constraint. The code runs for 2 days and more without giving an output. Also, when I change the value of "m<-100" to "m<-1" it gives an output in a reasonable period, but with the message "maximum number of iterations in romberg has been reached". I need to: 1-
2007 Jun 14
0
random effects in logistic regression (lmer)-- identification question
Hello R users! I've been experimenting with lmer to estimate a mixed model with a dichotomous dependent variable. The goal is to fit a hierarchical model in which we compare the effect of individual and city-level variables. I've run up against a conceptual problem that I expect one of you can clear up for me. The question is about random effects in the context of a model fit with a
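A minimal sketch of the kind of model described (current lme4 uses glmer() for binomial mixed models; all variable names and coefficient values below are assumed):

library(lme4)
set.seed(1)
n_city <- 30; n_per <- 50
d <- data.frame(
  city   = factor(rep(1:n_city, each = n_per)),
  x_ind  = rnorm(n_city * n_per),              # individual-level covariate
  x_city = rep(rnorm(n_city), each = n_per)    # city-level covariate
)
u   <- rep(rnorm(n_city, sd = 0.5), each = n_per)   # city random intercepts
d$y <- rbinom(nrow(d), 1, plogis(-0.5 + 0.8 * d$x_ind + 0.4 * d$x_city + u))
m <- glmer(y ~ x_ind + x_city + (1 | city), family = binomial, data = d)
summary(m)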
2011 Jun 16
0
Update: Is there an implementation of loess with more than 3 parametric predictors or a trick to a similar effect?
Dear R developers! Considering I got no response or comments in the general r-help forum so far, perhaps my question is actually better suited for this list? I have added some more hopefully relevant technical details to my original post (edited below). Any comments gratefully received! Best regards, David Kreil. ---------- Dear R experts, I have a problem that is a related to the question
2013 Feb 18
2
error: Error in if (is.na(f0$objective)) { : argument is of length zero
Dear all, I tried running the following syntax, but it kept running for about 4 hours and then I got the following errors: Error in if (is.na(f0$objective)) { : argument is of length zero In addition: Warning message: In is.na(f0$objective) : is.na() applied to non-(list or vector) of type 'NULL' Here is the syntax itself: library('nloptr') library('pracma') #
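One plausible cause of this kind of error is the objective function returning NULL (or something non-numeric) at the starting point. A quick diagnostic sketch, with a placeholder objective and starting values, is to evaluate the objective at x0 before calling nloptr():

eval_f <- function(x) sum(x^2)    # placeholder objective; substitute the real one
x0     <- c(1, 1)                 # placeholder starting values
f0 <- eval_f(x0)
str(f0)                           # should be a finite numeric scalar, or a list whose
                                  # $objective element is a finite numeric scalar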
2010 Jul 13
1
Batch file export
Dear all, I have a code that generates data vectors within R. For example assume: z <- rlnorm(1000, meanlog = 0, sdlog = 1) Every time a vector has been generated I would like to export it into a csv file. So my idea is something as follows: for (i in 1:100) { z <- rlnorm(1000, meanlog = 0, sdlog = 1) write.csv(z, "c:/z_i.csv") Where "z_i.csv" is a filename that is
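A sketch of one way to build a different file name on each iteration, using paste0() or sprintf() (the output directory is the poster's, kept as an illustration):

for (i in 1:100) {
  z <- rlnorm(1000, meanlog = 0, sdlog = 1)
  write.csv(z, file = paste0("c:/z_", i, ".csv"), row.names = FALSE)
  ## or: write.csv(z, file = sprintf("c:/z_%03d.csv", i), row.names = FALSE)
}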
2010 Nov 03
1
Orthogonalization with different inner products
Suppose one wanted to consider random variables X_1,...,X_n and from each subtract off the piece which is correlated with the previous variables in the list, i.e. make new variables Z_i so that Z_1 = X_1 and Z_i = X_i - cov(X_i,Z_1)*Z_1/var(Z_1) - ... - cov(X_i,Z_{i-1})*Z_{i-1}/var(Z_{i-1}). I have code to do this but I keep getting a "non-conformable array" error in the line with the covariance.
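A sketch of the projection described above on simulated columns (sizes illustrative); with sample cov()/var() used throughout, the resulting columns are uncorrelated up to floating-point error:

set.seed(1)
X <- matrix(rnorm(500), ncol = 5)      # columns play the role of X_1, ..., X_n
Z <- X
for (i in 2:ncol(X)) {
  for (j in 1:(i - 1)) {
    Z[, i] <- Z[, i] - cov(X[, i], Z[, j]) / var(Z[, j]) * Z[, j]
  }
}
round(cov(Z), 12)                      # off-diagonal entries are numerically zero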
2018 Jan 17
1
mgcv::gam is it possible to have a 'simple' product of 1-d smooths?
I am trying to test out several mgcv::gam models in a scalar-on-function regression analysis. The following is the 'hierarchy' of models I would like to test: (1) Y_i = a + integral[ X_i(t)*Beta(t) dt ] (2) Y_i = a + integral[ F{X_i(t)}*Beta(t) dt ] (3) Y_i = a + integral[ F{X_i(t),t} dt ]; equivalents for discrete data might be: (1) Y_i = a + sum_t[ L_t * X_it * Beta_t ] (2) Y_i
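For model (1) in discrete form, one possible route is mgcv's matrix ("summation convention") arguments described in ?linear.functional.terms; a sketch on simulated placeholder data, without quadrature weights, might look like this (model (3) can reportedly be approached similarly with te() on matrix arguments, per the same help page):

library(mgcv)
set.seed(1)
n  <- 200; nt <- 25
Tm <- matrix(seq(0, 1, length = nt), n, nt, byrow = TRUE)   # evaluation points t
Xm <- matrix(rnorm(n * nt), n, nt)                          # X_i(t) on that grid
y  <- rnorm(n) + rowMeans(Xm * sin(2 * pi * Tm))            # toy response
## (1)  Y_i = a + sum_t X_it * beta(t): a matrix 'by' argument gives sum_t X_it * beta(T_it)
m1 <- gam(y ~ s(Tm, by = Xm))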
2010 Sep 29
1
nlminb and optim
I am using both nlminb and optim to get MLEs from a likelihood function I have developed. AFAIK, the model has not been previously used in this way, and so I am struggling a bit to unit-test my code, since I don't have another data set to compare this kind of estimation to. The likelihood I have is (in TeX below) \begin{equation} \label{eqn:marginal} L(\beta) = \prod_{s=1}^N \int
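A generic sketch of cross-checking the two optimizers on the same negative log-likelihood (the gamma likelihood here is a stand-in, not the poster's marginal likelihood):

set.seed(1)
x   <- rgamma(200, shape = 2, rate = 1.5)
nll <- function(par) -sum(dgamma(x, shape = exp(par[1]), rate = exp(par[2]), log = TRUE))
start      <- c(0, 0)
fit_optim  <- optim(start, nll, method = "BFGS")
fit_nlminb <- nlminb(start, nll)
## the two sets of estimates (back on the original scale) should agree closely
rbind(optim = exp(fit_optim$par), nlminb = exp(fit_nlminb$par))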
2010 Feb 06
1
Canberra distance
Hi the list, According to what I know, the Canberra distance between X and Y is: sum[ |x_i - y_i| / (|x_i| + |y_i|) ] (with | | denoting the absolute value function). In the source code of the Canberra distance in the file distance.c, we find: sum = fabs(x[i1] + x[i2]); diff = fabs(x[i1] - x[i2]); dev = diff/sum; which corresponds to the formula: sum[ |x_i - y_i| /
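A quick sketch of the discrepancy being described: the two formulas agree whenever x_i and y_i have the same sign and differ otherwise, so a small example with mixed signs (values illustrative) makes the comparison with dist() easy to see:

x <- c(1, -2, 3)
y <- c(2,  1, -4)
sum(abs(x - y) / (abs(x) + abs(y)))     # |x_i - y_i| / (|x_i| + |y_i|)
sum(abs(x - y) / abs(x + y))            # |x_i - y_i| / |x_i + y_i|, as in the quoted distance.c
dist(rbind(x, y), method = "canberra")  # compare with what the installed R version computes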