Displaying 10 results from an estimated 10 matches for "beta_j".
2002 Mar 29
1
help with lme function
Hi all,
I have some difficulties with the lme function, so here is my problem.
Suppose I have the following model
y_(ijk) = beta_j + e_i + epsilon_(ijk)
where beta_j are fixed effects, e_i is a random effect and
epsilon_(ijk) is the error.
If I want to estimate such a model, I execute
> lme(y ~ vec.J, random = ~ 1 | vec.I)
where y is the vector of my data, vec.J is a factor object and vec.I
is the vector for the i indic...
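A minimal sketch of how such a fit might look with nlme, using simulated data with factors I and J in place of the poster's vec.I and vec.J (all names and simulation parameters below are placeholders, not from the original post):

library(nlme)

## Hypothetical data with the structure described above:
## fixed effect beta_j (factor J), random intercept e_i (grouping factor I)
set.seed(1)
dat <- expand.grid(I = factor(1:10), J = factor(1:3), k = 1:2)
beta <- c(0, 1, 2)                      # assumed fixed effects
e    <- rnorm(10, sd = 0.5)             # assumed random effects
dat$y <- beta[dat$J] + e[dat$I] + rnorm(nrow(dat), sd = 0.2)

## The lme call from the post, with the random part written out explicitly
fit <- lme(y ~ J, random = ~ 1 | I, data = dat)
summary(fit)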
2009 Nov 29
1
optim or nlminb for minimization, which to believe?
...s the total number of individuals, $K$ indexes the total number of items, $p(x|\theta,\beta)$ is the data likelihood, and $f(\theta)$ is a population distribution. For the Rasch model, the data likelihood is:
\begin{equation}
p(x|\theta,\beta) = \prod^N_{i=1}\prod^K_{j=1} \Pr(x_{ij} = 1 | \theta_i, \beta_j)^{x_{ij}} \left[1 - \Pr(X_{ij} = 1 | \theta_i, \beta_j)\right]^{(1-x_{ij})}
\end{equation}
\begin{equation}
\label{rasch}
\Pr(x_{ij} = 1 | \theta_i, \beta_j) = \frac{1}{1 + e^{-(\theta_i-\beta_j)}} \quad i = (1, \ldots, N); j = (1, \ldots, K)
\end{equation}
\noindent where $\theta_i$ is the ability...
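A minimal sketch of the joint (not marginal) Rasch negative log-likelihood, minimized with both optim and nlminb for comparison; the data and starting values are made up, and the post's population distribution $f(\theta)$ is not reproduced here:

set.seed(2)
N <- 50; K <- 10                          # persons, items (hypothetical sizes)
theta <- rnorm(N); beta <- seq(-2, 2, length.out = K)
p <- plogis(outer(theta, beta, "-"))      # Pr(x_ij = 1) = 1/(1 + exp(-(theta_i - beta_j)))
x <- matrix(rbinom(N * K, 1, p), N, K)

## Joint negative log-likelihood in (theta_1..theta_N, beta_1..beta_K)
negll <- function(par) {
  th <- par[1:N]; b <- par[-(1:N)]
  eta <- outer(th, b, "-")
  -sum(x * plogis(eta, log.p = TRUE) + (1 - x) * plogis(-eta, log.p = TRUE))
}

start <- rep(0, N + K)
fit1 <- optim(start, negll, method = "BFGS", control = list(maxit = 500))
fit2 <- nlminb(start, negll)
c(optim = fit1$value, nlminb = fit2$objective)   # objectives should be close

## Note: theta and beta are only identified up to a common shift in this joint
## formulation, so the two runs can land on different parameter values while
## reaching nearly equal objective values.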
2006 Mar 01
2
inconsistency between anova() and summary() of glmmPQL
Dear All,
Could anyone explain to me how it is possible that one factor
in a glmmPQL model is non-significant according to the
anova() function, whereas it turns out to be significant
(or at least some of its levels differ significantly from
some other levels) according to the summary() function?
What is the truth? Which results should I believe? And is
there any other way of testing for the
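The general distinction behind this (shown here with a plain lm() toy example rather than glmmPQL, which I do not reproduce): anova() reports one F test for the factor as a whole, while summary() reports separate t tests of each level against the reference level, so the two can easily disagree. The data below are invented:

set.seed(3)
f <- factor(rep(letters[1:4], each = 10))
y <- rnorm(40) + c(a = 0, b = 0.2, c = -0.2, d = 0.8)[f]
m <- lm(y ~ f)
anova(m)      # one overall F test for the whole factor f
summary(m)    # separate t tests of levels b, c, d against the baseline a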
2002 Feb 20
2
How to get the penalized log likelihood from smooth.spline()?
...= \int f''(t)^2 dt is the quadratic roughness
functional used. Since J(f) is quadratic one can find a matrix \Sigma such
that J(f) = c^T{\Sigma}c where c is the vector of spline coefficients.
With J(f) defined as above the elements of \Sigma become
\Sigma_{ij} = \int \beta_i''(t)\beta_j''(t) dt
where \beta(t) is the vector of B-spline basis functions. Finally, writing
the matrix W as W := diag(\sqrt{w}) one can write L(f) as
L(f) = (y - f)^T W^2 (y - f) + \lambda c^T{\Sigma}c
which is the form used in help(smooth.spline). So back to my question,
using smooth.spline(),...
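One way to evaluate something like L(f) for a smooth.spline() fit without touching the spline coefficients is to approximate J(f) = \int f''(t)^2 dt numerically from predict(..., deriv = 2). Note that the lambda stored in the fit is defined on an internally rescaled x axis, so the product lambda * J(f) computed this way is only approximate and need not match R's internal penalized criterion exactly. A sketch with made-up data:

set.seed(4)
x <- sort(runif(100)); y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)
fit <- smooth.spline(x, y)

## Weighted residual sum of squares (weights default to 1 here)
rss <- sum((y - predict(fit, x)$y)^2)

## Approximate J(f) = int f''(t)^2 dt by the trapezoidal rule on a fine grid
grid <- seq(min(x), max(x), length.out = 1000)
d2 <- predict(fit, grid, deriv = 2)$y
Jf <- sum(diff(grid) * (d2[-1]^2 + d2[-length(d2)]^2) / 2)

## Penalized criterion in the form L(f) = RSS + lambda * J(f)
## (fit$lambda lives on a rescaled x axis, so treat this as approximate)
rss + fit$lambda * Jf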
2007 Jan 20
1
aov y lme
...ults in Montgomery D.C (2001, chap 13,
example 13-1).
Briefly, there are three suppliers, four batches nested within suppliers
and three determinations of purity (response variable) on each batch. It is
a two-stage nested design, where suppliers are fixed and batches are random.
y_ijk = mu + tau_i + beta_j (nested in tau_i) + epsilon_ijk
Here are the data,
purity<-c(1,-2,-2,1,
-1,-3, 0,4,
0,-4, 1, 0,
1,0,-1,0,
-2,4,0,3,
-3,2,-2,2,
2,-2,1,3,
4,0,-1,2,
0,2,2,1)
suppli<-factor(c(rep(1,12),rep(2,12),rep(3,12)))
b...
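The batch factor is cut off in the excerpt above, so the construction below is a guess at the data layout for Montgomery's example 13-1 (it assumes the four values in each row of purity are the four batches within a supplier) and is not taken from the post. Given purity and suppli as defined above, one way to fit both the classical nested ANOVA and the equivalent mixed model is:

library(nlme)

## Hypothetical batch factor: assumes the 4 columns of each row are the batches
batch <- factor(rep(1:4, times = 9))
dat <- data.frame(purity = purity, suppli = suppli, batch = batch)
dat$batch.id <- with(dat, interaction(suppli, batch))   # unique nested batch label

## Nested ANOVA: suppliers fixed, batches random within suppliers
summary(aov(purity ~ suppli + Error(batch.id), data = dat))

## Equivalent mixed model with lme
fit <- lme(purity ~ suppli, random = ~ 1 | batch.id, data = dat)
anova(fit)
summary(fit)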
2007 Jan 19
0
(no subject)
...ults in Montgomery D.C (2001, chap 13,
example 13-1).
Briefly, there are three suppliers, four batches nested within suppliers
and three determinations of purity (response variable) on each batch. It is
a two-stage nested design, where suppliers are fixed and batches are random.
y_ijk = mu + tau_i + beta_j (nested in tau_i) + epsilon_ijk
Here are the data,
purity<-c(1,-2,-2,1,
-1,-3, 0,4,
0,-4, 1, 0,
1,0,-1,0,
-2,4,0,3,
-3,2,-2,2,
2,-2,1,3,
4,0,-1,2,
0,2,2,1)
suppli<-factor(c(rep(1,12),rep(2,12),rep(3,12)))
b...
2007 Aug 10
0
half-logit and glm (again)
I know this has been dealt with before on this list, but the previous
messages lacked detail, and I haven't figured it out yet.
The model is:
x_{ij} = \mu + \alpha_i + \beta_j
\alpha is a random effect (subjects), and \beta is a fixed effect
(condition).
I have a link function:
p_{ij} = 0.5 + 0.5 / (1 + exp(-x_{ij}))
which is simply a logistic curve transformed to lie between 0.5 and 1.
The data y_{ij} ~ Binomial( p_{ij}, N_{ij} )
I've generated data using this...
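For the fixed-effect part only, one can build a custom "link-glm" object for this half-logit link and pass it to binomial(); handling the random subject effect alpha_i needs a mixed-model fitter and is not shown. A sketch with simulated data (all names and values are placeholders):

## Half-logit link: p = 0.5 + 0.5 * plogis(eta), so eta = qlogis(2p - 1)
halflogit <- structure(list(
  linkfun  = function(mu)  qlogis(2 * mu - 1),
  linkinv  = function(eta) 0.5 + 0.5 * plogis(eta),
  mu.eta   = function(eta) 0.5 * dlogis(eta),
  valideta = function(eta) TRUE,
  name     = "halflogit"
), class = "link-glm")

## Simulated data: fixed condition effect only (no random subject effect here)
set.seed(5)
cond <- factor(rep(1:3, each = 30))
eta  <- c(-1, 0, 1)[cond]
Nij  <- 20
succ <- rbinom(length(eta), Nij, 0.5 + 0.5 * plogis(eta))

## mustart keeps the starting values above 0.5, where the link is defined
fit <- glm(cbind(succ, Nij - succ) ~ cond,
           family  = binomial(link = halflogit),
           mustart = rep(0.75, length(succ)))
summary(fit)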
2001 Oct 17
3
Type III sums of squares.
...any actual interest.
This would go much further toward bringing the disciple to true
enlightenment.
Point 3 --- what hypothesis is being tested by SSA?
Let factor A correspond to index i, and B to index j.
Let the cell means be mu_ij. (In the overparameterized
notation, mu_ij = mu + alpha_i + beta_j + gamma_ij.)
The hypothesis being tested is
H_0: mu_1.-bar = mu_2.-bar = ... = mu_a.-bar
where factor A has a levels, and ``mu_i.-bar'' means
the average (arithmetic mean) of mu_i1, mu_i2, ..., mu_ib.
(Note --- factor B has b levels.)
I.e. the hypothesis is that there is no difference...
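For reference, one common recipe in R for obtaining an SSA that tests this hypothesis of equal unweighted row means is sum-to-zero contrasts plus drop1(); the data below are invented and deliberately unbalanced for illustration:

set.seed(6)
d <- expand.grid(A = factor(1:3), B = factor(1:2), rep = 1:4)
d <- d[-(1:2), ]                        # make the design unbalanced
d$y <- rnorm(nrow(d)) + as.numeric(d$A)

## Test of A as equality of the unweighted marginal means mu_1.-bar, mu_2.-bar, mu_3.-bar
op <- options(contrasts = c("contr.sum", "contr.poly"))
fit <- lm(y ~ A * B, data = d)
drop1(fit, . ~ ., test = "F")           # the A row corresponds to SSA above
options(op)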
2013 May 29
1
quick question about glm() example
I don't have a copy of Dobson (1990) from which the glm.D93 example is
taken in example("glm"), but I'm strongly suspecting that these are
made-up data rather than real data; the means of the responses within
each treatment are _identical_ (equal to 16 2/3), so two of the
parameters are estimated as being zero (within machine tolerance). (At
this moment I don't understand
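For anyone without Dobson to hand, the data in example("glm") are small enough to check directly. Quoted here from the help page from memory, so verify against your own R installation:

## Dobson (1990) counts, as used for glm.D93 in example("glm")
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)

tapply(counts, treatment, mean)   # all three treatment means equal 16.667
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())
coef(glm.D93)                     # treatment coefficients ~ 0 (within tolerance)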
2006 May 20
5
Can lmer() fit a multilevel model embedded in a regression?
I would like to fit a hierarchical regression model from Witte et al.
(1994; see reference below). It's a logistic regression of a health
outcome on quantities of food intake; the linear predictor has the form,
X*beta + W*gamma,
where X is a matrix of consumption of 82 foods (i.e., the rows of X
represent people in the study, the columns represent different foods,
and X_ij is the amount of
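Just to make the model structure concrete, here is a small simulation from the described linear predictor; it does not reproduce Witte et al.'s hierarchical prior on beta or an lmer() fit, and all dimensions and values are invented:

set.seed(7)
n <- 200; n.foods <- 82; n.covs <- 3             # hypothetical dimensions
X <- matrix(rpois(n * n.foods, 2), n, n.foods)   # food intake amounts
W <- matrix(rnorm(n * n.covs), n, n.covs)        # other covariates
beta  <- rnorm(n.foods, sd = 0.05)               # food effects (to get a multilevel model)
gamma <- c(0.5, -0.3, 0.2)

eta <- drop(X %*% beta + W %*% gamma)            # the linear predictor X*beta + W*gamma
y   <- rbinom(n, 1, plogis(eta))                 # logistic regression outcome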