Displaying 20 results from an estimated 25 matches for "satterthwaites".
2007 Mar 29
3
Tail area of sum of Chi-square variables
Dear R experts,
I was wondering if there are any R functions that give the tail area
of a sum of chi-square variables of the form:
a_1 X_1 + a_2 X_2
where a_1 and a_2 are constants and X_1 and X_2 are independent chi-square variables with different degrees of freedom.
Thanks,
Klaus
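As far as I know, base R has no function for this exact distribution (packages such as CompQuadForm provide exact routines like davies()), but a Satterthwaite-style approximation that matches the mean and variance of the weighted sum to a scaled chi-square is easy to sketch; the values of a, d and q below are only illustrative:
tail_approx <- function(q, a, d) {
  m  <- sum(a * d)          # mean of sum(a_i * X_i), X_i ~ chi-square(d_i)
  v  <- sum(2 * a^2 * d)    # variance of the weighted sum
  s  <- v / (2 * m)         # match to s * chi-square(nu): s*nu = m, 2*s^2*nu = v
  nu <- 2 * m^2 / v
  pchisq(q / s, df = nu, lower.tail = FALSE)
}
a <- c(0.7, 1.4); d <- c(3, 8)
tail_approx(20, a, d)
mean(a[1] * rchisq(1e5, d[1]) + a[2] * rchisq(1e5, d[2]) > 20)  # Monte Carlo check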
2005 Oct 25
1
Confidence Intervals for Mixed Effects
I'm fairly new to R and am wondering if anybody knows of R code to
calculate confidence intervals for parameters (fixed effects and variance
components) from mixed-effects models based on Satterthwaite's method?
I'm also interested in Satterthwaite-based confidence intervals for linear
combinations (mostly sums) of various variance components.
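For the linear-combination part, a minimal sketch of a Satterthwaite interval for a non-negative combination of mean squares (the coefficients, mean squares and df below are made-up illustrative inputs, not taken from any fitted model):
satt_ci <- function(a, MS, df, level = 0.95) {
  est   <- sum(a * MS)                       # point estimate of the combination
  nu    <- est^2 / sum((a * MS)^2 / df)      # Satterthwaite approximate df
  alpha <- 1 - level
  c(estimate = est, df = nu,
    lower = nu * est / qchisq(1 - alpha / 2, nu),
    upper = nu * est / qchisq(alpha / 2, nu))
}
satt_ci(a = c(1, 1), MS = c(4.2, 1.3), df = c(5, 30))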
2002 Oct 05
1
Welch versus Satterthwaith (PR#2111)
This is not a bug report, but I didn't see another way to ask a question.
For the approximate t-test assuming unequal variances, the R docs cite
Welch's method for the df of the approximating distribution.
I have several methods books, and they all use Satterthwaite's method.
Why does R use Welch's method, and where can I learn about it?
Sincerely,
David Allen
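For reference, the degrees of freedom that t.test() reports for the unequal-variance test can be reproduced directly from the two samples; this sketch only illustrates the Welch-Satterthwaite formula:
welch_df <- function(x, y) {
  v1 <- var(x) / length(x); v2 <- var(y) / length(y)
  (v1 + v2)^2 / (v1^2 / (length(x) - 1) + v2^2 / (length(y) - 1))
}
set.seed(1)
x <- rnorm(8); y <- rnorm(12, sd = 2)
welch_df(x, y)
t.test(x, y, var.equal = FALSE)$parameter   # same df as computed above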
2002 Nov 13
0
Welch versus Satterthwaith (PR#2111)
>>>>> "TL" == Thomas Lumley <tlumley@u.washington.edu>
>>>>> on Sun, 6 Oct 2002 09:19:27 -0700 (PDT) writes:
TL> On Sat, 5 Oct 2002 roxburg@kih.net wrote:
>> This is not a bug report but didn't see another way to
>> ask a question.
TL> Well, you could try the r-help or r-devel mailing lists
>> For
2004 Nov 22
1
Questions of Significance Analysis of Microarrays(SAM){siggenes}
Dear All:
Significance Analysis of Microarrays (SAM)
As we know, SAM does multiple t.test calls, along the lines of:
## Default S3 method:
t.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0,
       paired = FALSE, var.equal = FALSE, conf.level = 0.95, ...)
var.equal: a logical variable indicating whether to treat the two variances
as being equal. If 'TRUE'
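A rough sketch of the kind of row-wise Welch tests meant here, on a simulated genes-by-samples matrix (the matrix and grouping are made up for illustration; SAM itself adds a small fudge constant and permutation-based FDR on top of this):
set.seed(1)
expr <- matrix(rnorm(100 * 10), nrow = 100)        # 100 genes, 10 samples
grp  <- rep(c("A", "B"), each = 5)
pvals <- apply(expr, 1, function(g)
  t.test(g[grp == "A"], g[grp == "B"], var.equal = FALSE)$p.value)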
2005 Apr 24
1
random interactions in lme
Hi All,
I'm taking an Experimental Design course this semester, and have spent
many long hours trying to coax the professor's SAS examples into
something that will work in R (I'd prefer that the things I learn not
be tied to a license). It's been a long semester in that regard.
One thing that has really frustrated me is that lme has an extremely
counterintuitive way for
2017 Nov 29
0
How to extract coefficients from sequential (type 1), ANOVAs using lmer and lme
(This time with the r-help in the recipients...)
Be careful when mixing lme4 and lmerTest together -- lmerTest extends
and changes the behavior of various lme4 functions.
From the help page for lme4-anova (?lme4::anova.merMod)
> 'anova': returns the sequential decomposition of the contributions
> of fixed-effects terms or, for multiple arguments, model
>
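A minimal sketch of that masking, using the sleepstudy data shipped with lme4 (the formula is just an example):
library(lmerTest)                     # loads lme4 and extends its methods
f1 <- lme4::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
f2 <- lmerTest::lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
anova(f1)   # lme4's sequential table: no denominator df, no p-values
anova(f2)   # lmerTest's table: Satterthwaite df and p-values (type III by default)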
2013 Jan 09
0
[solved] t-test behavior given that the null hypothesis is true
Hi Ted,
yes this was the problem. Thank you very much.
best
idaios
On Wed, Jan 9, 2013 at 4:51 PM, Ted Harding <Ted.Harding@wlandres.net>wrote:
> Ah! You have assigned a parameter "equal.var=TRUE", and "equal.var"
> is not a listed parameter for t.test() -- see ?t.test :
>
> t.test(x, y = NULL,
> alternative = c("two.sided",
2017 Dec 01
0
How to extract coefficients from sequential (type 1), ANOVAs using lmer and lme
Please reread my point #1: the tests of the (individual) coefficients in
the model summary are not the same as the ANOVA tests. There is a
certain correspondence between the two (i.e. between the coding of your
categorical variables and the type of sum of squares; and for a model
with a single predictor, F=t^2), but they are not the same in general.
The t-test in the model coefficients is simply
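A quick illustration of the single-predictor case where the two do coincide, again with the sleepstudy example data (an assumption, not the poster's model):
library(lmerTest)
fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
coef(summary(fit))["Days", "t value"]^2   # squared t from the coefficient table
anova(fit)["Days", "F value"]             # equals the squared t for this 1-df term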
2010 Sep 20
3
Depletion of small p values upon iterative testing of identical normal distributions
Dear all,
I'm performing a t-test on two normal distributions with identical mean and
standard deviation, and repeating this test a very large number of times to
describe a representative p-value distribution under the null. As part of
this, the program bins these values into 10 evenly spaced bins between 0
and 1 and reports the number of observations in each bin. What I have
noticed
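What one would expect under the null can be checked with a small simulation along these lines (sample sizes and replicate count are arbitrary):
set.seed(1)
pvals <- replicate(1e4, t.test(rnorm(50), rnorm(50))$p.value)
table(cut(pvals, breaks = seq(0, 1, by = 0.1)))   # roughly 1000 per bin if p is uniform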
2024 May 05
2
lmer error: number of observations <= number of random effects
I am running a multilevel growth curve model to examine predictors of
social anhedonia (SA) trajectory through ages 12, 15 and 18. SA is a
continuous numeric variable. The age variable (Index1) has been coded as 0
for age 12, 1 for age 15 and 2 for age 18. I am currently using a time
varying predictor, stress (LSI), which was measured at ages 12, 15 and 18,
to examine whether trajectory/variation
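A guess at the shape of the call that triggers the error in the subject line (dat is a hypothetical long-format data frame; SA, Index1, LSI and ID are the variables named in the post, and the fixed part is an assumption):
library(lme4)
fit <- lmer(SA ~ Index1 * LSI + (1 + Index1 + LSI | ID), data = dat)
# fails: three random effects per ID, but at most three observations per ID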
2016 Jul 27
0
new package clubSandwich: Cluster-Robust (Sandwich) Variance Estimators with Small-Sample Corrections
Dear R users:
I'm happy to announce the first CRAN release of the clubSandwich package:
https://cran.r-project.org/web/packages/clubSandwich
clubSandwich provides several variants of the cluster-robust variance
estimator for ordinary and weighted least squares linear regression models,
including the bias-reduced linearization estimator of Bell and McCaffrey
(2002). The package includes
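A minimal sketch of the intended usage, assuming a clustered data frame dat with outcome y, predictor x and a cluster id school (all hypothetical names):
library(clubSandwich)
fit   <- lm(y ~ x, data = dat)
V_CR2 <- vcovCR(fit, cluster = dat$school, type = "CR2")   # Bell-McCaffrey (CR2)
coef_test(fit, vcov = V_CR2, test = "Satterthwaite")       # small-sample t tests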
2009 Mar 30
1
Comparing Points on Two Regression Lines
Dear R users:
Suppose I have two different response variables y1, y2 that I regress separately on different explanatory variables, x1 and x2 respectively. I need to compare points on the two regression lines.
These are the x and y values for each line:
x1<-c(0.5,1.0,2.5,5.0,10.0)
y1<-c(204,407,1195,2740,4313)
x2<-c(2.5,5.0,10.0,25.0)
y2<-c(440,713,1520,2634)
Suppose we need to
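One common way to do this, sketched with the x1/y1/x2/y2 vectors above (an approach I am assuming, not necessarily what the poster intended): pool the two data sets into a single model with a line indicator, so predictions on both lines at a shared x come with standard errors:
dat <- data.frame(x = c(x1, x2), y = c(y1, y2),
                  line = rep(c("1", "2"), c(length(x1), length(x2))))
fit <- lm(y ~ x * line, data = dat)
predict(fit, newdata = data.frame(x = 5, line = c("1", "2")), se.fit = TRUE)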
2006 Feb 22
1
Degree of freedom for contrast t-tests in lme
Dear all
Somebody may have asked this before, but I could not find any answers on the web,
so let me ask a question about lme.
I have a fixed factor with, say, three levels (A, B, C), where each level has a
different size (i.e. no. of observations; e.g. A > B > C). When I run an lme
model, I get the same degrees of freedom for all the contrast t-tests (e.g. A vs B
or B vs C). I have tried this to
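This behaviour can be seen with the Machines data shipped with nlme, which has a three-level factor like the one described (just an illustration, not the poster's data):
library(nlme)
fit <- lme(score ~ Machine, random = ~ 1 | Worker, data = Machines)
summary(fit)$tTable   # the MachineB and MachineC contrast rows report the same DF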
2002 Mar 31
1
lme degrees of freedoms: SAS and R
Dear list,
I ran a mixed-effects model using R 1.4.1 and SAS 8.0 on the SIMS data found
in the SASmixed package and found that the degrees of freedom for the fixed
effects are very different.
From R, df = n - v - 1, where n is the total # of observations and v is the # of
levels of the grouping factor. From SAS, df = v - 1. Am I wrong about this
or can somebody explain which is correct and why?
Thanks a
2007 Jun 05
1
lme vs. SAS proc mixed. Point estimates and SEs are the same, DFs are different
R 2.3
Windows XP
I am trying to understand lme. My aim is to run a random-effects regression in which the intercept and jweek are random effects. I am comparing output from SAS PROC MIXED with output from R. The point estimates and the SEs are the same; however, the DFs and the p-values are different. I am clearly doing something wrong in my R code. I would appreciate any suggestions of how I can
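A sketch of the kind of lme call described, with hypothetical names (dat, y, id; jweek is from the post):
library(nlme)
fit <- lme(y ~ jweek, random = ~ 1 + jweek | id, data = dat)
summary(fit)   # the DF column follows nlme's inner-outer rule, which differs from PROC MIXED's default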
2017 Dec 26
1
identifying convergence or non-convergence of mixed-effects regression model in lme4 from model output
Hi R community!
I've fitted three mixed-effects regression models to a thousand
bootstrap samples (case-resampling regression) using the lme4 package in
a custom-built for loop. The only output I saved was the inferential
statistics for my fixed and random effects. I did not save any output
related to the performance of the machine learning algorithm used to fit
the models (REML=FALSE).
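For what it's worth, the convergence messages live on the fitted object itself, so they could be saved inside the loop; a sketch with the sleepstudy data shipped with lme4:
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy, REML = FALSE)
fit@optinfo$conv$lme4$messages              # NULL when no convergence warnings were raised
length(fit@optinfo$conv$lme4$messages) > 0  # TRUE would flag a suspect fit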
2024 May 06
0
[R-sig-ME] lmer error: number of observations <= number of random effects
Dear Srinidhi,
You are trying to fit 1 random intercept and 2 random slopes per
individual, while you have at most 3 observations per individual. You
simply don't have enough data to fit the random slopes. Reduce the random
part to (1|ID).
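In code, with dat standing in for the poster's data frame (a hypothetical name, and the fixed part is a guess), the suggested reduction looks like:
library(lme4)
fit <- lmer(SA ~ Index1 * LSI + (1 | ID), data = dat)   # random intercept only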
Best regards,
Thierry
ir. Thierry Onkelinx
Statisticus / Statistician
Vlaamse Overheid / Government of Flanders
INSTITUUT VOOR NATUUR- EN BOSONDERZOEK