Displaying 17 results from an estimated 17 matches for "nonsignificance".
2007 Apr 09
1
testing differences between slope differences with lme
hello,
I have a mixed-effects model which gives slope and intercept terms for 6
groups (diagnosis (3 levels) by risk group (2 levels)). The fixed part of
the model is --
brain volume ~ Diagnosis + Risk Group + (Risk Group * age : Diagnosis) - 1
thus allowing risk-group age/slope terms to vary within diagnosis and
omitting a nonsignificant diagnosis-by-risk-group intercept (age was
centered).
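The model described can be sketched with nlme; everything below (the data frame, and names such as `id` and `volume`) is simulated and made up to mirror the description, not taken from the post:

```r
library(nlme)
set.seed(1)
# Simulated stand-in for the real data: 60 subjects, 2 observations each,
# 3 diagnoses x 2 risk groups, age already centered
dat <- data.frame(
  id        = factor(rep(1:60, each = 2)),
  Diagnosis = factor(rep(c("A", "B", "C"), each = 40)),
  RiskGroup = factor(rep(rep(c("Lo", "Hi"), each = 20), times = 3)),
  age       = rnorm(120)
)
dat$volume <- 10 + dat$age + rnorm(120)
m <- lme(volume ~ Diagnosis + RiskGroup + RiskGroup:age:Diagnosis - 1,
         random = ~ 1 | id, data = dat)
fixef(m)  # per-diagnosis intercepts plus one age slope per risk group within diagnosis
```

Differences between the slopes can then be tested with linear contrasts on the fixed effects, e.g. via the `L` argument of `anova.lme()`.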
2006 Jul 21
1
Parameterization puzzle
Consider the following example (based on an example in Pat Altham's GLM
notes)
pyears <- scan()
18793 52407 10673 43248 5710 28612 2585 12663 1462 5317
deaths <- scan()
2 32 12 104 28 206 28 186 31 102
Smoke <- gl(2,1,10,labels=c("No","Yes"))
Age <- gl(5,2,10,labels=c("35-44","45-54","55-64","65-74","75-84"))
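The excerpt stops before the model call; a plausible completion (the glm with a person-years offset is an assumption, not quoted from the thread) runs:

```r
# Death counts by age band and smoking status, modelled as Poisson rates
# with log person-years as an offset (assumed completion of the excerpt)
pyears <- c(18793, 52407, 10673, 43248, 5710, 28612, 2585, 12663, 1462, 5317)
deaths <- c(2, 32, 12, 104, 28, 206, 28, 186, 31, 102)
Smoke  <- gl(2, 1, 10, labels = c("No", "Yes"))
Age    <- gl(5, 2, 10, labels = c("35-44", "45-54", "55-64", "65-74", "75-84"))
fit <- glm(deaths ~ Age + Smoke + offset(log(pyears)), family = poisson)
coef(summary(fit))
```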
2005 Sep 09
2
test for exponential,lognormal and gammadistribution
hello!
I don't want to test my sample data for normality, but for an exponential,
lognormal, or gamma distribution.
As I've learnt, the Anderson-Darling test in R is only for normality, and I am
not supposed to use the Kolmogorov-Smirnov test in R with parameters estimated
from the sample data -- is that true?
Can you help me with how to do this anyway?
thank you very much!
nadja
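A sketch of one standard workaround: fit by maximum likelihood, then use a parametric bootstrap of the KS statistic, since plugging estimated parameters straight into `ks.test()` invalidates its p-value. The gamma case is shown, with placeholder data:

```r
set.seed(42)
x <- rgamma(100, shape = 2, rate = 1)          # placeholder sample data
fit <- MASS::fitdistr(x, "gamma")              # ML fit of the candidate family
ks.obs <- ks.test(x, "pgamma",
                  shape = fit$estimate["shape"],
                  rate  = fit$estimate["rate"])$statistic
# Re-estimate on each bootstrap sample so the statistic's null
# distribution reflects the parameter estimation step
ks.boot <- replicate(200, {
  xb <- rgamma(length(x), fit$estimate["shape"], fit$estimate["rate"])
  fb <- MASS::fitdistr(xb, "gamma")
  ks.test(xb, "pgamma", fb$estimate["shape"], fb$estimate["rate"])$statistic
})
p.value <- mean(ks.boot >= ks.obs)             # bootstrap p-value
```

The same pattern applies to the exponential and lognormal cases by swapping the family.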
2004 Jun 30
1
linear models and collinear variables...
Hi!
I'm having some issues on both conceptual and
technical levels in selecting the right combination
of variables for this model I'm working on. The basic,
all-inclusive form looks like
lm(mic ~ B * D * S * U * V * ICU)
where mic, U, V, and ICU are numeric values and B, D,
and S are factors with about 16, 16 and 2 levels
respectively. In short, there's a ton of actual
explanatory
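With factors that large, many interaction cells are empty and coefficients become inestimable; a small sketch (made-up data and a reduced formula) of the standard diagnostics:

```r
set.seed(1)
d <- data.frame(mic = rnorm(40),
                B = factor(sample(letters[1:4], 40, TRUE)),
                S = factor(sample(c("u", "v"), 40, TRUE)),
                U = rnorm(40))
fit <- lm(mic ~ B * S * U, data = d)
alias(fit)                        # terms that are exact linear combinations of others
X <- model.matrix(fit)
c(columns = ncol(X), rank = qr(X)$rank)   # rank < columns means inestimable terms
```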
2004 Nov 03
0
Johnson-Neyman-procedure in R
...intercept. This is in many cases not what was
initially intended.
Instead, what one wants is perhaps to determine the values of the covariate
at which the groups differ. I've seen a description of the
Johnson-Neyman procedure (in Huitema (1980)), which allows one to determine
the so-called regions of nonsignificance, which sounds a lot like what I
want. The problem is I have very seldom seen it used (at least in my
field of work), even though unequal slopes are a common problem. (Searching the
R-help archives, however, didn't give me a single match.) My first
question is therefore whether the Johnson-Neyman procedure i...
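For reference, the procedure is short enough to hand-roll; a sketch on simulated two-group data (all names and numbers made up): the group difference at covariate value x0 is b_g + b_xg * x0, and the region of nonsignificance is where its t-ratio stays below the critical value.

```r
set.seed(1)
n <- 60
x <- runif(2 * n, 0, 10)
g <- rep(0:1, each = n)                       # two groups with unequal slopes
y <- 2 + 0.5 * x + g * (-3 + 0.8 * x) + rnorm(2 * n)
fit <- lm(y ~ x * g)
b <- coef(fit); V <- vcov(fit)
# Difference between groups at x0 and its standard error
xs <- seq(0, 10, by = 0.01)
d  <- b["g"] + b["x:g"] * xs
se <- sqrt(V["g", "g"] + 2 * xs * V["g", "x:g"] + xs^2 * V["x:g", "x:g"])
tcrit <- qt(0.975, df = df.residual(fit))
region <- range(xs[abs(d / se) < tcrit])      # region of nonsignificance
region
```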
2008 Aug 25
0
lme: Testing variance components
Hello,
I've been making a decent amount of use of the lme function
recently, and I am aware that I can extract the variance and correlation
components with the VarCorr(model) function. However, I am interested in
testing these components as well -- is there a package or function available
for testing them?
I understand there's some debate as to which test is best for a
given situation
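One common answer is a likelihood-ratio test of nested random-effects structures, keeping in mind that the usual chi-square reference is conservative because the null value of a variance is on the boundary. A sketch on the Orthodont data shipped with nlme:

```r
library(nlme)
# Nested models: random intercept only vs. random intercept + slope.
# Both fit by REML with identical fixed effects, so the comparison
# targets the extra variance component (and its correlation).
fm1 <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
fm2 <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
anova(fm1, fm2)
```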
2009 Nov 09
1
Models
Hi all,
I hope that there might be some statistician out there to help me for a
possible explanation for the following simple question.
Y1 <- lm(y ~ t1 + t2 + t3 + t4 + t5, data = temp)  # ordinary linear model
library(gam)
Y2 <- gam(y ~ lo(t1) + lo(t2) + lo(t3) + lo(t4) + lo(t5), data = temp)  # additive model
In the first model, t1, t2 and t3 were found to be significant.
However, in the second model (using
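Such disagreements are expected when an effect is nonlinear. A base-R sketch (simulated, made-up data) of the mirror case: a smooth, symmetric effect has a near-zero linear slope, so lm() can miss a term that a smoother would pick up.

```r
set.seed(1)
temp <- data.frame(t1 = runif(200, -2, 2))
temp$y <- temp$t1^2 + rnorm(200, sd = 0.5)       # purely nonlinear in t1
lin <- lm(y ~ t1, data = temp)
summary(lin)$coefficients["t1", "Pr(>|t|)"]      # linear slope carries little signal
```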
2009 Dec 17
0
nonlinear (especially logistic) regression accounting for spatially correlated errors
Hello,
Sorry to be a bit longwinded, but I've struggled quite a bit with the following over the last few days. I've read all entries related to spatial autocorrelation in R help and haven't found what I'm after. If it's okay, I'm going to first describe my general understanding of the process by which a mixed model can account for correlated errors. If possible, please
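The excerpt cuts off before the details, but for reference, the linear analogue of the setup described is short; a sketch of a spatial correlation structure on the residuals in nlme (coordinates and names made up):

```r
library(nlme)
set.seed(1)
d <- data.frame(xc = runif(100), yc = runif(100))
d$resp <- rnorm(100)
# gls with an exponential spatial correlation over the coordinates;
# the range parameter is estimated from the data
m <- gls(resp ~ 1, data = d, correlation = corExp(form = ~ xc + yc))
m$modelStruct$corStruct
```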
2012 Feb 29
2
puzzling results from logistic regression
Hi all,
As you can see from below, the result is strange...
I would have imagined that the bb result should be much higher, close to 1 --
is there any way to improve the fit?
Any other classification methods?
Thank you!
data <- data.frame(y = rep(c(0, 1), times = 100), x = 1:200)
aa <- glm(y ~ x, data = data, family = binomial(link = "logit"))
newdata <- data.frame(x = 6, y = 100)
bb <- predict(aa, newdata = newdata, type = "response")
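The result is actually expected rather than strange: y alternates 0/1 at every x, so x carries essentially no information and the fitted probability must hover near 0.5. A sketch confirming this:

```r
dat <- data.frame(y = rep(c(0, 1), times = 100), x = 1:200)
fit <- glm(y ~ x, data = dat, family = binomial)
# y is 0 at every odd x and 1 at every even x: no trend in x exists,
# so the predicted probability at any x stays close to 0.5
p <- predict(fit, newdata = data.frame(x = 6), type = "response")
p
```

No classifier can do better than chance on a response that is pure alternation against the predictor.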
2006 Jul 21
0
[Fwd: Re: Parameterization puzzle]
Bother! This cold has made me accident-prone. I meant to hit Reply-all.
Clarification below.
-------- Original Message --------
Subject: Re: [R] Parameterization puzzle
Date: Fri, 21 Jul 2006 19:10:03 +1200
From: Murray Jorgensen <maj at waikato.ac.nz>
To: Prof Brian Ripley <ripley at stats.ox.ac.uk>
References: <44C063E5.3020703 at waikato.ac.nz>
2008 Jan 04
1
GLMMs fitted with lmer (R) & glimmix (SAS)
I'm fitting generalized linear mixed models using several fixed effects (main effects and a couple of interactions) and a grouping factor (site) to explain the variation in a dichotomous response variable (family=binomial). I wanted to compare the output I obtained using PROC GLIMMIX in SAS with that obtained using lmer in R (version 2.6.1 on Windows). When using lmer I'm specifying
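In current lme4 the binomial call is glmer (older versions used lmer with a family argument); a minimal sketch of the kind of model described, with made-up data and names:

```r
library(lme4)
set.seed(1)
d <- data.frame(site = factor(rep(1:10, each = 20)),
                f1 = rnorm(200), f2 = rnorm(200))
d$resp <- rbinom(200, 1, plogis(0.5 * d$f1))
# Fixed main effects plus one interaction; random intercept per site
m <- glmer(resp ~ f1 * f2 + (1 | site), family = binomial, data = d)
fixef(m)
```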
2010 Aug 10
1
one (small) sample wilcox.test confidence intervals
Dear R people,
I notice that the confidence intervals of a very small sample (e.g. n=6) derived from the one-sample wilcox.test are just the maximum and minimum values of the sample. This only occurs when the required confidence level is higher than 0.93. Example:
> sample <- c(1.22, 0.89, 1.14, 0.98, 1.37, 1.06)
> summary(sample)
Min. 1st Qu. Median Mean 3rd Qu. Max.
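The observation reproduces directly: with n = 6 the exact interval is built from only 21 Walsh averages, so above roughly the 0.94 level the achievable endpoints degenerate to the sample extremes.

```r
x <- c(1.22, 0.89, 1.14, 0.98, 1.37, 1.06)
ci <- wilcox.test(x, conf.int = TRUE, conf.level = 0.95)$conf.int
c(lower = ci[1], upper = ci[2], min = min(x), max = max(x))
```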
2011 Jan 05
0
Fwd: Re: Simulation - Natural Selection
-------- Original Message --------
Subject: Re: [R] Simulation - Natural Selection
Date: Wed, 05 Jan 2011 17:24:05 +0000
From: Ben Ward <benjamin.ward@bathspa.org>
To: Bert Gunter <gunter.berton@gene.com>
CC: Mike Marchywka <marchywka@hotmail.com>
On 05/01/2011 17:08, Bert Gunter wrote:
> Couple of brief comments inline below. -- Bert
>
> On Wed, Jan 5, 2011 at
2011 Aug 31
3
Fitting my data to a Weibull model
Hi guys,
I have a data-set that fits well into a Weibull model y = a-b*exp(-c*x^d).
I want to estimate the parameters of the coefficients a, b, c and d, given x
and y.
Can you guys help me?
Just as an example, I fit the data
y <- c(1,2,3,4,10,20)
and
x <- c(1,7,14,25,29,30)
According to the model above, using the software CurveExpert, I got
estimates of a (2.95), b (2.90), c
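A sketch of the same fit with nls(); the starting values are guesses, and with four parameters and only six points convergence is fragile, hence the try():

```r
x <- c(1, 7, 14, 25, 29, 30)
y <- c(1, 2, 3, 4, 10, 20)
fit <- try(nls(y ~ a - b * exp(-c * x^d),
               start = list(a = 30, b = 30, c = 0.001, d = 2)),
           silent = TRUE)
if (!inherits(fit, "try-error")) coef(fit) else "nls did not converge from these starts"
```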
2011 Jan 05
2
Simulation - Natural Selection
Hi,
I've been modelling some data over the past few days from my work,
repeatedly challenging microbes with a certain concentration of cleaner
until the concentration required to inhibit or kill them increases, at
which point they are challenged with a slightly higher concentration each
day. I'm doing this for two different cleaners, and I'm collecting the
required concentration to
2011 Nov 19
3
Data analysis: normal approximation for binomial
Dear R experts,
I am trying to analyze data from an article, the data looks like this
Patient Age Sex Aura preCSM preFreq preIntensity postFreq postIntensity postOutcome
1 47 F A 4 6 9 2 8 SD
2 40 F A/N 5 8 9 0 0 E
3 49 M N 5 8 9 2 6 SD
4 40 F A 5 3 10 0 0 E
5 42 F N 5 4 9 0 0 E
6 35 F N 5 8 9 12 7 NR
7 38 F A 5 NA 10 2 9 SD
8 44 M A 4 4 10 0 0 E
9 47 M A 4 5 8 2 7 SD
10 53 F A 5 3 10 0 0 E
11
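On the subject-line question itself, a sketch comparing the exact binomial test with its normal approximation for a made-up summary count (say 9 of 10 patients improving; k and n here are hypothetical, not taken from the table):

```r
n <- 10; k <- 9; p0 <- 0.5                 # hypothetical counts, not from the table
p.exact <- binom.test(k, n, p0)$p.value    # exact two-sided binomial test
z <- (k - n * p0) / sqrt(n * p0 * (1 - p0))
p.norm <- 2 * pnorm(-abs(z))               # normal approximation, no continuity correction
c(exact = p.exact, normal.approx = p.norm)
```

At n this small the approximation differs noticeably from the exact test, which is why the choice matters here.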
2003 Apr 17
18
Validation of R
Hi All
I am really very interested in starting to use R within our company. I
particularly like the open source nature of the product. My company is a
medical research company which is part of the University of London.
We conduct contract virology research for large pharma companies. My
question is how do we validate this software? I wonder if anyone else
has had the problem and might be able to