search for: nonsignificant

Displaying 17 results from an estimated 17 matches for "nonsignificant".

2007 Apr 09
1
testing differences between slope differences with lme
...del which gives slope and intercept terms for 6 groups (diagnosis (3 levels) by risk group (2 levels)). The fixed part of the model is -- brain volume ~ Diagnosis + Risk Group + (Risk Group * age : Diagnosis) - 1 -- thus allowing risk-group age/slope terms to vary within diagnosis and omitting a nonsignificant diagnosis-by-risk-group intercept interaction (age was centered). I am interested in whether differences in the risk groups' developmental trajectories are different for different diagnoses. The last three (of 10) fixed-effect estimates are estimates for the age/slope differences between risk...
2006 Jul 21
1
Parameterization puzzle
...1916 0.17296 3.580 0.000344 *** Age.L:SmokeYes -1.31234 0.49267 -2.664 0.007729 ** Age.Q:SmokeYes 0.39043 0.43008 0.908 0.363976 Age.C:SmokeYes -0.29593 0.33309 -0.888 0.374298 Age^4:SmokeYes -0.03682 0.24432 -0.151 0.880218 inspires me to fit the second model that omits the nonsignificant terms, however this produces the summary Estimate Std. Error z value Pr(>|z|) (Intercept) -5.8368 0.1213 -48.103 < 2e-16 *** poly(age, 2)1 3.9483 0.1755 22.497 < 2e-16 *** poly(age, 2)2 -1.0460 0.1448 -7.223 5.08e-13 **...
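The behaviour in the two summaries can be illustrated with a small sketch on made-up data (not the thread's data): the columns produced by poly() are mutually orthogonal and orthogonal to the intercept, so in an ordinary linear model dropping the higher-order terms leaves the retained coefficients unchanged. For a GLM, as in the thread, the agreement is only approximate.

```r
## Sketch with simulated data: orthogonal polynomial columns from poly()
## mean the retained OLS coefficients survive dropping higher-order terms.
set.seed(1)
age <- rep(20:60, each = 5)
y <- 1 + 0.05 * age - 0.002 * age^2 + rnorm(length(age), sd = 0.1)
full    <- lm(y ~ poly(age, 4))
reduced <- lm(y ~ poly(age, 2))
## Intercept and the two shared polynomial coefficients agree exactly:
all.equal(coef(full)[1:3], coef(reduced), check.attributes = FALSE)
```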
2005 Sep 09
2
test for exponential, lognormal and gamma distribution
Hello! I don't want to test my sample data for normality, but for an exponential, lognormal, or gamma distribution. As I've learnt, the Anderson-Darling test in R is only for normality, and I am not supposed to use the Kolmogorov-Smirnov test in R with parameters estimated from the sample data. Is that true? Can you help me with how to do this anyway? Thank you very much! Nadja
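One standard answer is a parametric bootstrap: re-estimate the parameters in every bootstrap sample so the null distribution of the Kolmogorov-Smirnov statistic accounts for the estimation step. A sketch for the exponential case, with simulated placeholder data:

```r
## Parametric-bootstrap KS goodness-of-fit test for an exponential model.
## x is placeholder data; the same pattern works for lognormal or gamma
## (swap in the corresponding MLEs and plnorm/pgamma).
set.seed(42)
x <- rexp(100, rate = 2)
rate.hat <- 1 / mean(x)                        # exponential MLE
d.obs <- ks.test(x, "pexp", rate = rate.hat)$statistic
d.boot <- replicate(500, {
  xb <- rexp(length(x), rate = rate.hat)       # simulate under the fitted model
  ks.test(xb, "pexp", rate = 1 / mean(xb))$statistic
})
p.value <- mean(d.boot >= d.obs)               # bootstrap p-value
```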
2004 Jun 30
1
linear models and collinear variables...
Hi! I'm having some issues on both conceptual and technical levels for selecting the right combination of variables for this model I'm working on. The basic, all inclusive form looks like lm(mic ~ B * D * S * U * V * ICU) Where mic, U, V, and ICU are numeric values and B D and S are factors with about 16, 16 and 2 levels respectively. In short, there's a ton of actual explanatory
2004 Nov 03
0
Johnson-Neyman-procedure in R
Hello, I was wondering if anyone could please help me with some simple questions regarding ANCOVA and the assumption of homogeneity of slopes. The standard design of ANCOVA assumes the homogeneity of regression coefficients of the different groups. This assumption can be tested using the factor × covariate interaction, which should subsequently be removed. However if this assumption is not
2008 Aug 25
0
lme: Testing variance components
...good arguments that it is perhaps ill-advised to test (variance is something to be predicted; if there is literally no variance, then the estimate perfectly predicts the outcome and the scientific question is basically answered; no statistics necessary), but there are some circumstances in which a nonsignificant variance component might be useful: For example, when deciding (in an HLM) whether it is necessary or at all useful to include a grouping factor. Thanks in advance. Adam D. I. Kramer Ph.D. Candidate, Social and Personality Psychology University of Oregon
2009 Nov 09
1
Models
Hi all, I hope that there might be some statistician out there to help me with a possible explanation for the following simple question. Y1 <- lm(y ~ t1 + t2 + t3 + t4 + t5, data=temp) # ordinary linear model library(gam) Y2 <- gam(y ~ lo(t1) + lo(t2) + lo(t3) + lo(t4) + lo(t5), data=temp) # additive model In the first model t1, t2 and t3 are found to be significant. However, in the second model (using
2009 Dec 17
0
nonlinear (especially logistic) regression accounting for spatially correlated errors
...lf. Therefore, it seems that the effect of the explanatory variable is diluted by this approach. For instance, if you had a 'true' model where temperature was only a function of elevation but elevation was strongly autocorrelated, the approach in the link would likely leave elevation as a nonsignificant part of the model. Versus, if the correlation structure was assigned to model error this would not happen. Is this true or am I speaking of 6 of one and half dozen of the other (that in practice it makes no difference to results)? If the above example is not an example of modeling the correlatio...
2012 Feb 29
2
puzzling results from logistic regression
Hi all, As you can see from below, the result is strange... I would have imagined that the bb result should be much higher and close to 1. Any way to improve the fit? Any other classification methods? Thank you! data=data.frame(y=rep(c(0, 1), times=100), x=1:200) aa=glm(y~x, data=data, family=binomial(link="logit")) newdata=data.frame(x=6, y=100) bb=predict(aa, newdata=newdata,
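The code in the excerpt is cut off; a completed sketch (the `type = "response"` argument is an assumption, and the unused `y = 100` column is dropped) shows why the prediction is nowhere near 1: y alternates 0/1 regardless of any trend in x, so the fitted slope is essentially zero and every predicted probability sits near 0.5.

```r
## y alternates 0/1 while x increases, so x explains almost nothing about y;
## the predicted probability at any x is therefore close to 0.5.
data <- data.frame(y = rep(c(0, 1), times = 100), x = 1:200)
aa <- glm(y ~ x, data = data, family = binomial(link = "logit"))
bb <- predict(aa, newdata = data.frame(x = 6), type = "response")
bb   # close to 0.5, not close to 1
```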
2006 Jul 21
0
[Fwd: Re: Parameterization puzzle]
...mokeYes -1.31234 0.49267 -2.664 0.007729 ** >> Age.Q:SmokeYes 0.39043 0.43008 0.908 0.363976 >> Age.C:SmokeYes -0.29593 0.33309 -0.888 0.374298 >> Age^4:SmokeYes -0.03682 0.24432 -0.151 0.880218 >> >> inspires me to fit the second model that omits the nonsignificant terms, >> however this produces the summary >> >> Estimate Std. Error z value Pr(>|z|) >> (Intercept) -5.8368 0.1213 -48.103 < 2e-16 *** >> poly(age, 2)1 3.9483 0.1755 22.497 < 2e-16 *** >> poly(age...
2008 Jan 04
1
GLMMs fitted with lmer (R) & glimmix (SAS)
...value for each of the categories, whereas R gives a single estimate for the interaction term. But, from the main effects it is possible to see very similar estimates obtained with either program. I am very interested in the interaction term SEX*ELI, and this term comes up as significant in SAS and nonsignificant in R. Why could this be? It is very worrisome to think of reporting a significant result that is not validated when doing a similar analysis using a different program! Can somebody help me interpret these differences? Below is a summary of the outputs obtained with R and SAS. Thanks, Andrea Previ...
2010 Aug 10
1
one (small) sample wilcox.test confidence intervals
Dear R people, I notice that the confidence intervals of a very small sample (e.g. n=6) derived from the one-sample wilcox.test are just the maximum and minimum values of the sample. This only occurs when the required confidence level is higher than 0.93. Example: > sample <- c(1.22, 0.89, 1.14, 0.98, 1.37, 1.06) > summary(sample) Min. 1st Qu. Median Mean 3rd Qu. Max.
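This is expected behaviour: with n = 6 the exact signed-rank interval is built from the 21 Walsh averages, and the achievable confidence levels are discrete. The widest proper interval inside the extremes achieves 1 - 4/2^6 = 0.9375, so any requested level above that returns the extreme Walsh averages, which are the sample minimum and maximum. A sketch with the sample from the post:

```r
## With n = 6, requesting conf.level above 0.9375 pushes the exact interval
## out to the extreme Walsh averages, i.e. the sample minimum and maximum.
x <- c(1.22, 0.89, 1.14, 0.98, 1.37, 1.06)
res <- wilcox.test(x, conf.int = TRUE, conf.level = 0.95)
res$conf.int   # 0.89 1.37 -- the sample minimum and maximum
```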
2011 Jan 05
0
Fwd: Re: Simulation - Natural Selection
...eriment. Whilst far from perfect, I've tried to do the best I can to assess the rise in resistance without going into genetics, as that isn't possible (although it may be at the next institution, where I've applied for an MSc). With my hypothesis (I mentioned it below), I was of the frame of mind that a nonsignificant p-value on the cleaner variable (for now - the experiment is far from over) indicated a lack of evidence for rejecting the null. And so at the minute, it looks like the type of cleaner makes no difference. >>> if you have that then all your other questions are probably easy to >>>...
2011 Aug 31
3
Fitting my data to a Weibull model
Hi guys, I have a data set that fits well to a Weibull model y = a - b*exp(-c*x^d). I want to estimate the coefficients a, b, c and d, given x and y. Can you guys help me? Just as an example, I fit the data y <- c(1,2,3,4,10,20) and x <- c(1,7,14,25,29,30). Using the model above and the software CurveExpert, I got the estimates of a (2.95), b (2.90), c
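In base R the usual tool for this is nls(). A sketch on simulated data follows; the starting values are assumptions chosen near the data-generating parameters, and with only six real points a four-parameter model like this is fragile and very sensitive to the starts.

```r
## Fitting y = a - b*exp(-c*x^d) with nls() on simulated data; the start
## list is an assumption (values near the generating parameters).
set.seed(7)
x <- seq(1, 30, length.out = 60)
y <- 20 - 19 * exp(-0.001 * x^2.5) + rnorm(length(x), sd = 0.3)
fit <- nls(y ~ a - b * exp(-c * x^d),
           start = list(a = 20, b = 19, c = 0.001, d = 2.5))
coef(fit)
```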
2011 Jan 05
2
Simulation - Natrual Selection
Hi, I've been modelling some data from my work over the past few days: repeatedly challenging microbes with a certain concentration of cleaner until the concentration required to inhibit or kill them increases, at which point they are challenged with a slightly higher concentration each day. I'm doing this for two different cleaners and I'm collecting the required concentration to
2011 Nov 19
3
Data analysis: normal approximation for binomial
Dear R experts, I am trying to analyze data from an article. The data looks like this:

Patient Age Sex Aura preCSM preFreq preIntensity postFreq postIntensity postOutcome
1  47 F A   4  6  9  2 8 SD
2  40 F A/N 5  8  9  0 0 E
3  49 M N   5  8  9  2 6 SD
4  40 F A   5  3 10  0 0 E
5  42 F N   5  4  9  0 0 E
6  35 F N   5  8  9 12 7 NR
7  38 F A   5 NA 10  2 9 SD
8  44 M A   4  4 10  0 0 E
9  47 M A   4  5  8  2 7 SD
10 53 F A   5  3 10  0 0 E
11
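On the question in the subject line, the normal approximation can be compared directly against the exact binomial test; the counts below are placeholders, since the excerpt's table is truncated.

```r
## Exact binomial test vs. the normal approximation (prop.test) for a
## placeholder outcome count of 7 successes in 10 patients.
exact  <- binom.test(7, 10, p = 0.5)$p.value
approx <- prop.test(7, 10, p = 0.5, correct = TRUE)$p.value
c(exact = exact, approx = approx)  # the two can differ noticeably at n = 10
```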
2003 Apr 17
18
Validation of R
Hi All I am really very interested in starting to use R within our company. I particularly like the open source nature of the product. My company is a medical research company which is part of the University of London. We conduct contract virology research for large pharma companies. My question is how do we validate this software? I wonder if anyone else has had the problem and might be able to