
Displaying 17 results from an estimated 17 matches for "eiko".

2012 Oct 07
3
Robust regression for ordered data
I have two regressions to perform - one with a metric DV (-3 to 3), the other with an ordered DV (0,1,2,3). Neither normality nor homoscedasticity holds. I have two questions: (1) Some sources say robust regression takes care of both the lack of normality and heteroscedasticity, while others say only of normality. Which is true? (2) Are there ways of using robust
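A minimal sketch of both options, assuming a data frame dat with hypothetical columns dv_metric (-3 to 3), dv_ordered (0-3) and predictors x1, x2; rlm() mainly protects against outliers and non-normal errors, while heteroscedasticity-consistent standard errors address unequal variances:

library(MASS)
library(sandwich); library(lmtest)

# Robust (M-estimation) regression for the metric outcome
fit_metric <- rlm(dv_metric ~ x1 + x2, data = dat)

# Ordinary lm() with heteroscedasticity-consistent (sandwich) standard errors
coeftest(lm(dv_metric ~ x1 + x2, data = dat), vcov. = vcovHC)

# Proportional-odds (ordered logistic) model for the 0-3 outcome
fit_ord <- polr(factor(dv_ordered, ordered = TRUE) ~ x1 + x2, data = dat, Hess = TRUE)
summary(fit_ord)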
2012 Jul 05
4
Exclude missing values on only 1 variable
Hello, I have many hundred variables in my longitudinal dataset and lots of missing values. In order to plot the data I need to remove the missing values. If I do > data <- na.omit(data) that will reduce my dataset to 2% of its original size ;) So I only need to listwise delete missing values on 3 variables (the ones I am plotting). data$variable1 <- na.omit(data$variable1) does not work. Thank you
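A minimal sketch of listwise deletion on just the three plotting variables (the names variable1-variable3 are hypothetical); rows with NAs only in other columns are kept:

keep <- complete.cases(data[, c("variable1", "variable2", "variable3")])
plot_data <- data[keep, ]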
2012 Apr 15
2
xyplot type="l"
Probably a stupidly simple question, but I wouldn't know how to google it: xyplot(neuro ~ time | UserID, data=data_sub) creates a proper plot. However, if I add type = "l", the lines do not go first through time1, then time2, then time3, etc.; in about 50% of all subjects the lines go through the points in a seemingly random order (e.g. from 1 to 4 to 2 to 5 to 3). The lines always start at time
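A minimal sketch of the usual fix: lattice draws type = "l" lines in the order the rows appear in the data, so sort by subject and time first (column names taken from the snippet):

library(lattice)
data_sub <- data_sub[order(data_sub$UserID, data_sub$time), ]
xyplot(neuro ~ time | UserID, data = data_sub, type = "l")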
2012 Oct 14
2
Poisson Regression: questions about tests of assumptions
I would like to test in R which regression model fits my data best. My dependent variable is a count and has a lot of zeros, and I need some help determining which model and family to use (Poisson, quasi-Poisson, or zero-inflated Poisson regression) and how to test the assumptions. 1) Poisson regression: as far as I understand, the strong assumption is that the dependent variable's mean = variance.
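A minimal sketch of a rough mean-variance (overdispersion) check and the alternative families mentioned above, with hypothetical outcome y and predictors x1, x2:

fit_pois <- glm(y ~ x1 + x2, family = poisson, data = dat)
deviance(fit_pois) / df.residual(fit_pois)    # values well above 1 suggest overdispersion

fit_qp <- glm(y ~ x1 + x2, family = quasipoisson, data = dat)
library(pscl)
fit_zip <- zeroinfl(y ~ x1 + x2, data = dat)  # zero-inflated Poisson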
2012 Apr 12
0
Multivariate multilevel mixed effects model: interaction
...ue of 11.107, meaning they are still significant because they are still all above t~2? 2) The strongest effect is on PHQ6, which is significantly higher than the effect from Neuro on PHQ1? 3) The weakest effect is on PHQ9, which is significantly lower than the effect from Neuro on PHQ1? Thank you Eiko
2012 Apr 23
1
save model summary
Hello, I'm working with RStudio, which does not display enough lines in the console for me to read the summary of my (due to the covariance matrix rather long) model. There seems to be no way around this, so I guess I need to export the summary into a file in order to see it ... I'm new to R, and "R save model summary" in google doesn't help, neither does "help(save)"
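A minimal sketch of two standard ways to write a model summary to a text file (the object name model is assumed):

capture.output(summary(model), file = "model_summary.txt")

# or, equivalently
sink("model_summary.txt")
print(summary(model))
sink()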
2012 Apr 24
1
Number of lines in analysis after removed missings
I have a dataset with plenty of variables and lots of missing data. As far as I understand, R automatically removes subjects with missing values. I'm trying to fit a mixed effects model, adding covariate by covariate. I suspect that my sample gets smaller and smaller each time I add a covariate, because more and more lines get deleted. Is there a way of displaying how many subjects are
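A minimal sketch, assuming the model is fitted with lme4's lmer(); both calls report what was actually used after rows with missing values were dropped:

nobs(fit)         # number of rows used in the fit
lme4::ngrps(fit)  # number of grouping-factor levels (e.g. subjects) retained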
2012 Oct 13
1
WLS regression weights
Hello. I am trying to follow a recommendation for dealing with a dependent variable in a linear regression. I read that, due to the positive trend in my dependent variable's residual-versus-mean function, I should 1) run a linear regression to estimate the standard deviations from this trend, and 2) run a second linear regression using 1 / variance as the weight. These might be terribly stupid
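A minimal sketch of the two-step procedure described above (variable names assumed, and assuming no rows are dropped for missingness between the two fits):

fit1 <- lm(y ~ x1 + x2, data = dat)               # step 1: ordinary fit
aux  <- lm(abs(resid(fit1)) ~ fitted(fit1))       # model the spread as a function of the mean
w    <- 1 / fitted(aux)^2                         # weight = 1 / estimated variance
fit2 <- lm(y ~ x1 + x2, data = dat, weights = w)  # step 2: weighted fit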
2012 Nov 09
1
Remove missings (quick question)
A colleague wrote the following syntax for me:

D = read.csv("x.csv")
## Convert -999 to NA
for (k in 1:dim(D)[2]) {
  I = which(D[,k]==-999)
  if (length(I) > 0) { D[I,k] = NA }
}

The dataset has many missing values. I am running several regressions on this dataset, and want to ensure every regression has the same subjects. Thus I want to drop subjects listwise for
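A minimal sketch of a vectorised recode plus listwise deletion restricted to the analysis variables (the set model_vars is hypothetical), so every regression sees the same subjects; this assumes the -999 codes sit in numeric columns:

D[D == -999] <- NA
model_vars <- c("y", "x1", "x2")
D_complete <- D[complete.cases(D[, model_vars]), ]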
2012 Nov 19
1
Error in `[.data.frame`... undefined columns selected
When I run this script on 9 variables, it works without problems:

Z <- data[,c("s1_1234_m","s2_1234_m","s3_1234_m","s4_1234_m","s5_1234_m","s6_1234_m","s7_1234_m","s8_1234_m","s9_1234_m")]

However, when I run the script on 9 different variables, it does not work: Z <-
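A minimal sketch of a quick diagnosis: a single misspelled or absent name in the second set produces exactly this error, so list the requested names that are not in the data before subsetting:

wanted <- c("s1_1234_m", "s2_1234_m", "s3_1234_m")  # replace with the second set of names
setdiff(wanted, names(data))                        # anything printed here does not exist
Z <- data[, wanted]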
2012 Jun 28
1
Simple mean trajectory (ordinal variable)
Hello. I have 5 measurement points, my dependent variable is ordinal (0 - 3), and I want to visualize my data. I'm pretty new to R. I want to find out whether people with different baseline covariates have different trajectories, so I want a plot with the mean trajectory of my dependent variable (the individual points do not make a lot of sense for ordinal data) at each measurement
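A minimal sketch, assuming long-format data with hypothetical columns time, group (a baseline covariate) and the 0-3 outcome dv; the mean of dv is computed per time point and group and plotted as one trajectory per group:

means <- aggregate(dv ~ time + group, data = dat, FUN = mean)
library(lattice)
xyplot(dv ~ time, groups = group, data = means, type = "b", auto.key = TRUE)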
2012 May 06
2
Interaction plot between 2 continuous variables
I have two very strong fixed effects in a LMM (both continuous variables). model <- lmer( y ~ time + x1+x2 + (time|subject)) Once I fit an interaction of these variables, both main effects disappear and I get a strong interaction effect. model <- lmer( y ~ time + x1*x2 + (time|subject)) I would like to plot this effect now, but have not been able to do so, reading through ggplot2 and
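A minimal sketch of one common way to display such an interaction: predict from the fitted lmer model over a grid of x1 values at a few fixed x2 values (the quantiles are an assumption), ignoring the random effects, and draw one line per x2 value; re.form = NA requires a reasonably recent lme4:

grid <- expand.grid(time = 0,
                    x1   = seq(min(dat$x1), max(dat$x1), length.out = 50),
                    x2   = quantile(dat$x2, c(.1, .5, .9)))
grid$pred <- predict(model, newdata = grid, re.form = NA)  # population-level prediction
library(lattice)
xyplot(pred ~ x1, groups = x2, data = grid, type = "l", auto.key = TRUE)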
2012 Apr 06
2
Multivariate Multilevel Model: is R the right software for this problem
Hello, I've been trying to answer a problem I have had for some months now and came across multivariate multilevel modeling. I know MPLUS and SPSS quite well but these programs could not solve this specific difficulty. My problem: 9 correlated dependent variables (medical symptoms; categorical, 0-3), 5 measurement points, 10 time-varying covariates (life events; dichotomous, 0-1), N ~ 900.
2012 Oct 22
1
glm.nb - theta, dispersion, and errors
I am running 9 negative binomial regressions with count data. The nine models use 9 different dependent variables - items of a clinical screening instrument - and the same set of 5 predictors. The goal is to find out whether these predictors have differential effects on the items. For various reasons, one being that I want to avoid overfitting models, I need to employ identical types of
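A minimal sketch of fitting the same negative binomial specification to each item (item and predictor names assumed) and collecting theta with its standard error:

library(MASS)
items <- paste0("item", 1:9)
fits  <- lapply(items, function(dv)
  glm.nb(reformulate(c("x1", "x2", "x3", "x4", "x5"), response = dv), data = dat))
sapply(fits, function(f) c(theta = f$theta, SE.theta = f$SE.theta))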
2012 Apr 26
0
Correlated random effects: comparison unconditional vs. conditional GLMMs
In a GLMM, one compares the conditional model including covariates with the unconditional model to see whether the conditional model fits the data better. (1) For my unconditional model, a different random effects term fits better (independent random effects) than for my conditional model (correlated random effects). Is this very uncommon, and how can this be explained? Can I compare these models
2012 Oct 21
0
R^2 in Poisson via pR2() function: skeptical about R^2 results
Hello. I am running 9 Poisson regressions with 5 predictors each, using glm with family=poisson. The Poisson distribution fits better than linear regression on fit indices, and also for theoretical reasons (e.g. the dependent variables are counts, and the distribution is highly positively skewed). I want to determine pseudo R^2 now. However, using pR2() of the pscl package offers drastically
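A minimal sketch of pR2() applied to one of the Poisson fits (predictor names assumed); it returns several pseudo R^2 measures (McFadden, ML/Cox-Snell, Cragg-Uhler/Nagelkerke), which are defined differently and can differ substantially from one another:

library(pscl)
fit <- glm(y ~ x1 + x2 + x3 + x4 + x5, family = poisson, data = dat)
pR2(fit)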
2012 Oct 23
1
Testing proportional odds assumption in R
I want to test whether the proportional odds assumption for an ordered regression is met. The UCLA website points out that there is no mathematical way to test the proportional odds assumption (http://www.ats.ucla.edu/stat//R/dae/ologit.htm) and uses graphical inspection instead ("We were unable to locate a facility in R to perform any of the tests commonly used to test the parallel slopes
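A minimal sketch of an informal check (not a formal test such as the Brant test): compare the proportional-odds fit against an unconstrained multinomial fit with a likelihood-ratio statistic (variable names assumed):

library(MASS); library(nnet)
fit_po <- polr(factor(y, ordered = TRUE) ~ x1 + x2, data = dat, Hess = TRUE)
fit_mn <- multinom(factor(y) ~ x1 + x2, data = dat, trace = FALSE)
lr <- as.numeric(2 * (logLik(fit_mn) - logLik(fit_po)))
df <- attr(logLik(fit_mn), "df") - attr(logLik(fit_po), "df")
pchisq(lr, df = df, lower.tail = FALSE)   # small p-values cast doubt on proportional odds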