similar to: manipulating multiply imputed data sets

Displaying 20 results from an estimated 200 matches similar to: "manipulating multiply imputed data sets"

2008 May 30
0
imputationlist, update, and recode
I'm stumbling my way through manipulating data in multiply imputed datasets, and have run into a problem translating code I used to run on my pre-imputed dataset to multiple datasets. The imputation runs just fine, as does the reading of the mi data sets into an imputationList. I run into trouble, though, when I try to construct a scale across all the data sets. Is there a simple way to do
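A minimal sketch of the usual pattern here, assuming the completed data sets are already in an imputationList called imps and that the scale is the mean of three hypothetical items q1-q3 (names invented for illustration); mitools' update() method applies the same transformation to every data set at once:

library(mitools)
# add the derived scale to each imputed data set in one step;
# q1, q2, q3 are placeholder item names, not from the post
imps <- update(imps, scale = (q1 + q2 + q3) / 3)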
2009 Oct 17
2
repeating values in levels()
Can someone help me understand these results? > levels(as.factor(miset1$facts_convict)) [1] "1" "1" "2" "3" "4" "5" "6" converting to numeric and back doesn't seem to help: > levels(as.factor(as.numeric(miset1$facts_convict))) [1] "1" "1" "2" "3" "4" "5"
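One common cause, sketched below: imputation can leave tiny floating-point deviations in an originally integer-coded variable, so two values that print identically are in fact distinct, and converting to numeric changes nothing because the values already are numeric. Printing at full precision exposes the near-duplicates, and rounding before factoring collapses them:

# inspect the raw values at full precision; facts_convict is the
# variable from the post
print(sort(unique(miset1$facts_convict)), digits = 22)
# rounding collapses near-duplicates such as 1 vs 1.0000000000000002
levels(as.factor(round(miset1$facts_convict, 8)))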
2007 Jun 07
1
MITOOLS: Error in eval(expr, envir, enclos) : invalid 'envir' argument
R-users & helpers: I am using Amelia, mitools and cmprsk to fit cumulative incidence curves to multiply imputed datasets. The error message that I get "Error in eval(expr, envir, enclos) : invalid 'envir' argument" occurs when I try to fit models to the 50 imputed datasets using the "with.imputationList" function of mitools. The problem seems to occur
2010 Oct 26
3
stripping #s in a text file prior to reading into table or dataframe
I'm importing a lot of text tables of data (from Latent Gold) that includes hashes in some of the column names ("Cluster#1", "Cluster#2", etc.). Is there an easy way to strip the offending hashes out before pushing the text into a table or data frame? I thought I'd use gsub, e.g., but can't figure out how to read in a text file without reading it into a table or
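A minimal sketch of one way to do this, assuming a plain whitespace-delimited file (the file name is invented): strip the hashes from the raw lines, then hand the cleaned text to read.table(). Alternatively, read.table(..., comment.char = "") stops '#' from being treated as a comment character and lets check.names sanitize the headers.

# "latentgold.txt" is a placeholder file name
raw <- readLines("latentgold.txt")
clean <- gsub("#", "", raw, fixed = TRUE)       # drop the offending hashes
dat <- read.table(text = clean, header = TRUE)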
2009 Jul 03
2
mapping states with colors
Hi folks, I'm just learning how to use maps. As an initial foray, I'm mapping the states that have "duty to retreat" (blue) and "stand your ground" (red) self-defense standards. Here is my extremely naive script: dtr <- c('alabama', 'arizona', 'connecticut', 'delaware', 'district of columbia', 'hawaii',
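A sketch of the mapping step, assuming a second vector syg holds the "stand your ground" states; note that regions must match the map("state") spellings exactly (hence 'connecticut' and 'district of columbia' above):

library(maps)
map("state")                                              # outline of all states
map("state", regions = dtr, col = "blue", fill = TRUE, add = TRUE)
map("state", regions = syg, col = "red",  fill = TRUE, add = TRUE)  # syg assumed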
2009 Aug 26
2
simple graph question: manipulating variable names
This is a simple problem that has stumped me: I'm trying to loop through a few dozen variable names in graphs. I've tried various approaches like this: attach(mydata) ivs <- c("oneiv", "anotheriv", "yetanotheriv") dvs <- c("onedv", "anotherdv", "yetanotherdv") for (iv in ivs) { for (dv in dvs) { graphname <- paste(iv,
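One idiomatic way to finish this, sketched under the assumption that mydata holds all the listed columns: index the data frame by name with [[ ]] instead of attach() and get(), and build the file name with paste0().

# loop over variable names, plotting each iv against each dv;
# writes one PNG per pair (the file-naming scheme is my invention)
for (iv in ivs) {
  for (dv in dvs) {
    graphname <- paste(iv, dv, sep = "_")
    png(paste0(graphname, ".png"))
    plot(mydata[[iv]], mydata[[dv]], main = graphname, xlab = iv, ylab = dv)
    dev.off()
  }
}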
2008 Jun 22
1
two newbie questions
# I've tried to make this easy to paste into R, though it's probably so simple you won't need to. # I have some data (there are many more variables, but this is a reasonable approximation of it) # here's a fabricated data frame that is similar in form to mine: my.df <- data.frame(replicate(10, round(rnorm(100, mean=3.5, sd=1)))) var.list <- c("dv1",
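The question is cut off, so the sketch below only guesses at the intent: name the simulated columns, then compute a composite across the dv* variables by name.

my.df <- data.frame(replicate(10, round(rnorm(100, mean = 3.5, sd = 1))))
names(my.df) <- c(paste0("dv", 1:5), paste0("iv", 1:5))  # assumed naming
var.list <- paste0("dv", 1:5)             # extends the post's truncated list
my.df$dv.scale <- rowMeans(my.df[var.list])      # mean across the listed dvs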
2007 May 31
0
Using MIcombine for coxph fits
R-helpers: I am using R 2.5 on Windows XP, packages all up to date. I have run into an issue with the MIcombine function of the mitools package that I hoped some of you might be able to help with. I will work through a reproducible example to demonstrate the issue. First, make a dataset from the pbc dataset in the survival package --------------- # Make a dataset library(survival) d <-
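The snippet is truncated before the error, but for reference the pooling step usually looks like this hedged sketch, with fits being the list of coxph fits returned by with() on the imputationList:

library(mitools)
# MIcombine applies Rubin's rules to the coefficients and variances
# of the per-imputation fits
pooled <- MIcombine(fits)
summary(pooled)   # pooled coefficients, SEs, and confidence intervals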
2007 Mar 02
1
Mitools and lmer
Hey there I am estimating a multilevel model using lmer. I have 5 imputed datasets so I am using mitools to pool the estimates from the 5 datasets. Everything seems to work until I try to use MIcombine to produce pooled estimates. Does anyone have any suggestions? The betas and the standard errors were extracted with no problem so everything seems to work smoothly up until
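A sketch of one workaround, assuming fits is the list of five lmer fits: MIcombine's default method accepts a list of coefficient vectors plus a list of variance matrices, which sidesteps the fact that coef() on a mixed-model fit does not return the fixed-effect vector.

library(mitools)
library(lme4)
betas <- lapply(fits, fixef)                      # fixed effects, per imputation
vcovs <- lapply(fits, function(f) as.matrix(vcov(f)))
pooled <- MIcombine(betas, vcovs)                 # Rubin's rules on the fixed effects
summary(pooled)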
2007 Aug 14
0
Panel data and imputed datasets
Hi all, I am hardly an expert, so I expect that this code is not the easiest/most efficient way of getting where I want. Any suggestions in that direction would also be helpful. I am working on panel analysis with five imputed datasets, generated by Amelia. To do panel analysis, it seemed that the plm package was the best, providing a convenient wrapper for fixed and random effects
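A hedged sketch of the plm-plus-mitools combination, with placeholder names (y, x, country, year) since the post's model is not shown: fit the same model to each completed data set, then pool with MIcombine.

library(mitools)
library(plm)
# imps$imputations is the list of completed data frames inside an
# imputationList; the formula and index variables are placeholders
fits <- lapply(imps$imputations, function(d)
  plm(y ~ x, data = d, index = c("country", "year"), model = "within"))
summary(MIcombine(fits))   # plm fits supply coef() and vcov() for pooling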
2003 Jan 21
1
Logistic regression: At times correlation matrix of coefficients gets messed up
Hi, When I include a categorical variable (RACE with 3 levels - "white", "black" and "other") in my logistic regression model, the correlation matrix of the coefficients gets messed up. I get something like: Correlation of Coefficients: [garbled output: the column labels (Intercept), AGE, LWT, RACEblack and the row entries are interleaved and truncated]
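For what it's worth, a cleanly formatted version of that matrix can be requested directly from summary(); a sketch using the MASS birthwt data as a stand-in for the poster's data (same low/age/lwt/race structure):

library(MASS)    # birthwt is a stand-in data set, not the poster's
fit <- glm(low ~ age + lwt + factor(race), family = binomial, data = birthwt)
# one row/column per coefficient, including the two dummy coefficients
# that the 3-level race factor contributes
summary(fit, correlation = TRUE)$correlation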
2011 Sep 29
0
geeglm estimates and standard deviation are too large
Hi, I'm using the geeglm function to account for repeated measures. fit1<- geeglm( binary.outcome ~ age + race + gender + fever.yes.no, data=mydata, id=ID, family=binomial, corstr="exchangeable") summary(fit1)$coef gives implausibly large estimates and standard errors: Estimate Std.err Wald Pr(>|W|) (Intercept) 3.07e+16
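Estimates on the order of 1e+16 usually point to complete separation rather than a geeglm bug: some predictor level has only 0s or only 1s on the outcome. A quick check, using the variable names from the post:

# look for zero cells; a level with all-0 or all-1 outcomes drives
# the corresponding coefficient toward +/- infinity
with(mydata, table(fever.yes.no, binary.outcome))
with(mydata, table(race, binary.outcome))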
2010 Apr 07
3
PSTN issues
Hope someone can help me. I have a PSTN line coming into a TDM400 into Asterisk. We also have telephones connected directly to the PSTN, bypassing Asterisk. When a call comes in on the PSTN, the directly connected phones ring first; if no one picks up, Asterisk answers and the call gets routed to internal SIP phones. I am not able to find what I should tune to make the calls always go through Asterisk without the
2010 Mar 24
1
pstn calls not picked up
I have a PSTN line coming into FXO port 4 on a TDM400P. Incoming calls are not being picked up. I don't find anything unusual in the Asterisk log. I am clueless where I should look. I also find zapata-additional.conf empty. The trouble started when the system was accidentally shut down and rebooted. Any help? How do I diagnose whether the TDM400P is fried? Thanks, -braman
2010 Jul 24
1
latent class analysis with mixed variable types
As an alternative to Latent GOLD, I'm wondering if anyone knows of an R package that can manage Latent Class Analysis with mixed variable types (continuous, ordinal, and nominal/binary).
2008 Apr 10
1
Degrees of freedom in binomial glm
Hello, I am looking at the job satisfaction data below, from a problem in Agresti's book, and I am not sure where the degrees of freedom come from. The way I am fitting a binomial model, I have 168 observations, so in my understanding that should also be the number of fitted parameters in the saturated model. Since I have one intercept parameter, I was thinking to get 167 df for the Null
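The degrees of freedom depend on how the data are laid out, not just on n: 168 Bernoulli rows give a null df of 167, but the same data aggregated to one binomial row per covariate pattern give (number of patterns - 1). A toy sketch (simulated stand-in, not Agresti's actual data):

bern <- data.frame(y = rbinom(168, 1, 0.5), x = gl(4, 42))  # 168 Bernoulli rows
fit1 <- glm(y ~ x, family = binomial, data = bern)
fit1$df.null                                  # 167 = 168 - 1
succ <- tapply(bern$y, bern$x, sum)           # aggregate to 4 binomial rows
tot  <- tapply(bern$y, bern$x, length)
fit2 <- glm(cbind(succ, tot - succ) ~ factor(names(succ)), family = binomial)
fit2$df.null                                  # 3 = 4 covariate patterns - 1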
2008 Jul 15
2
sem & testing multiple hypotheses with BIC
I'm coming from the AMOS world and am wondering if there is a simple way to do multiple hypothesis testing in the manner of BIC analyses in AMOS using the sem package in R. I've read the documentation, but don't see anything in there except for basic BIC scores. Perhaps someone has devised a simple way to compare the relative likelihood of all possible path-fittings within a
2009 Apr 07
2
newbie query: simple crosstabs
I've been playing around with various table tools, trying to construct a fairly simple cross-tab. It shouldn't be hard, but for some reason it's turning out to be (for me). If I want to see how many men and how many women agree with an agree/disagree question (coded 1,0), I can do this: > attach(mydata) > mytable <- table(male, q1.bin) # gender and a binary response variable
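Continuing that example, prop.table() turns the counts into within-gender shares and addmargins() adds the totals; a sketch assuming mydata, male and q1.bin as in the post:

mytable <- table(mydata$male, mydata$q1.bin)
prop.table(mytable, margin = 1)   # margin = 1: proportions within each row (gender)
addmargins(mytable)               # counts with row and column totals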
2011 Dec 10
2
p-value for hazard ratio in Cox proportional hazards regression?
Hi, I'm new to R and using it for Cox survival analysis. Thanks to this great forum I learned how to compute the HR with its confidence interval. My question would be: Is there any way to get the p-value for a hazard ratio in addition to the confidence interval? Thanks, Thierry -- Thierry Panje Visiting Student Researcher Department of Psychology Stanford
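For reference, summary() on a coxph fit already reports a Wald p-value next to each hazard ratio; a sketch using the survival package's lung data:

library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
summary(fit)$coefficients   # column "Pr(>|z|)" is the per-coefficient p-value
summary(fit)$conf.int       # exp(coef) with lower/upper .95 bounds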
2008 May 19
2
recoding data with loops
# I'm new to R and am trying to get the hang of how it handles # dataframes & loops. If anyone can help me with some simple tasks, # I'd be much obliged. # First, i'd like to generate some random data in a dataframe # to efficiently illustrate what I'm up to. # let's say I have six variables as listed below (I really # have hundreds, but a few will illustrate the point). #
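The post is truncated, so the sketch below invents both the data and the recode (collapsing 1-5 responses to a 0/1 agree indicator) just to show the loop-free idiom:

mydata <- data.frame(replicate(6, sample(1:5, 100, replace = TRUE)))
names(mydata) <- paste0("q", 1:6)            # invented variable names
vars <- paste0("q", 1:6)
# recode every listed variable at once: responses of 4 or 5 become 1
mydata[vars] <- lapply(mydata[vars], function(x) as.integer(x >= 4))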