similar to: Tabulating Baseline Characteristics on specific observations

Displaying 20 results from an estimated 3000 matches similar to: "Tabulating Baseline Characteristics on specific observations"

2012 Apr 27
2
Deleting observations from baseline that don't appear in follow up
Hello all, I'm almost embarrassed to post this, it seems so easy. Suppose I have a baseline and follow-up survey but some people are missing in the follow-up:
> baseline <- data.frame(id = c(3, 5, 7, 9, 12), data = runif(5))
> follow.up <- data.frame(id = c(3, 7, 9, 12), data = runif(4))
> baseline
  id       data
1  3 0.66771988
2  5 0.28794744
3  7 0.01892821
4  9 0.64863175
5 12 0.86485882
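One way to handle this (a minimal sketch using the two data frames above) is to keep only the baseline rows whose id also occurs in the follow-up, either by indexing with %in% or by merging on id:

baseline.matched <- baseline[baseline$id %in% follow.up$id, ]     # drops id 5
both <- merge(baseline, follow.up, by = "id",
              suffixes = c(".baseline", ".followup"))             # only shared ids survive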
2011 Oct 05
1
calling a variable which in turn calls many more variables
Hi all, I am running regressions with many covariates, most of which remain the same each time (control variables). Instead of writing out 30 demographic variables for every regression, is there a way I could call them all at once using a single variable called, perhaps, "demog"? I have tried: > demog <- list(age1, age2, age3) but I get an error when I try to use the list in a regression. I also
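One common workaround (a sketch; the data frame dat and the column names below are hypothetical, not from the original post) is to store the covariate names as a character vector and build the model formula with reformulate():

demog <- c("age", "sex", "educ", "income")                    # hypothetical demographic columns in dat
f <- reformulate(c("treatment", demog), response = "outcome") # outcome ~ treatment + age + sex + ...
fit <- lm(f, data = dat)                                      # the same demog vector can be reused in every regression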
2002 Mar 04
1
Wine --managed
I'm running wine, and quite often I use good ol' PROGMAN.EXE as an access point to my win-apps. I launch Progman with the --managed option, but any applications I run from within Progman are not run with this option. This is also a problem with programs such as AOL Instant Messenger, when it creates its own sub-windows and whatnot. Is there any place where I can configure how wine apps
2009 Jul 25
1
A Harder Score Test Question
Does anyone know how to get the score and information under the null from coxph? I know that I can get the chi-square value of the score test from coxph, but I need the two components separately. I have a function that computes the two components when I do not have ties, but I would like to leverage the options (ties and strata handling) already available in the coxph function. The function coxph.detail
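One possible route (a rough sketch, not a definitive answer; the data frame dat and the variables in the formula are placeholders) is to evaluate the model at the null coefficients without iterating, then sum the per-event-time contributions reported by coxph.detail(); whether its score and imat elements are exactly the pieces needed should be checked against ?coxph.detail.

library(survival)
## freeze the fit at beta = 0 so everything is evaluated under the null
fit0 <- coxph(Surv(time, status) ~ x + strata(grp), data = dat,
              init = 0, control = coxph.control(iter.max = 0))
d <- coxph.detail(fit0)
U <- sum(d$score)     # total score under the null (single-covariate case)
I <- sum(d$imat)      # total information (contributions summed over event times)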
2006 Oct 19
1
unique sets of factors
All: I have a matrix, X, with a LARGE number of rows. Consider the following three rows of that matrix:
1 1 1 1 2 2 3 3
1 1 1 1 3 3 2 2
3 3 2 2 1 1 1 1
I wish to fit many one-way ANOVAs to some response variable using each row as a set of factors. For example, for each row above I will do something like anova(lm(Y ~ as.factor(X[1,]))). My problem is that in the above example, I do not want
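One way to spot rows that describe the same grouping up to a relabelling of the levels (a sketch) is to recode each row by the order in which its values first appear and then drop duplicated keys:

## canonical key: 1 1 1 1 2 2 3 3 and 1 1 1 1 3 3 2 2 both become "1-1-1-1-2-2-3-3"
key <- apply(X, 1, function(r) paste(match(r, unique(r)), collapse = "-"))
X.unique <- X[!duplicated(key), , drop = FALSE]
fits <- apply(X.unique, 1, function(r) anova(lm(Y ~ as.factor(r))))   # one ANOVA per distinct grouping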
2003 Jul 24
1
scatterplot smoothing using gam
All: I am trying to use gam in a scatterplot smoothing problem. The data being smoothed have more than 1000 observations and have multiple "humps". I can smooth the data fine using a function something like:
out <- ksmooth(x, y, "normal", bandwidth = 0.25)
plot(x, out$y, type = "l")
The problem is when I try to fit the same data using gam:
out <-
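For what it's worth, with the mgcv implementation of gam() the flexibility of a smooth is capped by the basis dimension k, so an over-smoothed fit on multi-humped data is often cured by raising k; a minimal sketch (assuming mgcv):

library(mgcv)
fit <- gam(y ~ s(x, k = 20))             # larger k allows more humps than the default
ox <- order(x)
plot(x[ox], fitted(fit)[ox], type = "l") # fitted curve in x order
gam.check(fit)                           # diagnostics include a check that k is large enough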
2010 Mar 29
0
Question on entry exit tabulating in any R finance package
I asked the question in the Rmetrics subforum, but for some reason, almost two weeks later, it keeps saying I haven't been approved to post there yet. I'll try again here on the open forum. I wanted to have a script that tabulates results by trade; that is, instead of tabulating each day as a trade event, you enter on one date, exit on another, and simply tabulate metrics based on that one trade period.
2000 May 02
1
tick marks on mfrow=c(3,3) plot (with simple example)
Sorry: I should have reproduced the "problem" with a simple example. I do this below. I think there is likely a switch I can change using par, but I don't know what it is. The problem is that the tick marks for the Y-axis are only on plots in column #1, and for the X-axis only in row #2. Tony
x <- 1:10
y <- 1:10 * 5
par(mfrow = c(2, 2))
plot(x, y)
plot(x, y)
plot(x, y)
plot(x, y)
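One way to guarantee tick marks and labels on every panel, whatever the margin settings, is to suppress the default axes and draw them explicitly (a sketch reusing the x, y and par(mfrow) setup above):

for (i in 1:4) {
  plot(x, y, axes = FALSE)   # points only, no default axes
  axis(1)                    # x-axis ticks and labels on this panel
  axis(2)                    # y-axis ticks and labels on this panel
  box()
}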
2010 Sep 16
1
advice on writing/maintaining an R package with a version control system
Dear all, As I resume my dissertation work next month, I'd like to actually start an R package this time around. I haven't done so before because I update my code very often (still in the development phase), so running the skeleton function, running checks, building, and re-installing the package onto the system seemed like a long and tedious process. I would like to hear your experience on how
2010 Apr 26
1
failing to select a subset of observations based on variable values [Sec: UNCLASSIFIED]
Greetings all. I'm starting analysis in R on a reasonably sized pre-existing dataset of 583 variables and 1127 observations. This was an SPSS data file, which I read in with the read.spss command from the foreign package; the data were assigned to a data.frame when read in. The defaults in read.spss were used, except that I set to.data.frame = TRUE. The data is a survey dataset (each
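For reference, selecting observations from a data frame produced by read.spss is ordinarily done with logical indexing or subset(); a sketch with hypothetical names (dat, age and sex are illustrations, not columns from the original survey):

sub1 <- dat[dat$age > 40 & dat$sex == "female", ]   # logical row index, all columns kept
sub2 <- subset(dat, age > 40 & sex == "female")     # same result via subset()
## note: read.spss turns labelled values into factors by default, so compare
## against the label ("female"), not the underlying numeric code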
1999 Oct 08
1
error using dyn.load
I am trying to use dynamic loading of an outside C routine, attempting section 6.12.1 of Phil Spector's book. When I try to load the object file I get an error I don't understand:
> dyn.load("runa.o")
Error in dyn.load(x) : unable to load shared library "/usr/home/tdlong/run_avg/runa.o":
  /usr/home/tdlong/run_avg/runa.o: ELF file's phentsize not the expected
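An ELF complaint like this usually means a plain object file was handed to dyn.load() instead of a shared library; the standard fix is to build the shared library with R CMD SHLIB and load that instead. A sketch (the file name runa.c and the symbol name are assumptions):

## at the shell prompt:
##   R CMD SHLIB runa.c          # produces runa.so
## then in R:
dyn.load("runa.so")
is.loaded("runa")               # TRUE if the routine's symbol is now visible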
2003 Sep 13
4
Large memory issues on 4-STABLE
Hey All, I have a dual Xeon box used by our students for data crunching. It has 4GB of RAM. After initial installation everything went well until someone found they couldn't allocate more than 512MB of RAM per process. After some poking around I found some things to adjust in the kernel conf file:
options MAXDSIZ="(2000*1024*1024)"
options
2002 Jan 25
2
selecting clusters of points
All: Are there any functions out there for selecting all the points in a region of a plot? I envision something like the identify() function, except that one could circle a cloud of points (and perhaps a vector would be returned, of the same length as the points plotted, indicating logical membership in the circled cloud). Perhaps someone has done something with the locator() function that would
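One way to get this (a sketch, assuming the sp package is available) is to trace a polygon around the cloud with locator() and then test each plotted point for membership with point.in.polygon():

library(sp)
plot(x, y)
poly <- locator(type = "l")                              # click around the cloud, then finish
inside <- point.in.polygon(x, y, poly$x, poly$y) > 0     # logical membership, same length as x
points(x[inside], y[inside], col = "red", pch = 19)      # highlight the selected cluster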
1999 May 15
2
vsize and nsize
I am running R version ??? under Redhat 5.2. It seems as though the --nsize option has no effect on the size of the allocated Ncells as determined using gc(). Yes, I have that much data.... That is, if I invoke R with
R --vsize 100 --nsize 5000000
and then type gc() I get
         free    total
Ncells   92202   200000
Vcells 12928414 13107200
Thanks, Tony Long, Ecology and Evolutionary Biology, Steinhaus
2001 Sep 25
3
Error in optim(p, fun,...)
All: I am getting an error from the optimization function. The error is:
Error in optim(p, fun.LLike, lower = low, upper = up, method = "L-BFGS-B", :
  non-finite finite-difference value [0]
If I add a trace=6 option to my control list, the last message before this error is: "At X0, 0 variables are exactly at the bounds". Any ideas on where I should start would be
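That message usually means the objective (or a finite-difference gradient step) returned NA or Inf somewhere inside the box, often right at a bound; a common defensive sketch is to wrap the objective so it always returns a finite value, and to check it at the start point (names taken from the call above):

safe.LLike <- function(p, ...) {
  val <- fun.LLike(p, ...)
  if (!is.finite(val)) return(1e10)   # large finite penalty instead of NA/Inf
  val
}
stopifnot(is.finite(safe.LLike(p)))   # the starting value itself must be finite
fit <- optim(p, safe.LLike, lower = low, upper = up, method = "L-BFGS-B")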
2012 Jul 24
1
temp fix: Simultaneous reads and writes from specific apps to IPoIB volume seem to conflict and kill performance.
The problem described in the subject appears NOT to be the case. It's not that simultaneous reads and writes dramatically decrease performance, but that the type of /writes/ being done by this app (bedtools) kills performance. If this were a self-written app or an infrequently used one, I wouldn't bother writing this up, but bedtools is a fairly popular genomics app, and since many installations use
2011 May 16
2
conditional rowsums in sapply
Hi all, I have a data frame with duplicate column names and I want to collapse each group of duplicate columns by summing across them row-wise, but there are lots of NA's. Data:
dfrm <- data.frame(a = 1:4, b = 1:4, cc = 1:4, dd = 1:4, ee = 1:4)
names(dfrm) <- c("a", "a", "b", "b", "b")
dfrm[3, 2:3] <- NA
dfrm
  a a b b b
1 1 1 1 1 1
2 2 2 2 2 2
3
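One way to do the collapse (a sketch using the dfrm above) is to loop over the distinct column names and take rowSums() with na.rm = TRUE within each group:

collapsed <- sapply(unique(names(dfrm)), function(nm)
  rowSums(dfrm[, names(dfrm) == nm, drop = FALSE], na.rm = TRUE))
collapsed   # one column per distinct name ("a" and "b"), NAs treated as zero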
2007 Feb 04
3
Reference to dataframe and contents
This is probably easy for experienced users, but I could not find a solution. I have several R scripts that process several columns of a data frame (several data frames and columns actually, but simplified for my question). References such as myDF$myCol are all over. I'd like to automate this for other data frames and columns by defining a reference only once at the beginning of the script. One
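One way to define the reference just once (a minimal sketch, using the myDF/myCol names from the post) is to keep the data frame name and column name as character strings and work with get(), [[ ]] and assign() instead of the $ form:

df.name  <- "myDF"                  # set once at the top of the script
col.name <- "myCol"
d <- get(df.name)                   # fetch the data frame by name
x <- d[[col.name]]                  # equivalent to myDF$myCol
d[[col.name]] <- x * 2              # modify the column (example operation)
assign(df.name, d)                  # write the updated data frame back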
2011 Jan 31
2
From data frame to list object
Dear all, let's say I have the following data frame:
> data.frame(x = rnorm(18), y = rep(c("a", "b", "c"), each = 6))
              x y
1  -1.072152537 a
2   0.382985265 a
3   0.058877377 a
4  -0.006911939 a
5  -2.355269051 a
6  -0.303095553 a
7   0.484038422 b
8   0.733928931 b
9  -1.136014346 b
10  0.503552090 b
11  1.708609658 b
12 -0.294599403 b
13
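split() does exactly this, dividing one column by the levels of another; a minimal sketch with the data frame above (stored here as d):

d <- data.frame(x = rnorm(18), y = rep(c("a", "b", "c"), each = 6))
split(d$x, d$y)   # list with components $a, $b, $c, each a numeric vector of length 6
split(d, d$y)     # or keep whole rows: a list of three 6-row data frames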
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
This is a continuation of my previous posts about improving write performance when trapping millions of small writes to a gluster filesystem. I was able to improve write performance by ~30x by running STDOUT through gzip to consolidate and reduce the output stream. Today, another similar problem, having to do with yet another bioinformatics program (which these days typically handle the 'short reads' that