similar to: Joint confidence interval for fractional polynomial terms

Displaying 20 results from an estimated 1000 matches similar to: "Joint confidence interval for fractional polynomial terms"

2005 Sep 02
1
C-index : typical values
I am doing some coxph model fitting and would like to have some idea of how good the fits are. Someone suggested using Frank Harrell's C-index measure. As I understand it, a C-index > 0.5 indicates a useful model. I am probably making an error here, because I am getting values less than 0.5 on real datasets. Can someone tell me where I am going wrong, please? Here is an example using
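A C-index below 0.5 from a Cox fit usually just means the predictor is oriented the "wrong" way round for the concordance routine: Hmisc's rcorr.cens counts larger predictor values as predicting longer survival, whereas a larger Cox linear predictor means higher risk (shorter survival). A minimal sketch, assuming the survival and Hmisc packages and the built-in lung data:

library(survival)
library(Hmisc)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
lp  <- predict(fit, type = "lp")   # higher lp = higher risk = shorter survival

# rcorr.cens expects larger values to predict longer survival, so negate the
# linear predictor (equivalently, report 1 - C for the un-negated version)
rcorr.cens(-lp, Surv(lung$time, lung$status))["C Index"]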
2007 Sep 25
7
Who uses R?
Dear R users, I have started work in a government statistics department and I am trying to convince my bosses to install R on our computers (I can't do proper stats in Excel!!). They asked me to prove that it is widely used software (and not just another free-source, bug-infested toy I found on the web!) by suggesting other big organisations that use it. Are you aware of any reputable
2010 Apr 14
5
Running cumulative sums in matrices
Dear R-helpers, I have a huge data-set so need to avoid for loops as much as possible. Can someone think how I can compute the result in the following example (that uses a for-loop) using some version of apply instead (or any other similarly super-efficient function)? example: #Suppose a matrix: m1=cbind(1:5,1:5,1:5) #The aim is to create a new matrix with every column containing the
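For column-wise running sums a single call to apply with cumsum is enough (a minimal sketch; for row-wise sums transpose the result of applying over rows):

m1 <- cbind(1:5, 1:5, 1:5)

# cumulative sum down each column
m2 <- apply(m1, 2, cumsum)

# row-wise version, if that is what is needed instead:
# t(apply(m1, 1, cumsum))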
2006 Aug 07
3
Finding points with equal probability between normal distributions
Dear mailing list, For two normal distributions, e.g.: r1 = rnorm(20, 5.2, 2.1); r2 = rnorm(20, 4.2, 1.1); plot(density(r2), col="blue"); lines(density(r1), col="red"). Is there a way in R to compute/estimate the point(s) x where the densities of the two distributions cross (i.e. where x has equal probability of belonging to either of the two distributions)? Many Thanks Eleni
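One way to sketch this, assuming the two samples are treated as estimates of normal densities: take the difference of the two fitted densities, scan a grid for sign changes, and refine each crossing with uniroot (the same scan-and-refine idea works on interpolated density() estimates if no distributional assumption is wanted):

set.seed(1)
r1 <- rnorm(20, 5.2, 2.1)
r2 <- rnorm(20, 4.2, 1.1)

# difference of the two fitted normal densities
f <- function(x) dnorm(x, mean(r1), sd(r1)) - dnorm(x, mean(r2), sd(r2))

# scan a grid for sign changes, then refine each crossing with uniroot
xs  <- seq(min(r1, r2) - 3, max(r1, r2) + 3, length.out = 2000)
sgn <- sign(f(xs))
idx <- which(diff(sgn) != 0)
crossings <- sapply(idx, function(i) uniroot(f, c(xs[i], xs[i + 1]))$root)
crossings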
2006 Jul 27
2
memory problems when combining randomForests [Broadcast]
You need to give us more details, like how you call randomForest, versions of the package and R itself, etc. Also, see if this helps you: http://finzi.psych.upenn.edu/R/Rhelp02a/archive/32918.html Andy From: Eleni Rapsomaniki > > Dear all, > > I am trying to train a randomForest using all my control data > (12,000 cases, ~ 20 explanatory variables, 2 classes). > Because
2009 Feb 27
2
Competing risks adjusted for covariates
Dear R-users Has anybody implemented a function/package that will compute an individual's risk of an event in the presence of competing risks, adjusted for the individual's covariates? The only thing that seems to come close is the cuminc function from cmprsk package, but I would like to adjust for more than one covariate (it allows you to stratify by a single grouping vector). Any
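cmprsk does handle covariates through the Fine-Gray model: crr() regresses the subdistribution hazard on a covariate matrix, and predict() on the fitted object returns the estimated cumulative incidence for specified covariate values. A minimal sketch with made-up data (variable names are hypothetical):

library(cmprsk)

set.seed(1)
n       <- 200
ftime   <- rexp(n)
fstatus <- sample(0:2, n, replace = TRUE)   # 0 = censored, 1 = event of interest, 2 = competing event
covs    <- cbind(age = rnorm(n, 60, 10), male = rbinom(n, 1, 0.5))

fit <- crr(ftime, fstatus, cov1 = covs, failcode = 1, cencode = 0)

# predicted cumulative incidence of cause 1 for two hypothetical individuals
newcovs <- rbind(c(age = 50, male = 0), c(age = 70, male = 1))
ci <- predict(fit, cov1 = newcovs)
head(ci)   # first column: time; remaining columns: one cumulative incidence curve per row of newcovs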
2009 Feb 05
4
See source code for survplot function in Design package
Dear R users, I know one way to see the code for a hidden method, say for function_x, is to look at function_x.default (e.g. summary.default). But how can I see the code in loaded packages that have no namespace (in this case Design)? Many Thanks Eleni
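For packages without a namespace the code is visible as soon as the package is attached, and for generics the per-class methods can be listed and printed directly. A short sketch, assuming Design is installed:

library(Design)

survplot                        # typing the name prints the generic
methods("survplot")             # list its methods (e.g. a survfit method)
getAnywhere("survplot.survfit") # print a specific method even if it were hidden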
2009 Oct 13
2
update.formula drop interaction terms
Dear R users, How do I drop interaction terms from a formula using update()? e.g. forml = as.formula("Surv(time, status) ~ x1+x2+A*x3+A*x4+B*x5+strata(sex)") # I would like to drop all instances of variable A (the main effect and its interactions). The following: updated.forml = update(forml, ~ . - A) # gives me this: # Surv(time, status) ~ x1 + x2 + x3 + x4 + B + x5 + strata(sex) + A:x3 +
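One possible workaround (a sketch, not the only way): operate on the term labels rather than on update(), drop every label that involves A, and rebuild the formula from the survivors:

forml <- Surv(time, status) ~ x1 + x2 + A * x3 + A * x4 + B * x5 + strata(sex)

labs <- attr(terms(forml), "term.labels")
keep <- labs[!grepl("(^|:)A($|:)", labs)]   # drop A and any interaction containing A

newforml <- as.formula(paste("Surv(time, status) ~", paste(keep, collapse = " + ")))
newforml
# Surv(time, status) ~ x1 + x2 + x3 + x4 + B + x5 + strata(sex) + B:x5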
2006 Jul 26
3
memory problems when combining randomForests
Dear all, I am trying to train a randomForest using all my control data (12,000 cases, ~ 20 explanatory variables, 2 classes). Because of memory constraints, I have split my data into 7 subsets and trained a randomForest for each, hoping that using combine() afterwards would solve the memory issue. Unfortunately, combine() still runs out of memory. Is there anything else I can do? (I am not using
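A sketch of the chunked workflow with the knobs that usually matter for memory (fewer trees per chunk, no proximity matrices kept); the data frame mydata and the response column class below are hypothetical, and whether combine() fits in memory still depends on the total number of trees retained:

library(randomForest)

chunks <- split(mydata, rep(1:7, length.out = nrow(mydata)))   # 7 roughly equal subsets

forests <- lapply(chunks, function(d)
  randomForest(class ~ ., data = d,
               ntree = 100,          # fewer trees per chunk keeps each forest small
               proximity = FALSE,    # proximity matrices are the usual memory hog
               keep.inbag = FALSE))

rf.all <- do.call(combine, forests)  # a single forest with 7 * 100 trees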
2006 Sep 27
1
Any hot-deck imputation packages?
Hi I found on google that there is an implementation of hot-deck imputation in SAS: http://ideas.repec.org/c/boc/bocode/s366901.html Is there anything similar in R? Many Thanks Eleni Rapsomaniki
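I am not sure what existed back then, but more recent installations can use the VIM package, which provides a hotdeck() function; a minimal sketch on VIM's own example data, assuming VIM is installed:

library(VIM)

data(sleep, package = "VIM")     # mammal sleep data with missing values
imp <- hotdeck(sleep, variable = c("Sleep", "Dream"))
head(imp)                        # the *_imp columns flag which values were donated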
2009 Nov 23
1
Calibration score for survival probability
Good afternoon! I need to evaluate the goodness-of-fit (aka calibration) of survival probability estimates from a Cox model. I tried to use 'calibrate' in the Design package but I'm not sure whether it would produce what I need (i.e. a chi-squared-type statistic with a table of expected vs. observed probabilities). Any other functions I should be aware of? Also, has anybody come across
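calibrate() in Design/rms does compare predicted with observed (Kaplan-Meier) survival at a fixed time u, via bootstrapping, provided the model was fit with x=TRUE, y=TRUE, surv=TRUE and time.inc set to u; it returns a curve/table rather than a single chi-squared statistic. A sketch using rms (the successor of Design) and the built-in lung data:

library(survival)
library(rms)   # calibrate() in Design behaves essentially the same way

f <- cph(Surv(time, status) ~ age + sex, data = lung,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)

# group subjects (m = 50 per group) and compare KM-observed vs predicted survival at 1 year
cal <- calibrate(f, cmethod = "KM", u = 365, m = 50, B = 100)
plot(cal)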
2006 Jul 24
2
RandomForest vs. bayes & svm classification performance
Hi This is a question regarding classification performance using different methods. So far I've tried NaiveBayes (klaR package), svm (e1071) package and randomForest (randomForest). What has puzzled me is that randomForest seems to perform far better (32% classification error) than svm and NaiveBayes, which have similar classification errors (45%, 48% respectively). A similar difference in
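Whatever explains the gap, the error rates are only comparable if all three classifiers see exactly the same training and test observations; a minimal sketch of such a like-for-like comparison on the built-in iris data (the real data and tuning settings will of course differ):

library(randomForest)
library(e1071)   # svm
library(klaR)    # NaiveBayes

set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]

err <- function(pred) mean(pred != test$Species)

rf <- randomForest(Species ~ ., data = train)
sv <- svm(Species ~ ., data = train)
nb <- NaiveBayes(Species ~ ., data = train)

c(randomForest = err(predict(rf, test)),
  svm          = err(predict(sv, test)),
  NaiveBayes   = err(predict(nb, test)$class))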
2008 Aug 01
2
correlation between rows of data.frame
Dear R users, I need to come up with an efficient method to compute the correlation (or at least, the euclidean distance if that's easier) between specific rows in a data frame (46,232 rows, 29 columns). The pairs of rows between which I want to find the correlation share a common value in one of the columns. So for example, in the following
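If the row pairs are identified by a shared key column, the correlations can be computed pair by pair on the numeric columns, without ever forming a full row-by-row correlation matrix. A sketch with hypothetical names (df is the data frame, key the linking column):

num <- as.matrix(df[ , sapply(df, is.numeric)])

# all within-key pairs of row indices
pairs <- do.call(rbind, lapply(split(seq_len(nrow(df)), df$key), function(rows)
  if (length(rows) > 1) t(combn(rows, 2))))
pairs <- data.frame(i = pairs[, 1], j = pairs[, 2])

# Pearson correlation for each pair; swap in sqrt(sum((num[i,]-num[j,])^2)) for Euclidean distance
pairs$r <- mapply(function(i, j) cor(num[i, ], num[j, ]), pairs$i, pairs$j)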
2006 Sep 25
2
Multiple imputation using mice with "mean"
Hi I am trying to impute missing values for my data.frame. As I intend to use the complete data for prediction I am currently measuring the success of an imputation method by its resulting classification error in my training data. I have tried several approaches to replace missing values: - mean/median substitution - substitution by a value selected from the observed values of a variable - MLE
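For reference, mice exposes unconditional mean substitution as method = "mean" (one imputation, no iterations needed), so it can be compared with the default predictive mean matching simply by switching the method argument. A minimal sketch on mice's built-in nhanes data:

library(mice)
data(nhanes)

imp.mean <- mice(nhanes, method = "mean", m = 1, maxit = 1, printFlag = FALSE)
imp.pmm  <- mice(nhanes, method = "pmm",  m = 5, printFlag = FALSE)

complete(imp.mean)     # the single mean-imputed data set
complete(imp.pmm, 1)   # first of the five pmm-imputed data sets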
2009 Feb 02
1
survfit using quantiles to group age
I am using the package Design for survival analysis. I want to plot a simple Kaplan-Meier fit of survival vs. age, with age grouped into quantiles. I can do this: survplot(survfit(Surv(time,status) ~ cut(age,3), data=veteran)) but I would like to do something like this: survplot(survfit(Surv(time,status) ~ quantile(age,3), data=veteran)) # will not work. Ideally I would like to superimpose
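A sketch of the usual workaround: keep cut(), but hand it quantile() breakpoints so the groups are age tertiles (assuming the veteran data from the survival package and survplot from Design/rms):

library(survival)
library(rms)   # for survplot; Design works the same way

veteran$ageGrp <- cut(veteran$age,
                      breaks = quantile(veteran$age, probs = seq(0, 1, length.out = 4)),
                      include.lowest = TRUE)

survplot(survfit(Surv(time, status) ~ ageGrp, data = veteran))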
2013 Mar 21
1
Re-order variables listed in nomogram?
Hi, I am using the function "nomogram" in the rms package for survival analysis. How is the order in which the variables are listed determined, and how can I change it? I use it with a cph() model. Many Thanks Eleni Rapsomaniki Clinical Epidemiology Group Department of Epidemiology and Public Health University College London
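As far as I can tell the nomogram axes follow the order of the terms in the model formula, so the simplest lever is to refit the cph() model with the predictors listed in the order you want them displayed. A hypothetical sketch (mydata and its variables are placeholders):

library(rms)

dd <- datadist(mydata); options(datadist = "dd")

# predictors listed in the order the axes should appear
f <- cph(Surv(time, status) ~ stage + age + sex, data = mydata)

plot(nomogram(f))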
2009 May 20
1
turning off specific types of warnings
Dear R users, I have a long function that among other things uses the "survest" function from the Design package. This function generates the warning: In survest.cph (...) S.E. and confidence intervals are approximate except at predictor means. Use cph(...,x=T,y=T) (and don't use linear.predictors=) for better estimates. I would like to turn this specific warning off, as it
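Base R can muffle one specific warning by pattern-matching its message inside withCallingHandlers, leaving every other warning intact. A sketch (the survest() call and fit object are placeholders):

res <- withCallingHandlers(
  survest(fit, newdata, times = 365),   # hypothetical call that triggers the warning
  warning = function(w) {
    if (grepl("confidence intervals are approximate", conditionMessage(w)))
      invokeRestart("muffleWarning")    # swallow just this warning; others propagate normally
  }
)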
2010 Jul 19
1
divide grid.newpage into two?
Hi, Is there some easy way to split the grid.newpage() into two columns? For example, how could I put the two forest plots below (meta1 and meta2) next to each other? library(meta) data(Olkin95) meta1 <- metabin(event.e, n.e, event.c, n.c, data=Olkin95, subset=c(41,47,51,59), sm="RR", meth="I", studlab=author) meta2=meta1 meta2$studlab=rep("",length(meta1$studlab)
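With grid itself the page can be split into a two-column layout via viewports; whether a given plotting function (such as meta's forest plots) draws into the current viewport or insists on calling grid.newpage() has to be checked for that function, so this is only the generic layout sketch:

library(grid)

grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow = 1, ncol = 2)))

pushViewport(viewport(layout.pos.row = 1, layout.pos.col = 1))
grid.rect(gp = gpar(col = "grey"))   # draw the first plot in this viewport
popViewport()

pushViewport(viewport(layout.pos.row = 1, layout.pos.col = 2))
grid.rect(gp = gpar(col = "grey"))   # and the second plot in this one
popViewport()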
2005 Oct 02
2
converting upper triangular matrix into vector
Hi I have two symmetrical distance matrices and want to compute the correlation coefficient between them (after turning them into vectors). Is there a way of selecting only the upper triangular part of each matrix, then convert this into a vector so I can compute the correlation? Many Thanks Eleni Rapsomaniki
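upper.tri() gives a logical mask, and subscripting a matrix with it already returns a plain vector, so the two matrices can be correlated directly. A small sketch with made-up distance matrices:

set.seed(1)
m1 <- as.matrix(dist(matrix(rnorm(20), 5)))
m2 <- as.matrix(dist(matrix(rnorm(20), 5)))

v1 <- m1[upper.tri(m1)]   # strictly upper-triangular entries, as a vector
v2 <- m2[upper.tri(m2)]

cor(v1, v2)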
2011 Jun 24
1
UnoC function in survAUC for censoring-adjusted C-index
Hello, I am having some trouble with the 'censoring-adjusted C-index' by Uno et al, in the package survAUC. The relevant function is UnoC. The question has to do with what happens when I specify a time point t for the upper limit of the time range under consideration (we want to avoid using the right-end tail of the KM curve). Copying from the example in the help file: TR <-
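A sketch of the truncated call, with a hypothetical train/test split of the lung data; the last argument restricts the estimator to the time range [0, time], which is how the right-hand tail of the follow-up is left out:

library(survival)
library(survAUC)

train <- lung[1:150, ]
test  <- lung[151:228, ]

fit   <- coxph(Surv(time, status) ~ age + sex, data = train)
lpnew <- predict(fit, newdata = test)

Surv.rsp     <- Surv(train$time, train$status)
Surv.rsp.new <- Surv(test$time, test$status)

UnoC(Surv.rsp, Surv.rsp.new, lpnew)               # full follow-up
UnoC(Surv.rsp, Surv.rsp.new, lpnew, time = 500)   # truncated at t = 500 days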