similar to: Adjusted survival curves

Displaying 20 results from an estimated 1000 matches similar to: "Adjusted survival curves"

2017 Oct 07
2
Adjusted survival curves
For adjusted survival curves I took the sample code from here: https://rpubs.com/daspringate/survival and adapted it for my data, but ... have a QUESTION.

library(survival)
library(survminer)
df <- read.csv("base.csv", header = TRUE, sep = ";")
head(df)
  ID start stop censor sex age stage treatment
1  1     0   66      0   2   1     3         1
2  2     0   18      0   1   2     4         2
3  3     0   43      1   2   3     3         1
4  4     0   47      1   2   3    NA         2
5  5
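For context, a minimal sketch of the kind of adjusted-curve workflow that tutorial describes, using the built-in lung data instead of the poster's base.csv (the dataset, model formula and the ggadjustedcurves() call are my illustrative choices, not the poster's code):

library(survival)
library(survminer)

# built-in example data standing in for the poster's base.csv
lung2 <- lung
lung2$sex <- factor(lung2$sex, labels = c("male", "female"))  # categorical covariate as a factor

fit <- coxph(Surv(time, status) ~ age + sex, data = lung2)

# covariate-adjusted survival curves, one curve per level of sex
ggadjustedcurves(fit, variable = "sex", data = lung2)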
2017 Oct 09
0
Adjusted survival curves
Adjusted survival curves (thanks to the sample code: https://rpubs.com/daspringate/survival ). Thanks to the Moderator/Admin's great work! For a successful solution I used the advice I could understand:
1. Peter Dalgaard: the code does not work because the covariates are not factors.
2. Jeff Newmiller: "Change the columns into factors before you give them to the coxph function, e.g.
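A short sketch of that suggestion; the data frame df and the column names are taken from the thread above, so treat this as illustrative:

df2 <- df
# convert the categorical covariates to factors before passing them to coxph()
cols <- c("sex", "stage", "treatment")
df2[cols] <- lapply(df2[cols], factor)
str(df2)  # check that the columns are now factors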
2017 Oct 09
0
Adjusted survival curves
Adjusted survival curves. (Sample code here: https://rpubs.com/daspringate/survival ) Deep gratitude to the Moderator/Admin! At David Winsemius' prompt, more elegant working code. Thanks, Ted :)

library(survival)
library(survminer)
df <- read.csv("F:/R/data/edgr-orig.csv", header = TRUE, sep = ";")
df2 <- df
df2[, c('treatment', 'age', 'sex',
2018 Feb 14
2
Fleming-Harrington weighted log rank test
Hi all,

The survdiff() function from the survival package has an argument "rho" that implements the Fleming-Harrington weighted log-rank test. But according to several sources, including the "survminer" package (https://cran.r-project.org/web/packages/survminer/vignettes/Specifiying_weights_in_log-rank_comparisons.html), the Fleming-Harrington weighted log-rank test should have 2 parameters
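For reference, a quick sketch of what the single rho argument of survdiff() does, using the built-in lung data (the full two-parameter FH(p, q) weights are not covered by rho alone, which is the point being raised):

library(survival)

# rho = 0 gives the ordinary log-rank test
survdiff(Surv(time, status) ~ sex, data = lung, rho = 0)

# rho = 1 gives the G^1 (Peto & Peto) test, which weights early differences more heavily
survdiff(Surv(time, status) ~ sex, data = lung, rho = 1)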
2004 Apr 21
1
difference between coxph and cph
Hi. I am using Windows version of R 1.8.1. Being somewhat new to survival analysis, I am trying to compare cph (Design) with coxph (survival) for use with a survival data set. I was wondering why cph and coxph provide me with different confidence intervals for the hazard ratios for one of the variables. I was wondering if I am doing something wrong? Or if the two functions are calculating hazard
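A sketch of the kind of side-by-side comparison being asked about; Design has since been superseded by rms, which provides the same cph(), and the lung data and Wald intervals here are my illustrative choices:

library(survival)
library(rms)  # modern home of cph(); the original post used its predecessor, Design

fit.coxph <- coxph(Surv(time, status) ~ age + sex, data = lung)
fit.cph   <- cph(Surv(time, status) ~ age + sex, data = lung)

# hazard ratios with Wald confidence intervals from each fit
summary(fit.coxph)$conf.int
se <- sqrt(diag(vcov(fit.cph)))
exp(cbind(HR = coef(fit.cph),
          lower = coef(fit.cph) - 1.96 * se,
          upper = coef(fit.cph) + 1.96 * se))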
2018 Feb 15
0
Fleming-Harrington weighted log rank test
> On Feb 13, 2018, at 4:02 PM, array chip via R-help <r-help at r-project.org> wrote:
>
> Hi all,
>
> The survdiff() from the survival package has an argument "rho" that implements the Fleming-Harrington weighted log-rank test.
>
> But according to several sources including the "survminer" package
2018 Feb 15
1
Fleming-Harrington weighted log rank test
> On Feb 14, 2018, at 5:26 PM, David Winsemius <dwinsemius at comcast.net> wrote:
>
>> On Feb 13, 2018, at 4:02 PM, array chip via R-help <r-help at r-project.org> wrote:
>>
>> Hi all,
>>
>> The survdiff() from the survival package has an argument "rho" that implements the Fleming-Harrington weighted log-rank test.
>>
2024 May 15
2
Extracting values from Surv function in survival package
OS X, R 4.3.3

Colleagues,

I have created objects using the Surv function in the survival package:

> FIT.1
Call: survfit(formula = FORMULA1)

                                       n events median 0.95LCL 0.95UCL
SUBDATA$ARM=1, SUBDATA[, EXP.STRAT]=0 18     13    345     156      NA
SUBDATA$ARM=2, SUBDATA[, EXP.STRAT]=1 13      5     NA     186      NA
SUBDATA$ARM=2, SUBDATA[, EXP.STRAT]=2  5
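If the goal is to pull those numbers out programmatically rather than read them off the printout, a minimal sketch (built-in lung data standing in for SUBDATA/FORMULA1):

library(survival)

fit <- survfit(Surv(time, status) ~ sex, data = lung)

# the matrix behind the print(fit) display: n, events, median, 0.95LCL, 0.95UCL
summary(fit)$table

# medians (and other quantiles) with confidence limits
quantile(fit, probs = 0.5)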
2007 Dec 31
3
Survival analysis with no events in one treatment group
I'm trying to fit a Cox proportional hazards model to some hospital admission data. About 25% of the patients have had at least one admission, and of these, 40% have had two admissions within the 12-month period of the study. Each patient has had one of 4 treatments, and one of the treatment groups has had no admissions for the period. I used:
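A toy illustration of why a group with no events is awkward for coxph(): the partial-likelihood estimate for that group's coefficient drifts to minus infinity. The data below are simulated, and the Firth-penalized alternative mentioned in the comment is an assumption about a suitable remedy, not something from the original thread:

library(survival)

# simulated data: treatment group "D" has no events at all
set.seed(1)
d <- data.frame(
  time  = rexp(80, rate = 0.1),
  event = rep(c(1, 1, 1, 0), each = 20),               # group D entirely censored
  trt   = factor(rep(c("A", "B", "C", "D"), each = 20))
)

fit <- coxph(Surv(time, event) ~ trt, data = d)
summary(fit)  # note the huge coefficient and standard error for trtD

# one commonly suggested remedy is Firth's penalized Cox regression,
# e.g. via the coxphf package:
# install.packages("coxphf"); library(coxphf)
# coxphf(Surv(time, event) ~ trt, data = d)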
2011 Sep 26
3
survival analysis: interval censored data
hello: my data looks like:

time1  time2  event  catagoria
2004   2006   1      C
2004   2005   0      C
2005   2010   1      E
2007   2009   1      C
2006   2007   0      E
2008   2010   0      C
2008   2010   1      E
...

and the census interval is 1 year. I have tried this
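For data of this shape, one standard route is Surv(type = "interval2"): an event is only known to have happened somewhere between time1 and time2, and rows with event == 0 are right-censored. A minimal sketch under my own reading of the columns, with an intercept-only Weibull fit just to show the mechanics:

library(survival)

# the seven rows shown in the post
d <- data.frame(
  time1 = c(2004, 2004, 2005, 2007, 2006, 2008, 2008),
  time2 = c(2006, 2005, 2010, 2009, 2007, 2010, 2010),
  event = c(1, 0, 1, 1, 0, 0, 1),
  catagoria = c("C", "C", "E", "C", "E", "C", "E")
)

# rescale to years since 2003 so the times are small and positive
left  <- ifelse(d$event == 1, d$time1, d$time2) - 2003
right <- ifelse(d$event == 1, d$time2 - 2003, NA)   # NA marks right-censoring

S <- Surv(time = left, time2 = right, type = "interval2")
survreg(S ~ 1, dist = "weibull")  # covariates such as catagoria can be added to the formula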
2010 Nov 15
3
merge two dataset and replace missing by 0
Hi R users,

I have two data sets (X1, X2). For example,

time1 <- c(0, 8, 15, 22, 43, 64, 85, 106, 127, 148, 169, 190, 211)
outpue1 <- c(171, 164, 150, 141, 109, 73, 47, 26, 15, 12, 6, 2, 1)
X1 <- cbind(time1, outpue1)
time2 <- c(0, 8, 15, 22, 43, 64, 85, 106, 148)
output2 <- c(5, 5, 4, 5, 5, 4, 1, 2, 1)
X2 <- cbind(time2, output2)

I want to
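One way to do what the subject line asks for, reusing the objects defined above: merge on the time columns, then set the unmatched output2 values to 0.

# merge on the time columns, keeping every time point from X1
m <- merge(as.data.frame(X1), as.data.frame(X2),
           by.x = "time1", by.y = "time2", all.x = TRUE)

# replace the missing output2 values by 0
m$output2[is.na(m$output2)] <- 0
m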
2007 Jan 19
4
Newbie question: Statistical functions (e.g., mean, sd) in a "transform" statement?
Greetings listeRs -

Given a data frame such as

times
      time1    time2     time3    time4
1 70.408543 48.92378  7.399605 95.93050
2 17.231940 27.48530 82.962916 10.20619
3 20.279220 10.33575 66.209290 30.71846
4        NA 53.31993 12.398237 35.65782
5  9.295965       NA 48.929201       NA
6 63.966518 42.16304  1.777342       NA

one can use "transform" to
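A sketch of one way to use statistical functions inside transform(), with na.rm = TRUE so the NA cells do not poison the row summaries; the data frame is reconstructed from the printout above and the new column names are mine:

times <- data.frame(
  time1 = c(70.408543, 17.231940, 20.279220, NA, 9.295965, 63.966518),
  time2 = c(48.92378, 27.48530, 10.33575, 53.31993, NA, 42.16304),
  time3 = c(7.399605, 82.962916, 66.209290, 12.398237, 48.929201, 1.777342),
  time4 = c(95.93050, 10.20619, 30.71846, 35.65782, NA, NA)
)

transform(times,
          row.mean = rowMeans(times, na.rm = TRUE),
          row.sd   = apply(times, 1, sd, na.rm = TRUE))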
2004 Jan 07
2
Survival, Kaplan-Meier, left truncation
Dear all, I have data from 1970 to 1990 for people above age 50. Now I want to calculate survival curves by age, starting at age 50, using the Kaplan-Meier estimator. The problem I have is that there are already people in 1970 who are older than 50 years. I guess this is called delayed entry or left truncation (?). I thought the code would be: roland <- survfit(Surv(time=age.enter,
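A minimal sketch of the counting-process (delayed-entry) form of Surv() that handles left truncation; the variable names follow the post, but the toy data are mine:

library(survival)

# toy data: age at entry into the study, age at exit, and event indicator
d <- data.frame(
  age.enter = c(50, 55, 62, 50, 58, 66),
  age.exit  = c(63, 70, 71, 57, 77, 68),
  event     = c(1, 0, 1, 1, 0, 1)
)

# each person contributes only over (age.enter, age.exit], so people who were
# already older than 50 in 1970 simply enter the risk set late
roland <- survfit(Surv(time = age.enter, time2 = age.exit, event = event) ~ 1, data = d)
summary(roland)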
2008 Apr 09
2
fuzzy merge
Hi,

I would like to merge two data frames. It is just that I want the merging to be done with some kind of a fuzzy criterion. Let me explain. My first data frame looks like this:

ID1             time1  dt
  1  2008-01-02 13:11  10
  2  2008-01-02 14:20  20
  3
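A base-R sketch of a nearest-time ("fuzzy") match; the two frames, the 10-minute tolerance, and the column names ID2/time2 are all made up for illustration (packages such as data.table offer rolling joins for the same task):

a <- data.frame(ID1 = 1:3,
                time1 = as.POSIXct(c("2008-01-02 13:11", "2008-01-02 14:20",
                                     "2008-01-02 15:05")))
b <- data.frame(ID2 = 101:103,
                time2 = as.POSIXct(c("2008-01-02 13:09", "2008-01-02 14:26",
                                     "2008-01-02 17:00")))

# for each row of a, find the closest time in b
idx <- vapply(seq_along(a$time1), function(i)
  which.min(abs(difftime(b$time2, a$time1[i], units = "mins"))), integer(1))

# keep the match only if it falls within the tolerance
ok <- abs(difftime(b$time2[idx], a$time1, units = "mins")) <= 10
a$ID2   <- ifelse(ok, b$ID2[idx], NA)
a$time2 <- b$time2[ifelse(ok, idx, NA)]
a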
2009 Jun 29
2
Add ID numbers on a plot
Dear List,

I have (for example) 50 observations collected from 50 experimental sites and want to look at changes in the 50 observations as a function of time in a graph. I found that I could do that using the R code below:

time2 <- 1:25
y1 <- rnorm(25, mean = 0, sd = 1)
y2 <- rnorm(25, mean = 0, sd = 1)
...
y50 <- rnorm(25, mean = 2, sd = 1)
plot(time2, y1, type = 'b', xlim = range(0, 30), ylim = range(y1, y2),
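A more compact sketch of the same idea: hold the 50 series in a matrix, draw them with matplot(), and write each site's ID number at the right-hand end of its line (the simulated matrix below is mine):

set.seed(42)
time2 <- 1:25
Y <- sapply(1:50, function(i) rnorm(25, mean = i / 25, sd = 1))  # 25 x 50 matrix of observations

matplot(time2, Y, type = "l", lty = 1, col = "grey60",
        xlim = c(0, 30), xlab = "time", ylab = "observation")

# label each line with its site ID just past the last time point
text(x = max(time2) + 1, y = Y[nrow(Y), ], labels = 1:50, cex = 0.6)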
2014 Aug 25
5
Problem with fields that have a date format
Hi Javier,

Thank you very much for replying so quickly! I work on Mac OS X 10.9.4, RStudio version 0.98.953, R version 3.0.2 (2013-09-25).

## This is the script I am working on. It is a routine to automate the calculation of the event duration.
setwd("/Users/angelacamargosanabria/Documents/ANGELITA/1-DOC/1-TESIS/4-PAPERS/1-Mamiferos/DATOS/Bases")
BASE <-
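A small sketch of the kind of event-duration calculation being described; the column names (inicio, fin, duracion) and the date format are assumptions, not taken from the original script:

# hypothetical records with start and end date-times stored as text
BASE <- data.frame(inicio = c("2014-03-01 10:15", "2014-03-02 22:40"),
                   fin    = c("2014-03-01 11:05", "2014-03-03 00:10"))

# parse the text fields as date-times, then take the difference
BASE$inicio <- as.POSIXct(BASE$inicio)
BASE$fin    <- as.POSIXct(BASE$fin)
BASE$duracion <- difftime(BASE$fin, BASE$inicio, units = "mins")
BASE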
2010 Sep 02
2
date
Hello all,

I have two strings representing the start and end values of a date and time. For example,

time1 <- c("21/04/2005", "23/05/2005", "11/04/2005")
time2 <- c("15/07/2009", "03/06/2008", "15/10/2005")
as.difftime(time1, time2)
Time differences in secs
[1] NA NA NA
attr(,"tzone")
[1] ""

How can I
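The usual fix is to parse the strings as dates first and only then take the difference; a sketch using the vectors from the post:

time1 <- as.Date(c("21/04/2005", "23/05/2005", "11/04/2005"), format = "%d/%m/%Y")
time2 <- as.Date(c("15/07/2009", "03/06/2008", "15/10/2005"), format = "%d/%m/%Y")

# difference in days between the parsed dates
difftime(time2, time1, units = "days")
# or simply
time2 - time1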
2010 Jan 16
2
Extracing only Unique Rows based on only 1 Column
To Whomever is Interested,

I have spent several days searching the web, help files, the R wiki and the archives of this mailing list for a solution to this problem, but nonetheless I apologize in advance if I have missed something obvious. The problem is this: I have a 5-column data frame with about 4.2 million rows, and want to create a new (and hopefully much smaller) data frame that
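A sketch of the standard idiom for keeping only the first row per value of a single column; the small data frame and the key column ID are made up:

df <- data.frame(ID = c(1, 1, 2, 3, 3, 3),
                 a = rnorm(6), b = rnorm(6), c = rnorm(6), d = rnorm(6))

# keep the first row for each distinct ID
df.small <- df[!duplicated(df$ID), ]
df.small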
2014 Oct 16
2
Heatmap of unemployment (or something else) in Spain
Hi Pedro.

The INE changed the microdata files not long ago; here is how it would be done now (using MicroDatosEs). What changes is the recoding function. http://rpubs.com/joscani/unemplrate

On 15/10/14, Carlos Ortega wrote:
> Hi Pedro,
>
> I just remembered that a little while ago José Luis Cañadas (who takes part in this list) published a link of his to a
2015 Jun 23
3
Plans to improve reference classes?
Couple of requests:

1) Is there any example or writeup on the difficulties of extending reference classes across packages? Just so I can fully understand the issues.

2) In what sorts of situations does the performance of reference classes cause problems? Sure, it's an order of magnitude slower than constructing a simple environment, but those timings are in microseconds, so one would need a
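A rough sketch of the kind of timing comparison the second point alludes to; the toy class is mine, and the exact numbers will vary by machine and R version:

# a trivial reference class with one field
Counter <- setRefClass("Counter", fields = list(n = "numeric"))

# instantiating the reference class vs. building a bare environment
system.time(for (i in 1:10000) Counter$new(n = 0))
system.time(for (i in 1:10000) { e <- new.env(); e$n <- 0 })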