similar to: simple interactions

Displaying 20 results from an estimated 40000 matches similar to: "simple interactions"

2016 Apr 15
0
simple interactions
Dear Terry, Does fitting group + age:group instead of age*group solve your problem? Best regards, ir. Thierry Onkelinx Instituut voor natuur- en bosonderzoek / Research Institute for Nature and Forest team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance Kliniekstraat 25 1070 Anderlecht Belgium To call in the statistician after the experiment is done may be no more
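The suggestion concerns two parameterizations of the same interaction model. A minimal sketch with a made-up data frame (the names dat, age, group, and y are illustrative assumptions, not from the thread):

dat <- data.frame(age   = rep(20:29, 2),
                  group = factor(rep(c("A", "B"), each = 10)),
                  y     = rnorm(20))
coef(lm(y ~ age * group,       data = dat))   # common age slope plus an interaction offset
coef(lm(y ~ group + age:group, data = dat))   # a separate age slope within each group

Both formulas span the same model space and give identical fitted values; only the coefficient layout differs.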
2013 Apr 24
2
Trouble Computing Type III SS in a Cox Regression
I should hope that there is trouble, since "type III" is an undefined concept for a Cox model. Since SAS Inc fostered the cult of type III they have recently added it as an option for phreg, but I am not able to find any hints in the phreg documentation of what exactly they are doing when you invoke it. If you can unearth this information, then I will be happy to tell you whether
2008 Nov 24
1
Discrepancy in the PBC data set
The data set in R is wrong. I've found mistakes on 2 lines in a quick look. I don't know if the data is incorrect in the Appendix of Fleming and Harrington as well (someone seems to have borrowed my copy), which is where the data set appears to have been taken from, given all the "-9" codes in it. (Note, Tom Fleming originally got the data from me, so I'm fairly
2009 Aug 01
2
Cox ridge regression
Hello, I have questions regarding penalized Cox regression using the survival package (functions coxph() and ridge()). I am using R 2.8.0 on Ubuntu Linux and survival package version 2.35-4. Question 1. Consider the following example from help(ridge):
> fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta=1), ovarian)
As I understand, this builds a model in which `rx' is
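For context, a minimal sketch that sets the penalized fit next to an ordinary Cox fit on the same data (the comparison is an illustration, not part of the thread):

library(survival)
fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta = 1), data = ovarian)
fit0 <- coxph(Surv(futime, fustat) ~ rx + age + ecog.ps, data = ovarian)
coef(fit1)   # rx unpenalized; age and ecog.ps shrunk by the ridge penalty
coef(fit0)   # ordinary unpenalized fit, for comparison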
2015 Apr 23
3
model frames and update()
This issue has arisen within my anova.coxph routine, but is as easily illustrated with glm.
testdata <- data.frame(y= 1:5, n= c(8,10,6,20,14), sex = c(0,1,0,1,1), age = c(30,20,35,25,40))
fit <- glm(cbind(y,n) ~ age + sex, binomial, data=testdata, model=TRUE)
saveit <- fit$model
update(fit, .~. - age, data=saveit)
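A minimal sketch of where the update() call trips up, with a workaround that applies to this toy example (not necessarily to the anova.coxph use case):

names(saveit)                               # "cbind(y, n)" "age" "sex" -- the response is stored
                                            # under its expression, so 'y' and 'n' are not columns
                                            # and the update() above cannot rebuild cbind(y, n)
update(fit, . ~ . - age, data = testdata)   # refitting from the original data frame works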
2008 Jun 16
1
Re: cch() and coxph() for case-cohort
I tried to compare whether cch() and coxph() generate the same result for the same case-cohort data, using the standard data set from cch(): nwtco. The cch() fit knows the cohort size of 4028, while ccoh.data has only 1154 rows after selection, and coxph() carries no information about the cohort size of 4028. The point estimates from coxph() and cch() are the same, but the lower and upper CI and p-value are a little different. Can we
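The comparison described is essentially the standard cch() help-page example set against a naive coxph() fit on the selected rows; a minimal sketch, with the variable recoding assumed from that help page (the naive coxph() ignores the case-cohort sampling, which is why its standard errors differ):

library(survival)
ccoh.data <- nwtco[nwtco$rel == 1 | nwtco$in.subcohort == 1, ]
ccoh.data$subcohort <- ccoh.data$in.subcohort
ccoh.data$histol <- factor(ccoh.data$histol, labels = c("FH", "UH"))
ccoh.data$stage  <- factor(ccoh.data$stage,  labels = c("I", "II", "III", "IV"))
ccoh.data$age    <- ccoh.data$age / 12               # age in years
fit.cch <- cch(Surv(edrel, rel) ~ stage + histol + age, data = ccoh.data,
               subcoh = ~subcohort, id = ~seqno, cohort.size = 4028)
fit.cox <- coxph(Surv(edrel, rel) ~ stage + histol + age, data = ccoh.data)
summary(fit.cch)   # Prentice estimator; uses the full cohort size of 4028
summary(fit.cox)   # naive fit on the 1154 selected rows, ignoring the sampling design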
2018 Jan 09
1
resolving a names conflict
The survival package uses a generalized Cholesky decomposition throughout. If A is a symmetric matrix, A = LDL' where L is lower triangular with 1s on the diagonal, D is diagonal, and D[i,i] = 0 if column i of A is redundant. Being able to read the rank and dependencies directly off of D is very handy. The bdsmatrix package uses the same, but exposes it to the user as gchol and solve
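A minimal sketch of that decomposition using gchol() from bdsmatrix (the small matrix below is made up to show a redundant column):

library(bdsmatrix)
A <- matrix(c(4, 2, 2,
              2, 2, 2,
              2, 2, 2), nrow = 3)   # columns 2 and 3 are identical, so A has rank 2
gfit <- gchol(A)
diag(gfit)        # D = (4, 1, 0): the zero flags the redundant column
as.matrix(gfit)   # L: the unit lower-triangular factor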
2011 May 26
5
Survival: pyears and ratetable: expected events
Dear all, I am having a (really) hard time getting pyears to work together with a ratetable to give me the number of expected events (deaths). I have the following data:
dos, date of surgery, as.Date
dof, date of last follow-up, as.Date
sex, gender, as.factor (female, male)
ev, event (death), 0 = censored at time point dof, 1 = death at time point dof
Could someone
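A minimal sketch of one way to get expected deaths from a population rate table, assuming a hypothetical data frame mydata with the variables above plus a date of birth dob (a rate table such as survexp.us also needs age and calendar year, which is part of what makes this fiddly):

library(survival)
pfit <- pyears(Surv(as.numeric(dof - dos), ev) ~ sex,
               data      = mydata,
               ratetable = survexp.us,
               rmap      = list(age  = as.numeric(dos - dob),   # age in days at surgery
                                sex  = sex,
                                year = dos),
               scale     = 365.25)
pfit$expected   # expected number of deaths per cell, from the rate table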
2010 Nov 11
3
Evaluation puzzle
The survexp function can fail when called from another function. The "why" of this has me baffled, however. Here is a simple test case, using a very stripped down version of survexp:
survexp.test <- function(formula, data, weights, subset, na.action, rmap, times,
                         cohort=TRUE, conditional=FALSE, ratetable=survexp.us,
                         scale=1, npoints, se.fit,
2008 Mar 12
1
survival analysis and censoring
In your particular case I don't think that censoring is an issue, at least not for the reason that you discuss. The basic censoring assumption in the Cox model is that subjects who are censored have the same future risk as those who were a. not censored and b. have the same covariates. The real problem with informative censoring is the covariates that are not in the model; ones that
2008 Jun 12
1
cch function and time dependent covariates
----- begin included message In a case-cohort study, we can fit a proportional hazards regression model to case-cohort data. In R, the function is cch() in the survival package. Now I am working on a case-cohort analysis with time-dependent covariates using cch() from the survival package. I wonder whether cch() provides this capability. The cch() manual does not say whether a time-dependent covariate is
2010 Mar 05
2
Defining a method in two packages
The coxme package has a ranef() method, as does lme4. I'm having trouble getting them to play together, as shown below. (The particular model in the example isn't defensible, but uses a standard data set.) The problem is that most of the time only one of lme4 or coxme will be loaded, so each needs to define the basic ranef function as well as a method for it. But when loaded together
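A minimal sketch of the pattern being described, where each package needs both the generic and its own method (the accessor used in the method body is an assumption for illustration, not coxme's actual code):

## define the generic only if no attached package has already done so
if (!exists("ranef", mode = "function")) {
  ranef <- function(object, ...) UseMethod("ranef")
}
## the package's own method, dispatched on its fit class
ranef.coxme <- function(object, ...) object$frail   # 'frail' component assumed here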
2011 Jan 24
1
How to measure/rank ?variable importance when using rpart?
--- included message ---- Thus, my question is: *What common measures exist for ranking/measuring variable importance of participating variables in a CART model? And how can this be computed using R (for example, when using the rpart package)* ---end ----
Consider the following printout from rpart:
summary(rpart(time ~ age + ph.ecog + pat.karno, data=lung))
Node number 1: 228 observations,
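Besides reading the split improvements off the summary() printout, a minimal sketch of pulling an importance measure directly from the fitted object (newer rpart versions store this component; treat its availability as an assumption for older versions):

library(rpart)
library(survival)                      # for the lung data set
fit <- rpart(time ~ age + ph.ecog + pat.karno, data = lung)
fit$variable.importance                # importance summed over primary and surrogate splits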
2024 Feb 07
2
Difficult debug
I haven't done any R memory debugging lately, but https://www.mail-archive.com/rcpp-devel at lists.r-forge.r-project.org/msg10289.html shows how I used to have gdb break where valgrind finds a problem so you could examine the details. Also, running your code after running gctorture(TRUE) can help track down memory problems. -Bill On Wed, Feb 7, 2024 at 12:03 PM Therneau, Terry M., Ph.D.
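A minimal sketch of the gctorture() suggestion (the test-script name is a placeholder):

gctorture(TRUE)        # garbage-collect at (nearly) every allocation, so memory
source("mytest.R")     # errors surface close to the code that causes them; very slow
gctorture(FALSE)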
2024 Feb 07
2
Difficult debug
I've hit a roadblock debugging a new update to the survival package. I do debugging in a development environment, i.e. I don't create and load a package but rather source all the .R files and dyn.load an .so file, which makes things a bit easier. Running with R -d "valgrind --tool=memcheck --leak-check=full" one of my test files crashes in simple R code a dozen lines
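A minimal sketch of that development setup, assuming the package sources sit under R/ and the compiled code under src/ (the file names are placeholders):

## source every .R file and load the compiled code without installing the package
for (f in list.files("R", pattern = "\\.R$", full.names = TRUE)) source(f)
dyn.load("src/survival.so")
## then, from the shell, run a test file under valgrind:
##   R -d "valgrind --tool=memcheck --leak-check=full" -f mytest.R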
2006 May 12
2
reusing routines
I've created some Splus code for a microarray problem that
- needed to be in C, to take advantage of some sparse matrix properties
- uses a Cholesky decomposition as part of the computation
For the Cholesky, I used the cholesky2 routine, which is a part of the survival library. It does just what I want and I'm familiar with it (after all, I wrote it). In Splus, this all works
2011 Apr 18
2
help with eval()
I've narrowed my scope problems with predict.coxph further. Here is a condensed example:
fcall3 <- as.formula("time ~ age")
dfun3 <- function(dcall) {
    fit <- lm(dcall, data=lung, model=FALSE)
    model.frame(fit)
}
dfun3(fcall3)
The final call fails: it can't find 'dcall'. The relevant code in model.frame.lm is:
env <- environment(formula$terms)
2020 Sep 25
1
Extra "Note" in CRAN submission
When I run R CMD check on the survival package I invariably get a note:
...
* checking for file 'survival/DESCRIPTION' ... OK
* this is package 'survival' version '3.2-6'
* checking CRAN incoming feasibility ... NOTE
Maintainer: 'Terry M Therneau <therneau.terry at mayo.edu>'
...
This is sufficient for the auto-check process to return the following failure message: Dear maintainer,
2006 Sep 05
3
terms.inner
Question: I am trying to implement a function in R that we use quite regularly in Splus, and it fails due to a lack of the "terms.inner" function in R. What is the substitute? Part question and part soapbox: Why remove terms.inner from R? It's little used, but rather innocuous. Mostly soapbox: I figured it was no big deal, as I originally discovered the use of terms.inner from
2014 Mar 20
2
The case for freezing CRAN
There is a central assertion to this argument that I don't follow:
> At the end of the day most published results obtained with R just won't be reproducible.
This is a very strong assertion. What is the evidence for it? I write a lot of Sweave/knitr in house as a way of documenting complex analyses, and a glm() based logistic regression looks the same yesterday as it will