search for: statug

Displaying 10 results from an estimated 10 matches for "statug".

2010 Jul 17
0
Adjustment for multiple comparisons for the log-rank test
Dear experts, I was asked for a log-rank pairwise survival comparison. I have a straightforward way to do this in the SAS system: http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#/documentation/cdl/en/statug/63033/HTML/default/statug_lifetest_sect019.htm What I've found in R is shown below, but I suppose it is not a log-rank test (the documentation talks about "Tukey pairwise comparisons"). Is it possible to carry out a ...
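In R, pairwise log-rank tests with a multiplicity adjustment can be assembled from survival::survdiff. A minimal sketch, assuming a data frame dat with columns time, status and a factor group (the survminer package's pairwise_survdiff() wraps the same idea):

    library(survival)

    grps  <- levels(dat$group)
    pairs <- combn(grps, 2, simplify = FALSE)

    # One log-rank test per pair of groups
    pvals <- sapply(pairs, function(g) {
      sub <- droplevels(subset(dat, group %in% g))
      sd  <- survdiff(Surv(time, status) ~ group, data = sub)
      pchisq(sd$chisq, df = 1, lower.tail = FALSE)
    })
    names(pvals) <- sapply(pairs, paste, collapse = " vs ")

    # Multiplicity adjustment (Bonferroni, Holm, BH, ...)
    p.adjust(pvals, method = "bonferroni")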
2012 Apr 07
1
quadratic model with plateau
Dear All, I would like to fit a quadratic model with a plateau in R. Is there a package that does this? The bentcableAR package does not seem to work. The link below describes exactly what I am looking for in R: http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_nlin_sect033.htm -- Thanks so much! Orange help.ly2005@gmail.com
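A sketch of the usual quadratic-plateau fit with nls(), in the spirit of the SAS NLIN example; the data frame dat (columns x and y) and the starting values are placeholders:

    # Quadratic up to the vertex x0 = -b / (2 * c), flat plateau afterwards
    quadplat <- function(x, a, b, c) {
      x0 <- -b / (2 * c)
      ifelse(x < x0, a + b * x + c * x^2, a + b * x0 + c * x0^2)
    }

    fit <- nls(y ~ quadplat(x, a, b, c), data = dat,
               start = list(a = 0.4, b = 0.05, c = -0.001))  # pick starts from a plot of the data
    summary(fit)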
2013 Apr 30
1
Mixed Modeling in lme4
Hi All, I am trying to shift from running mixed models in SAS with PROC MIXED to the lme4 package in R. While trying to match the coefficients in the R output to those in the SAS output, I ran into this problem. The dataset I am using is this one: http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_mixed_sect034.htm If I run the following code: proc mixed data=rc method=ML covtest; class Batch; model Y = Month / s; random Int Month / type=cs sub=Batch s; run; the fixed-effect coefficients match those from R, but the random effects do not. Here is the...
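One likely explanation: type=cs puts a compound-symmetry structure on the G matrix, which lmer() does not offer (it fits an unstructured or diagonal G). A sketch of a closer match using nlme instead, assuming the same rc data frame:

    library(nlme)

    # Compound-symmetric covariance for the random intercept and slope,
    # mirroring `random Int Month / type=cs sub=Batch`; ML to match method=ML
    fit <- lme(Y ~ Month,
               random = list(Batch = pdCompSymm(~ Month)),
               data = rc, method = "ML")
    summary(fit)

    # For comparison, the unstructured-G lme4 fit:
    # lme4::lmer(Y ~ Month + (1 + Month | Batch), data = rc, REML = FALSE)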
2013 Jan 24
4
Difference between R and SAS in Concordance index in ordinal logistic regression
lrm does some binning to make the calculations faster. The exact calculation is obtained by running f <- lrm(...) followed by rcorr.cens(predict(f), DA), which results in:

         C Index            Dxy           S.D.              n        missing
      0.96814404     0.93628809     0.03808336    32.00000000     0.00000000
      uncensored Relevant Pairs     Concordant      Uncertain
     32.00000000
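As a self-contained sketch (the formula and data frame are placeholders; DA is the ordinal response from the thread):

    library(rms)    # lrm
    library(Hmisc)  # rcorr.cens

    f <- lrm(DA ~ x1 + x2, data = dat)   # hypothetical predictors
    f$stats["C"]                         # fast, binned C reported by lrm
    rcorr.cens(predict(f), dat$DA)       # exact C index and Somers' Dxy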
2005 May 18
4
standardization
SAS Enterprise Miner recommends standardizing with X / STDEV(X) rather than [X - mean(X)] / STDEV(X). Any thoughts on this? Pros? Cons? Philip
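The two recipes differ only in whether the variable is centred before scaling; a small illustration on toy data:

    x <- rnorm(100, mean = 50, sd = 10)

    z1 <- (x - mean(x)) / sd(x)   # classic z-score: centre, then scale
    z2 <- x / sd(x)               # scale only, as Enterprise Miner suggests

    # scale() reproduces both
    all.equal(as.vector(scale(x)), z1)
    all.equal(as.vector(scale(x, center = FALSE, scale = sd(x))), z2)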
2009 Jan 19
1
candisc
Hello, I have a question regarding the candisc package. My data are:

    species three five
    1 2.95 6.63
    1 2.53 7.79
    1 3.57 5.65
    1 3.16 5.47
    2 2.58 4.46
    2 2.16 6.22
    2 3.27 3.52

I put these in a table and then fit a linear model

    > newdata <- lm(cbind(three, five) ~ species, data = rawdata)

and then do a candisc on them

    > candata <- candisc(newdata)
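A sketch of how that analysis usually continues, assuming the small data frame above is called rawdata:

    library(candisc)

    rawdata$species <- factor(rawdata$species)  # the grouping variable must be a factor
    newdata <- lm(cbind(three, five) ~ species, data = rawdata)
    candata <- candisc(newdata)

    candata              # canonical correlations and dimension tests
    candata$structure    # structure (variable-canonical) correlations
    plot(candata)        # scores on the canonical dimension(s)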
2009 Sep 08
1
Confidence intervals for nls predictions
Hello all, I'm trying to establish confidence intervals on predictions I am making with >predict(nls(...)), and predict.nls (unfortunately) does not support the se.fit option. A little more background: I am trying to match the output of older SAS routines to maintain consistency. Because predict.nls does not provide standard errors for individual predictions, I have been using a
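Absent se.fit, a common workaround is the delta method, using the gradient of the mean function together with vcov(fit). A sketch with a placeholder exponential model and data frame dat (the investr package's predFit() offers packaged intervals for nls fits as an alternative):

    # Delta-method confidence intervals for nls predictions
    fit  <- nls(y ~ a * exp(b * x), data = dat, start = list(a = 5, b = -0.1))
    beta <- coef(fit)
    V    <- vcov(fit)

    newx <- seq(min(dat$x), max(dat$x), length.out = 100)
    pred <- beta["a"] * exp(beta["b"] * newx)

    # Gradient of the mean function with respect to (a, b) at each new x
    G  <- cbind(exp(beta["b"] * newx),
                beta["a"] * newx * exp(beta["b"] * newx))
    se <- sqrt(rowSums((G %*% V) * G))          # diag(G %*% V %*% t(G))

    tcrit <- qt(0.975, df.residual(fit))
    data.frame(x = newx, fit = pred,
               lower = pred - tcrit * se,
               upper = pred + tcrit * se)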
2011 Apr 09
2
Orthoblique rotation on eigenvectors (SAS VARCLUS)
Hi All, I'd like to build a package for the community that replicates the output produced by SAS "proc varclus". According to the SAS documentation, the first few steps are: 1. Find the first two principal components. 2. Perform an orthoblique rotation (quartimax rotation) on eigenvectors. 3. Assign each variable to the rotated component with which it has the higher squared
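A sketch of those first steps, assuming a numeric data matrix X and using GPArotation for the quartimax rotation (note this rotates the component loadings; whether SAS rotates raw eigenvectors or scaled loadings will affect the exact numbers):

    library(GPArotation)   # quartimax()

    p <- prcomp(X, center = TRUE, scale. = TRUE)              # step 1: first principal components
    L <- p$rotation[, 1:2] %*% diag(p$sdev[1:2])              # loadings of PC1 and PC2

    rot <- quartimax(L)                                       # step 2: orthogonal quartimax rotation
    cluster <- apply(unclass(rot$loadings)^2, 1, which.max)   # step 3: assign by larger squared loading
    split(colnames(X), cluster)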
2012 Apr 13
3
Kaplan Meier analysis: 95% CI wider in R than in SAS
Hello All, I am replicating in R an analysis I did earlier in SAS. I see this as a test of whether I'm ready to start using R in my day-to-day work. I have just finished replicating a Kaplan-Meier analysis. Everything seems to work out fine except for one thing: the 95% CI around my estimate of the median is substantially wider in R than in SAS. For example, in SAS I have a median of 3.29 with a
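One common source of that difference is the confidence-interval transform: survfit() defaults to conf.type = "log", while PROC LIFETEST defaults to a log-log transform. A sketch, with dat, time and status as placeholders:

    library(survival)

    fit_r   <- survfit(Surv(time, status) ~ 1, data = dat)                         # R default: "log"
    fit_sas <- survfit(Surv(time, status) ~ 1, data = dat, conf.type = "log-log")  # SAS-style
    print(fit_r)
    print(fit_sas)   # compare the median and its 95% CI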
2006 Apr 06
5
pros and cons of "robust regression"? (i.e. rlm vs lm)
Can anyone comment on, or point me to a discussion of, the pros and cons of robust regression versus a more "manual" approach of trimming outliers and/or "normalizing" the data used in a regression analysis?
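For a concrete comparison, a small sketch with MASS::rlm on simulated data containing one gross outlier:

    library(MASS)   # rlm

    set.seed(1)
    dat <- data.frame(x = 1:30)
    dat$y <- 2 + 0.5 * dat$x + rnorm(30)
    dat$y[30] <- 40                      # contaminate one observation

    fit_ols <- lm(y ~ x, data = dat)
    fit_rob <- rlm(y ~ x, data = dat, psi = psi.huber)   # Huber M-estimation

    cbind(OLS = coef(fit_ols), Huber = coef(fit_rob))
    round(fit_rob$w, 2)   # weights near 0 mark points rlm downweighted, instead of manual trimming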