Messages similar to: sub() and gsub() (PR#1826)

Displaying 20 results from an estimated 10000 matches similar to: "sub() and gsub() (PR#1826)"

2003 Jan 24 (2 messages): hist() with option "sub" (PR#2492)
Full_Name: Jerome Asselin Version: 1.6.2 OS: redhat linux 7.2 Submission from: (NULL) (142.103.173.179) This is certainly not a big problem, but should there really be a warning message when I run this? > x <- c(1,1,2,2,2,2,3,3) > hist(x,sub="Sub Title") Warning messages: 1: parameter "sub" couldn't be set in high-level plot() function 2: parameter
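A possible workaround, not from the original report: draw the histogram first and add the subtitle with title(), which avoids passing 'sub' through the high-level call.

    x <- c(1, 1, 2, 2, 2, 2, 3, 3)
    hist(x)                    # draw the histogram without 'sub'
    title(sub = "Sub Title")   # add the subtitle afterwards; no warning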
2003 Feb 24 (2 messages): "trace" argument in legend() (PR#2578)
Full_Name: Jerome Asselin Version: 1.6.2 OS: RedHat Linux 7.2 Submission from: (NULL) (142.103.173.179) Should be an easy fix... Consider the example below: plot(0,0) legend(0,0,c("Hello!","Hi!"),pch=1:2,lty=1:2,trace=T) It gives the following trace: > plot(0,0) > legend(0,0,c("Hello!","Hi!"),pch=1:2,lty=1:2,trace=T) xchar= 0.05178 ;
2002 Nov 22 (1 message): Segmentation fault using "survival" package (PR#2320)
Full_Name: Jerome Asselin Version: 1.6.1 OS: RedHat Linux 7.2 Submission from: (NULL) (142.103.173.179) Hello, I get a segmentation fault when I run the following code. I wouldn't expect meaningful results because my response variable contains only missing values. However, I would expect something like a regular error (not a segmentation fault). library(survival) data <-
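The reporter's code is truncated above. A hypothetical sketch of the same kind of situation (an all-missing response passed to a survival fit); the data here are invented, and in current R the call stops with an ordinary error rather than crashing:

    library(survival)
    time   <- c(NA, NA, NA, NA)                # response contains only missing values
    status <- c(1, 1, 0, 1)
    x      <- c(0.5, 1.2, 0.3, 2.0)
    fit <- try(coxph(Surv(time, status) ~ x))  # expect an error, not a segfault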
2003 Mar 12 (1 message): plot() with type="s" and lty=2 (PR#2630)
Full_Name: Jerome Asselin Version: 1.6.2 OS: RedHat Linux 7.2 Submission from: (NULL) (142.103.173.179) In the following example, the line type lty=2 does not show properly across the entire line. x <- c(seq(0,.5,.001),seq(.6,1,.1)) y <- rep(1,length(x)) plot(x,y,type="s",lty=2) Sincerely, Jerome Asselin
2003 Jan 24 (1 message): table() with option "exclude=NULL" (PR#2491)
Full_Name: Jerome Asselin Version: 1.6.2 OS: redhat linux 7.2 Submission from: (NULL) (142.103.173.179) Bug or feature? Hard to say... But it sure would be nice if table() would label the frequency of NA's as it does for NaN's. Regards, Jerome > table(c(2,NA,1,1,1),exclude=NULL) 1 2 3 1 1 > table(c(2,NA,1,1,1,NaN),exclude=NULL) 1 2 NaN 3 1 1 1 #For sake of
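A sketch of two ways to get a labelled NA count; the useNA argument postdates the R version in the report but is the direct solution in current R:

    x <- c(2, NA, 1, 1, 1)
    table(factor(x, exclude = NULL))   # NA becomes an explicit, labelled level
    table(x, useNA = "ifany")          # current R: label the NA count directly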
2003 Jul 15 (2 messages): "na.action" parameter in princomp() (PR#3481)
Full_Name: Jerome Asselin Version: 1.7.1 OS: Red Hat Linux 7.2 Submission from: (NULL) (24.77.125.119) Setting the parameter na.action=na.omit should remove incomplete records in princomp. However this does not seem to work as expected. See example below. Sincerely, Jerome Asselin data(USArrests) princomp(USArrests, cor = TRUE) #THIS WORKS USArrests[1,3] <- NA princomp(USArrests, cor =
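A minimal sketch of the usual workaround: drop the incomplete rows explicitly with na.omit() instead of relying on the na.action argument.

    data(USArrests)
    USArrests[1, 3] <- NA
    princomp(na.omit(USArrests), cor = TRUE)   # incomplete rows removed up front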
2003 Oct 24 (2 messages): Segmentation fault in .Call() (PR#4761)
Full_Name: Jerome Asselin Version: 1.8.0 OS: RedHat Linux 7.2 Submission from: (NULL) (142.103.177.13) I would not expect a segmentation fault; perhaps an error message. > .Call("log") Segmentation fault This is always reproducible for me. Sincerely, Jerome Asselin
2003 Jun 05 (2 messages): Fwd: Re: legend() with option adj=1
Is there a simpler way than the one that was posted here? I'm not very proficient with legend, and I don't understand this solution. All I have is two or more lines on one plot that I want to put a legend on, and I can't figure out how to do it from the examples. Can you give a very simple example? It does not have to be fancy!! I have never worked with a
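A minimal example of the kind requested, using current R's keyword positioning for legend(); the data are made up:

    x <- 1:10
    plot(x, x, type = "l", lty = 1, col = "black", ylab = "y")
    lines(x, 2 * x, lty = 2, col = "red")
    legend("topleft", legend = c("y = x", "y = 2x"),
           lty = c(1, 2), col = c("black", "red"))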
2003 Aug 07 (2 messages): segmentation fault: formula() with long variable names (PR#3680)
R version: 1.7.1 OS: Red Hat Linux 7.2 In this example, I would expect an error for the overly long variable name. This is always reproducible for me. > formula(paste("y~",paste(rep("x",50000),collapse=""))) Segmentation fault Sincerely, Jerome Asselin -- Jerome Asselin (Jérôme), Statistical Analyst British Columbia Centre for Excellence in HIV/AIDS St.
2004 Jan 15 (1 message): random effects with lme() -- comparison with lm()
Hi all, In the (very simple) example below, I have defined a random effect for the residuals in lme(). So the model is equivalent to a FIXED effect model. Could someone explain to me why lme() still gives two standard deviation estimates? I would expect lme() to return either: a) an error or a warning for having an unidentifiable model; b) only one standard deviation estimate. Thank you for your
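The poster's example is not shown above; the sketch below is a hypothetical reconstruction of the situation: with one observation per grouping level, the between-group and residual variances are not separately identifiable, yet lme() still reports both standard deviations.

    library(nlme)
    set.seed(1)
    d <- data.frame(y = rnorm(20), x = rnorm(20), id = factor(1:20))  # one row per group
    fit.lme <- lme(y ~ x, random = ~ 1 | id, data = d)
    fit.lm  <- lm(y ~ x, data = d)
    VarCorr(fit.lme)        # two SD estimates for an unidentifiable split
    summary(fit.lm)$sigma   # the single residual SD from the fixed-effects fit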
2003 Feb 15 (2 messages): (no subject)
Hi, Are there packages that can generate multivariate samples, e.g. multivariate normal, multivariate t, etc.? Thanks! Best wishes, Peng ******************************* Peng Zhang Department of Biostatistics Harvard School of Public Health 655 Huntington Avenue Boston, Massachusetts 02115 ******************************* I believe I can fly I believe I can touch the sky
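A sketch of an answer using the MASS and mvtnorm packages (mvtnorm is a CRAN package):

    library(MASS)       # mvrnorm(): multivariate normal
    library(mvtnorm)    # rmvnorm(), rmvt(): multivariate normal and t
    Sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
    x.norm <- mvrnorm(n = 100, mu = c(0, 0), Sigma = Sigma)
    x.t    <- rmvt(n = 100, sigma = Sigma, df = 5)   # multivariate t, 5 df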
2003 Feb 19 (1 message): How to use Cox PH model to select genes from DNA gene expression profiles?
I'm predicting survival using gene expression profiles (Affymetrix chips). Can somebody tell me how to use the Cox PH model to select genes and make a prediction of survival? Thanks. Guangchun
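One common approach, sketched here with simulated data (a real analysis would use the actual expression matrix and survival outcome): fit a univariate Cox model per gene and rank genes by p-value.

    library(survival)
    set.seed(1)
    expr   <- matrix(rnorm(50 * 30), nrow = 50)   # 50 genes x 30 samples (toy data)
    time   <- rexp(30)
    status <- rbinom(30, 1, 0.7)
    pvals <- apply(expr, 1, function(g)
        summary(coxph(Surv(time, status) ~ g))$coefficients[1, "Pr(>|z|)"])
    head(sort(pvals))    # genes most associated with survival in the toy data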
2003 Feb 27 (2 messages): epoch time conversion in R
I have a data file where each entry is indexed by the time in seconds since epoch (e.g. 1046315697). Is there an easy way to convert this time value into a more friendly time (such as Month-Year) when plotting it? I searched through the manual, mailing lists, and functions like as.POSIXct and strptime, but didn't find what I need. Thanks, Sharad.
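A sketch of the conversion in current R: as.POSIXct() accepts seconds since the epoch together with an origin, and format() then renders the time as Month-Year.

    secs <- 1046315697
    tm <- as.POSIXct(secs, origin = "1970-01-01", tz = "GMT")
    format(tm, "%b-%Y")     # "Feb-2003", handy for axis labels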
2003 Apr 17 (1 message): bit set or bit test
Hello, does R have functions for setting and testing bit values? I want to conserve memory for storing presence/absence data for large multiple arrays within a single array, using element values like present[x,y] <- ntharray[x,y]*(2^n) where presence is 1, non-presence is 0 and n is the nth array, e.g. 1*(2^0) + 0*(2^1) + 0*(2^2) + 1*(2^3) + 0*(2^4) for storing the value 9 for presence in
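Current R (3.0.0 and later) has base bit operations, which postdate this question; a sketch of setting and testing individual bits:

    x <- 0L
    x <- bitwOr(x, bitwShiftL(1L, 0))    # set bit 0 (array 1 present)
    x <- bitwOr(x, bitwShiftL(1L, 3))    # set bit 3 (array 4 present)
    x                                    # 9
    bitwAnd(x, bitwShiftL(1L, 3)) != 0   # test bit 3: TRUE
    bitwAnd(x, bitwShiftL(1L, 1)) != 0   # test bit 1: FALSE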
2003 Aug 14 (2 messages): How to get the pseudo left inverse of a singular square matrix?
Dear R-listers, I have a d×r matrix Z, where d > r, and the product Z*Z' is a singular square matrix. The problem is how to get the left inverse U of this singular matrix Z*Z', such that U*(Z*Z') = I? Is there any way to figure it out using a matrix decomposition method? Thanks a lot for your help. Fred
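A sketch using MASS::ginv(): because Z %*% t(Z) is singular, no true left inverse exists; ginv() returns the Moore-Penrose pseudoinverse, and U %*% (Z %*% t(Z)) is the rank-r projection onto the column space of Z rather than the identity.

    library(MASS)
    set.seed(1)
    Z <- matrix(rnorm(5 * 3), nrow = 5)   # d = 5, r = 3
    A <- Z %*% t(Z)                       # 5 x 5, rank 3, singular
    U <- ginv(A)                          # Moore-Penrose pseudoinverse
    round(U %*% A, 10)                    # a projection matrix, not diag(5)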
2003 Feb 27 (2 messages): interval-censored data in survreg()
I am trying to fit a lognormal distribution on interval-censored data. Some of my intervals have a lower bound of zero. Unfortunately, it seems like survreg() cannot deal with lower bounds of zero, despite the fact that plnorm(0)==0 and pnorm(-Inf)==0 are well defined. Below is a short example to reproduce the problem. Does anyone know why survreg() must behave that way? Is there an alternate
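A hypothetical sketch of the standard recoding: under a lognormal model an interval with lower bound 0 is just left-censoring, so the lower bound can be coded as NA in 'interval2' form; the data below are invented.

    library(survival)
    lower <- c(NA, NA, 1, 2, 3, 0.5)    # NA = left-censored (event before 'upper')
    upper <- c(0.5, 1, 3, 4, 6, 2)
    survreg(Surv(lower, upper, type = "interval2") ~ 1, dist = "lognormal")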
2003 Feb 24 (2 messages): fill prob. in legend
Hi, I'm trying to construct a legend which has four lines of text and associated symbols. The first two symbols need to be normal lines which vary only in colour. The second two symbols should have filled boxes. How do I suppress the fill boxes in the first two lines? J.
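A possible sketch, assuming current legend() behaviour with NA fill and border entries: give the line entries a line type but no fill, and the fill entries boxes but no line.

    plot(1:10, 1:10, type = "n")
    legend("topleft",
           legend = c("line A", "line B", "box C", "box D"),
           col    = c("black", "red", NA, NA),
           lty    = c(1, 2, 0, 0),                 # lty = 0: no line for the box entries
           fill   = c(NA, NA, "grey70", "grey30"), # NA fill: no box for the line entries
           border = c(NA, NA, "black", "black"))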
2003 Jul 31 (4 messages): timezones
I have some questions and comments on timezones. Problem 1. # get current time in current time zone > (now <- Sys.time()) [1] "2003-07-29 18:23:58 Eastern Daylight Time" # convert this to GMT > (now.gmt <- as.POSIXlt(now,tz="GMT")) [1] "2003-07-29 22:23:58 GMT" # take difference > now-now.gmt Time difference of -5 hours Note that the difference
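For reference, in current R the same instant can simply be displayed in another zone without converting the underlying value:

    now <- Sys.time()
    format(now, tz = "GMT", usetz = TRUE)   # same time point, rendered in GMT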
2001 Nov 05 (1 message): stepwise algorithm step() on coxph() (PR#1159)
Full_Name: Jerome Asselin Version: 1.3.1 OS: MacOS 9.2 Submission from: (NULL) (142.103.173.46) The step() function attempts to calculate the deviance of fitted models even if it does not really need it. As a consequence, the step() function gives an error when it is used with coxph(). (There is currently no method to calculate the deviance of coxph() fits.) The code below gives an example of how
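The reporter's example is truncated above. A sketch of the workaround commonly suggested, selecting on AIC with MASS::stepAIC() so that no deviance is needed; it uses the lung data shipped with survival, restricted to complete cases so every candidate model sees the same rows.

    library(survival)
    library(MASS)
    d <- na.omit(lung[, c("time", "status", "age", "sex", "ph.ecog")])
    fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = d)
    stepAIC(fit, direction = "backward")   # AIC-based stepwise selection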
2003 May 21 (1 message): axis() default values for "lty", "lwd", and "col"
Hi, I would like to recommend a minor modification in axis() which I believe can simplify the making of plots for publications. I am trying to define default values for par() in order to make labels bigger and lines thicker, so that the resulting plots look good when resized for publication purposes. I ran into the following problem... axis() does not use par() values as default for
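A sketch of the explicit workaround: axis() still defaults to lwd = 1, so the intended par() values have to be passed to each axis() call.

    op <- par(lwd = 2, cex.axis = 1.3)   # intended defaults for publication plots
    plot(1:10, 1:10, axes = FALSE)
    axis(1, lwd = par("lwd"))            # pass the line width explicitly
    axis(2, lwd = par("lwd"))
    box(lwd = par("lwd"))
    par(op)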