similar to: creating packages for Mac

Displaying 20 results from an estimated 9000 matches similar to: "creating packages for Mac"

2000 Oct 20
1
Linux -> Win2K file transfer
Just a quick question, in case I'm doing something really boneheaded that could be easily sorted out. I'm attempting to save() datasets on Linux (R 1.1.1) and load() them on Win2K (rw1011, fetched from CRAN today). I get the "restore file corrupted" message every time. I've tried saving with ascii=TRUE and FALSE, and the ASCII versions look OK (it's my impression
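A minimal sketch of the round trip being described (object and file names are placeholders); one frequent cause of "restore file corrupted" is copying the binary .RData file in text/ASCII mode rather than binary mode:
  ## on the Linux side
  mydata <- data.frame(x = 1:10, y = rnorm(10))
  save(mydata, file = "mydata.RData", ascii = TRUE)   # or ascii = FALSE
  ## ... transfer the file, in binary mode if it is a binary save ...
  ## on the Win2K side
  load("mydata.RData")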
2001 Apr 10
0
segfault on Linux from buffer overflow in warning() ? (PR#905)
I have found what seems to be a bug in warning(), but perhaps I'm being really boneheaded (it's happened before). Essentially, warning() seems to segfault if its argument is greater than 8191 characters (8192 is defined as BUFSIZE in errors.c, so a quick workaround would be to boost this ...) The bug was initially provoked by trying to concatenate two long tables -- the warning message
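A hypothetical reproduction built only from the details in the report (BUFSIZE = 8192 in errors.c):
  ## construct a warning message longer than the 8191-character limit mentioned
  msg <- paste(rep("x", 10000), collapse = "")
  nchar(msg)     # 10000, i.e. > 8191
  warning(msg)   # reported to segfault in the R version being described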
2002 Feb 06
1
1.3.1/1.4.1 Windows binary incompatibilities?
I should probably have been able to figure this out, but ... I have a package with some C code in it that I've been cross-building on my Linux machine to run under Windows. I had it working under 1.3.1, but it seems to have stopped with 1.4.1. Building with a version of i386-mingw32msvc-gcc recently downloaded from Brian Ripley's Rtools page (--version 2.95.2), under 1.4.1 on Linux, it
2002 Mar 12
1
using R API in dynamically loaded code?
I'm probably missing something very basic here, but: I've written some C code that I load into R dynamically. In the course of this C code, I generate some multinomial random deviates. I initially used the publicly available "randlib" library, which also implements its own random number generator and binomial deviates (which are used to generate the multinomial deviates).
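For reference, the conditional-binomial construction such libraries typically use, written here as an R-level sketch (function name hypothetical); the same sequence of rbinom() calls can be made from C through R's own API (Rmath.h) so that R's random number generator is used throughout:
  rmultinom1 <- function(size, prob) {
    k <- length(prob)
    out <- integer(k)
    remaining <- size
    p.left <- 1
    for (i in seq_len(k - 1)) {
      out[i] <- rbinom(1, remaining, prob[i] / p.left)
      remaining <- remaining - out[i]
      p.left <- p.left - prob[i]
    }
    out[k] <- remaining
    out
  }
  rmultinom1(100, c(0.2, 0.3, 0.5))   # one deviate of size 100 over 3 cells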
2000 May 31
1
legend with multiple columns
I have made a minor hack to "legend" (in R 1.0.0, but I didn't notice any changes to legend in the 1.0.1 NEWS) to allow the legend to be formatted in multiple columns, or horizontally (number of columns <- number of legend items). (I find this helpful when I have lots of legend items and not a lot of vertical space to squeeze the legend into.) (Another hack I've considered
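Later versions of R added an ncol argument to legend() that gives exactly this layout; a minimal illustration:
  plot(1:10, 1:10, type = "n")
  legend("topleft", legend = paste("group", 1:6),
         col = 1:6, lty = 1, ncol = 3)   # six items laid out in three columns
  ## ncol = length(legend items) gives the fully horizontal layout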
2002 Nov 08
2
behavior of =
I probably didn't follow the discussion of allowing "=" as an assignment operator closely enough, but I was a little bit horrified to discover today (using 1.6.0; I haven't upgraded to 1.6.1 yet) that
  x <- runif(20)
  y <- 1:20
  y[x=min(x)]
gives numeric(0) (because min(x) is non-integer).
  x <- sample(1:20,20,TRUE)
  y[x=min(x)]
is even worse -- it gives the
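A short illustration of the distinction, based on the behaviour described in the post: inside [ ] an '=' is parsed as supplying a named argument, so the expression indexes by the value of min(x) instead of testing equality:
  x <- runif(20)
  y <- 1:20
  y[x == min(x)]   # logical comparison: the element(s) of y where x is smallest
  y[x = min(x)]    # '=' names the index argument; y is indexed by a fractional
                   # value in (0, 1), which truncates to 0 -> an empty result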
2002 Oct 09
5
polynomial
Any better (more efficient, built-in) ideas for computing coef[1]+coef[2]*x+coef[3]*x^2+ ... than
  polynom <- function(coef, x) {
    n <- length(coef)
    sum(coef * apply(matrix(c(rep(x, n), seq(0, n - 1)), ncol = 2), 1,
                     function(z) z[1]^z[2]))
  }
? Ben -- 318 Carr Hall bolker at zoo.ufl.edu Zoology Department, University of Florida http://www.zoo.ufl.edu/bolker
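Two simpler alternatives, as sketches (not necessarily what was finally recommended): direct vectorised powers, and Horner's rule, which avoids computing the powers at all:
  polynom2 <- function(coef, x) sum(coef * x^(seq_along(coef) - 1))
  polynom3 <- function(coef, x) {          # Horner's rule, right to left
    out <- 0
    for (cf in rev(coef)) out <- out * x + cf
    out
  }
  polynom2(c(1, 2, 3), 2)   # 1 + 2*2 + 3*2^2 = 17
  polynom3(c(1, 2, 3), 2)   # 17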
2001 Sep 20
0
3d java etc.
There was some interest in the commands for creating an HTML file of 3D graphics that can be shown with a Java applet. Looking at things I discovered (of course) that I should really clean up quite a few things before releasing it for real. I hope to do some of that this weekend. In the meanwhile, here are a couple of pointers to the Java applet & documentation (apparently free for
2002 Feb 13
0
glmms with negative binomial responses
I am trying to find a way to analyze a "simple" mixed model with two levels of a treatment, a random blocking factor, and (wait for it) negative binomial count distributions as the response variable. As far as I can tell, the currently available R offerings (glmmGibbs, glmmPQL in MASS, and Jim Lindsey's glmm code) aren't quite up to this. From what I have read (e.g.
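One possibility worth noting (my suggestion, not something endorsed in the post): glmmPQL from MASS accepts the negative.binomial() family for a given theta, which gets at least part of the way to the model described; data and variable names below are hypothetical:
  library(MASS)
  library(nlme)    # glmmPQL fits via lme
  fit <- glmmPQL(count ~ treatment, random = ~ 1 | block,
                 family = negative.binomial(theta = 2), data = mydata)
  summary(fit)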
1999 Oct 18
1
memory efficiency in R
I'm trying to answer a question from a student about memory use in R (I won't go into the details right here). I have a really vague memory of having read a document, possibly by Venables or Ripley, discussing the awfulness of memory allocation in S-PLUS, and giving (in the context of a bootstrapping analysis of shoe size data??) some general strategies for conserving memory in S-PLUS.
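One standard memory-conserving strategy in S and R is to preallocate result objects rather than growing them inside a loop; a small illustrative sketch (mine, not taken from the document being asked about):
  x <- rnorm(50)                  # hypothetical data (shoe sizes, say)
  nboot <- 1000
  res <- numeric(nboot)           # preallocate the result vector once
  for (i in seq_len(nboot)) {
    res[i] <- mean(sample(x, replace = TRUE))
  }
  ## building res with res <- c(res, new.value) inside the loop would copy
  ## the whole vector on every iteration, which is what wastes memory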
2000 Sep 26
1
weights in nls
Does the nls package actually allow for weighted nonlinear regression? (i.e., I have data with individual variances associated, I'd like to use 1/var to weight the points.) The "nls" function does have a weights argument, but it doesn't seem to do anything as far as I can tell ... thanks ... Ben Bolker -- 318 Carr Hall bolker at
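For reference, the intended usage looks something like the sketch below (model, data and variance column are hypothetical); in later versions of R, nls() does honour the weights argument:
  fit <- nls(y ~ a * exp(-b * x), data = dat,
             start = list(a = 1, b = 0.1),
             weights = 1 / var.y)            # 1/variance weights, as described
  summary(fit)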
2003 Mar 31
2
Does R have an inverse wishart distribution?
If so, I've had trouble finding it. Can anyone help?
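There is no inverse-Wishart generator in base R, but one can be built from the Wishart: if W ~ Wishart(df, S^-1) then W^-1 is inverse-Wishart with scale S (parameterisation conventions vary). rWishart() was added to the stats package in later versions of R; a sketch with a hypothetical function name:
  riwish1 <- function(df, S) solve(rWishart(1, df, solve(S))[, , 1])
  riwish1(df = 5, S = diag(2))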
1999 Nov 22
0
No subject
This is off-topic (apologies), but I thought I might get a lead or two here. I'm interested in generating random deviates from a multivariate distribution which is a generalization of the beta distribution -- the Bayesian canonical distribution for the parameter estimates of a multinomial distribution. Given a vector (length n-1) of probabilities p and a vector (length n) of shape
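The distribution being described is the Dirichlet (the conjugate prior for multinomial probabilities); the standard construction normalises independent gamma deviates. A sketch with a hypothetical function name:
  rdirichlet1 <- function(n, alpha) {
    k <- length(alpha)
    g <- matrix(rgamma(n * k, shape = alpha), ncol = k, byrow = TRUE)
    g / rowSums(g)                 # each row sums to 1
  }
  rdirichlet1(3, c(1, 2, 3))       # three deviates on the 2-simplex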
2003 Mar 04
1
CRAN scripts?
For various reasons, I've opted to make my packages available from my own web page rather than submitting them to CRAN (mostly laziness -- for a long time I didn't have the packages quite cleaned up enough to pass all the tests). It occurred to me to wonder about the scripts used by CRAN maintainers to generate the PACKAGES file, and to generate PACKAGES.html from PACKAGES. Are
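In later versions of R the tools package exposes this functionality directly; a sketch (repository path hypothetical):
  library(tools)
  write_PACKAGES("/path/to/repos/src/contrib", type = "source")
  ## writes PACKAGES (and PACKAGES.gz) for the packages found in that directory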
2001 May 16
0
glm.nb difficulties
I'm having problems (or to be precise a student is having problems, which I'm having problems helping her with) trying to use glm.nb() from the MASS package to do some negative binomial fits on a data set that is, admittedly, wildly overdispersed (some zeros and some numbers in the hundreds). glm.nb is failing to converge, and furthermore is (to my surprise) producing values of theta
2000 Feb 29
0
R-1.0.0
I want to add my two cents of congratulation to the R core team. I also want to encourage everyone who uses R to be an active, not a passive user -- the fastest way R will get better is if the folks who use it submit bug reports, suggestions, R code for their particular fields, documentation, even patches and code fixes. R is big and complicated enough now that we can't leave testing to
1999 Dec 09
0
setting par(fig) resets par(mfrow), par(mfcol)
Can we add a note to the documentation that setting par(fig) resets par(mfrow) and par(mfcol) to c(1,1)? Or are mfrow and mfcol now deprecated in favor of all the split screen stuff? (I was spending the morning trying to write some code that plotted multiple subplots within whatever plot region was active at the moment; I was able to set and reset fig successfully, but got very confused as to
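A minimal sketch of the subplot-within-a-plot approach being described, using new = TRUE so that setting fig does not start a fresh page, and restoring the old settings afterwards:
  op <- par(no.readonly = TRUE)                 # save current settings
  plot(1:10)                                    # main plot
  par(fig = c(0.6, 0.95, 0.6, 0.95), new = TRUE)
  plot(rnorm(20))                               # inset subplot
  par(op)                                       # restore mfrow/mfcol etc.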
2001 Oct 15
0
possible bugs: boundary conditions and random distribution parameters
There are a few inconsistencies, at least, in some of the functions that generate random deviates from particular distributions (I think they're bugs because they're inconvenient, but maybe someone can make an argument for the current behavior). If people think these are really bugs I can submit them, together or separately. 1. rlnorm(n,mean,sd) gives NaN for sd=0, rather than always
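An illustration of the first reported inconsistency (behaviour in current R versions may differ from the 2001 behaviour described):
  rlnorm(3, meanlog = 0, sdlog = 0)   # reported to give NaN rather than exp(0) = 1
  rnorm(3, mean = 0, sd = 0)          # by contrast, returns 0 exactly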
2002 Nov 26
0
nlme: gnls with weights and correlation arguments
Some students of mine are trying to use gnls, the generalized non-linear least squares function within the nlme library, to study evolutionary questions about correlations between traits, where the species-level data points are non-independent because of the evolutionary relatedness of the species. Specifically, they're using a non-linear function (log(sexual dimorphism) ~ log(a + b*variation in mating
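A hedged sketch of the kind of call being described; the formula, data set, starting values and correlation structure are all hypothetical placeholders, and corBrownian() from the ape package is just one possible way of encoding phylogenetic relatedness:
  library(nlme)
  library(ape)
  fit <- gnls(log.dimorph ~ log(a + b * mating.var),
              data = spdata,
              start = c(a = 1, b = 0.5),
              weights = varPower(),
              correlation = corBrownian(phy = tree))
  summary(fit)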
2003 Jan 27
1
help page for anova.glm/variation between S-PLUS and R behavior
When using test="F" in stat.anova() / anova.glm(), R uses the assumed dispersion parameter for the specified family (e.g. scale=1 for binomial), while S-PLUS automatically uses the estimated dispersion parameter (residual deviance/residual df). I think there are good reasons for the behavior in R -- it fits with the "you get what you actually asked for" philosophy -- and
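An illustration of the distinction (data hypothetical); anova.glm() takes a dispersion argument, so the S-PLUS-style behaviour can be reproduced by supplying the estimated dispersion explicitly:
  fit <- glm(cbind(succ, fail) ~ x, family = binomial, data = dat)
  anova(fit, test = "F")              # uses the family's assumed dispersion (1)
  disp <- sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
  anova(fit, test = "F", dispersion = disp)   # S-PLUS-like: estimated dispersion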