Jason Liao
2007-Feb-21 18:38 UTC
[R] how much performance penalty does this incur, scalar as a vector of one element?
I have been comparing R with other languages and systems. One peculiar feature of R is that there is no scalar type: a scalar is just a vector of length one. I wonder how much of a performance penalty this design causes, particularly in programs that work with many scalars. Thanks.

Jason Liao, http://www.geocities.com/jg_liao
Associate Professor of Biostatistics
Drexel University School of Public Health
245 N. 15th Street, Mail Stop 660
Philadelphia, PA 19102-1192
phone 215-762-3934
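[Editor's illustration, not part of the original message: a quick way to see what the question is about. In R, a bare number already is a vector, so there is no separate scalar type to begin with.]

    x <- 3.14
    is.vector(x)            # TRUE: a bare number is already a vector
    length(x)               # 1
    identical(x, c(3.14))   # TRUE: no distinct scalar type exists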
Luke Tierney
2007-Feb-22 14:36 UTC
[R] how much performance penalty does this incur, scalar as a vector of one element?
I think the short answer is: not much.

Longer answer: in an interpreted framework with double-precision floating-point scalars there is little chance of avoiding a fresh allocation for each scalar; given that, the overhead associated with length checks can be made negligible. (That isn't to say it currently is; it may or may not be, but you asked about design.) Systems that support integer scalars often represent them as immediate values within pointers by sacrificing one or two bits of precision in the integers, but that doesn't work for double-precision floats, except possibly on 64-bit systems. Even there it would be possible to use an efficient internal representation of vectors of length one without changing the concept that everything is a vector.

As we think about compilation there are opportunities to produce more efficient code if values can be assumed to be scalars, but that can be accomplished by adding a declaration mechanism. So again, the answer in terms of efficiency cost is: not much.

The APL view of everything as an array, with zero-dimensional arrays being scalars and higher-dimensional arrays being real entities rather than decorated vectors, is in many ways conceptually cleaner and might in hindsight have been a better choice for that reason, but efficiency isn't really a consideration.

Best,

luke

--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa
Department of Statistics and Actuarial Science
241 Schaeffer Hall, Iowa City, IA 52242
Phone: 319-335-3386, Fax: 319-335-3017
email: luke at stat.uiowa.edu
WWW: http://www.stat.uiowa.edu
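[Editor's sketch, not part of the original post: one rough way to see the per-scalar interpreter cost Luke describes is to compare an element-at-a-time loop, which creates fresh length-one doubles on every iteration, with a single vectorized call that does the same arithmetic in one pass of compiled code. Exact timings depend on the R version and machine.]

    n <- 1e6
    x <- runif(n)
    y <- runif(n)

    scalar_loop <- function(x, y) {
        s <- 0
        for (i in seq_along(x))
            s <- s + x[i] * y[i]    # each step allocates new length-one doubles
        s
    }

    system.time(scalar_loop(x, y))  # interpreted, scalar at a time
    system.time(sum(x * y))         # one vectorized pass in compiled code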
Jason Liao
2007-Feb-27 18:58 UTC
[R] how much performance penalty does this incur, scalar as a vector of one element?
Dear Prof. Tierney,

Thank you very much for answering my question. It is good to know that the loss of efficiency can be small. I came to this question after using R to implement a few low-level algorithms: a KD-tree and a recursive algorithm for the conditional Poisson binomial. R's speed has been slow, even much slower than Ruby's. I love R dearly and always tell my students that it is the best thing that ever happened to statistics. R is much more elegant than C or Fortran. Unfortunately, Fortran or C is still needed when speed is a concern, and a statistician then has to confront that ugly and complex larger world. A huge gain in productivity, and a reduction in mental anguish, could be achieved if R's speed were improved via compilation.

I did a little research. The following tool claims to make Python as fast as C:
http://www-128.ibm.com/developerworks/linux/library/l-psyco.html

Recently, a new Ruby implementation has made it several times faster:
http://www.antoniocangiano.com/articles/2007/02/19/ruby-implementations-shootout-ruby-vs-yarv-vs-jruby-vs-gardens-point-ruby-net-vs-rubinius-vs-cardinal

Jason Liao, http://www.geocities.com/jg_liao
Associate Professor of Biostatistics
Drexel University School of Public Health
245 N. 15th Street, Mail Stop 660
Philadelphia, PA 19102-1192
phone 215-762-3934
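[Editor's note, using facilities that postdate this 2007 thread: Luke Tierney's byte-code compiler later shipped with R 2.13.0 as the 'compiler' package, and recent R versions byte-compile functions automatically on first use. A sketch of the idea on a deliberately scalar-heavy toy recursion, not Jason's actual algorithm; on current R the two timings may be close because of the default JIT.]

    library(compiler)

    # toy recursion, chosen only to exercise scalar arithmetic and function calls
    naive_fib <- function(n) if (n < 2) n else naive_fib(n - 1) + naive_fib(n - 2)

    system.time(naive_fib(25))        # interpreted on older R
    naive_fib <- cmpfun(naive_fib)    # replace with the byte-compiled version,
                                      # so recursive calls also hit compiled code
    system.time(naive_fib(25))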