Displaying 20 results from an estimated 1000 matches similar to: "R 1.5.1 on AIX 5.1"

2012 Apr 23
0
linear model benchmarking
I cleaned up my old benchmarking code and added checks for missing data to compare various ways of finding OLS regression coefficients. I thought I would share this for others. The long and short of it is that I would recommend ols.crossprod = function (y, x) { x <- as.matrix(x) ok <- (!is.na(y))&(!is.na(rowSums(x))) y <- y[ok]; x
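The function above is cut off in the excerpt; the following is a minimal sketch of how the crossprod approach to OLS might be completed (the intercept handling, the final solve() step, and the test data are my own illustration, not the poster's code):

## Sketch: OLS coefficients via crossprod(), dropping rows with missing data.
ols.crossprod <- function(y, x) {
  x  <- as.matrix(x)
  ok <- !is.na(y) & !is.na(rowSums(x))      # keep complete cases only
  y  <- y[ok]
  x  <- cbind(Intercept = 1, x[ok, , drop = FALSE])
  solve(crossprod(x), crossprod(x, y))      # (X'X)^{-1} X'y without forming t(x)
}

## Illustrative check against lm():
set.seed(1)
X <- matrix(rnorm(200), 100, 2)
y <- drop(1 + X %*% c(2, -1)) + rnorm(100)
ols.crossprod(y, X)
coef(lm(y ~ X))
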
2003 Oct 31
1
R-1.8.0 + IBM VisualAge/C for AIX compiler
A while ago I compiled R 1.7.0 for AIX (with the above compiler - I'll call it xlc) and I was surprised that it went quite smoothly. Unfortunately with R 1.8.0 it's not as easy, but I succeeded at least partially. Static R works fine (after some tweaking), but --enable-R-shlib either fails or produces a buggy R. Following are the problems I encountered (in a warning-to-fatal-error
2006 Feb 08
0
bayesm, rmnlIndepMetrop
Hi, I tried to use rmnlIndepMetrop (bayesm package) for my MNL model with 4 choice alternatives, 5 independent variables, 69 observations, dim(X) [1] 276 5, nu=6. So I ran the following code: if(nchar(Sys.getenv("LONG_TEST")) != 0) {R=2000} else {R=10} set.seed(66) df=read.table("X_metrop.dat",header=TRUE) inp=as.matrix(df) y=as.numeric(inp[,1]) n=length(y) p=4
2010 Mar 27
1
R runs in a usual way, but simulations are not performed
Dear addresses, I need to perform a batch of 10,000 simulations for each of the 4 options considered. (The idea is to obtain the parameter estimates in a heteroskedastic linear regression model - with additive or mixed heteroskedasticity - via the Kenward-Roger small-sample adjusted covariance matrix of disturbances). For this purpose I wrote an R program which would capture all possible options (true
2006 Nov 21
1
crossprod(x) vs crossprod(x,x)
I found out the other day that crossprod() will take a single matrix argument; crossprod(x) notionally returns crossprod(x,x). The two forms do not return identical matrices: x <- matrix(rnorm(3000000),ncol=3) M1 <- crossprod(x) M2 <- crossprod(x,x) R> max(abs(M1-M2)) [1] 1.932494e-08 But what really surprised me is that crossprod(x) is slower than crossprod(x,x): R>
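The comparison described can be reproduced along these lines (an illustrative session, not the original one; whether the two results differ at all depends on the BLAS):

set.seed(42)
x  <- matrix(rnorm(3e6), ncol = 3)
M1 <- crossprod(x)         # one-argument form: t(x) %*% x
M2 <- crossprod(x, x)      # two-argument form
max(abs(M1 - M2))          # often a tiny rounding-level difference, sometimes exactly 0
system.time(crossprod(x))
system.time(crossprod(x, x))
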
2002 Mar 15
1
Thought on crossprod
Hi everyone, I do a lot of work with large variance matrices, and I like to use "crossprod" for speed and to keep everything symmetric, i.e. I often compute "crossprod(Q %*% t(A))" for "A %*% Sigma %*% t(A)", where "Sigma" decomposes as "t(Q) %*% Q". I notice in the code that "crossprod", current definition > crossprod function (x,
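A small check of the identity the post relies on, with made-up matrices (assuming, as in the post, that Sigma decomposes as t(Q) %*% Q):

set.seed(1)
Q     <- matrix(rnorm(16), 4, 4)
Sigma <- crossprod(Q)              # t(Q) %*% Q
A     <- matrix(rnorm(8), 2, 4)
V1 <- A %*% Sigma %*% t(A)
V2 <- crossprod(Q %*% t(A))        # t(Q t(A)) %*% (Q t(A)) = A t(Q) Q t(A)
all.equal(V1, V2)
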
2003 Nov 26
0
RE: 64-bit R on Opteron [was Re: [R] Windows R 1.8.0 hangs when Mem Usage >1.8GB]
> From: Douglas Bates > > How does the Opteron perform on floating point? Can you try something > like > > > mm = matrix(rnorm(1e6), nc = 1e3) > > system.time(crossprod(mm)) > [1] 0.51 0.02 0.53 0.00 0.00 > > system.time(crossprod(mm)) > [1] 0.37 0.03 0.40 0.00 0.00 > > system.time(crossprod(mm)) > [1] 0.38 0.02 0.40 0.00 0.00 > >
2006 Dec 10
1
Problem with loading "library(Matrix)" at Ubuntu
Dear All, After upgrading to R-2.4.0-dapper2 (my system is Ubuntu 6.06 LTS), I often met problems when loading some packages like Matrix. Here are the details: > library(Matrix) Error in loadNamespace(package, c(which.lib.loc, lib.loc), keep.source = keep.source) : in 'Matrix' methods specified for export, but none defined: Arith, Math, Math2, +, %*%, Schur, as.matrix, chol,
2005 Feb 02
0
Not reproducing GLS estimates
Dear List: I am having some trouble reproducing some GLS estimates using matrix operations that I am not having with other R procedures. Here are some sample data to see what I am doing along with all code: mu<-c(100,150,200,250) Sigma<-matrix(c(400,80,16,3.2,80,400,80,16,16,80,400,80,3.2,16,80,400),nc=4) sample.size<-100 temp <-
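For reference, the GLS estimator the poster is trying to reproduce by hand is beta = (X' W X)^{-1} X' W y with W = Sigma^{-1}; a generic sketch (the design matrix and response below are toy values, not the poster's full setup):

Sigma <- matrix(c(400, 80, 16, 3.2,
                  80, 400, 80, 16,
                  16,  80, 400, 80,
                  3.2, 16,  80, 400), ncol = 4)
X <- cbind(1, 1:4)                 # toy design matrix
y <- c(100, 150, 200, 250)         # toy response (the mu vector from the post)
W <- solve(Sigma)                  # inverse covariance
solve(t(X) %*% W %*% X, t(X) %*% W %*% y)
## equivalently, avoiding explicit t():
solve(crossprod(X, W %*% X), crossprod(X, W %*% y))
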
2005 Oct 05
2
eliminate t() and %*% using crossprod() and solve(A,b)
Hi, I have a square matrix Ainv of size N-by-N, where N ~ 1000. I have a rectangular matrix H of size N-by-n, where n ~ 4. I have a vector d of length N. I need X = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d and H %*% X. It is possible to rewrite X in the recommended crossprod way: X <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d)) where quad.form() is a little
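A self-contained version of the rewrite being discussed, with quad.form() spelled out via crossprod() (sizes reduced; the crossprod form assumes Ainv is symmetric, which the rewrite in the post also relies on):

set.seed(3)
N <- 200; n <- 4
Ainv <- crossprod(matrix(rnorm(N * N), N, N)) / N   # symmetric stand-in for Ainv
H    <- matrix(rnorm(N * n), N, n)
d    <- rnorm(N)
X1 <- solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d    # the naive form
AinvH <- Ainv %*% H
X2 <- solve(crossprod(H, AinvH), crossprod(AinvH, d))       # crossprod form
all.equal(X1, X2)
H %*% X2
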
2005 Jan 27
3
the incredible lightness of crossprod
The following is at least as much out of intellectual curiosity as for practical reasons. On reviewing some code written by novices to R, I came across: crossprod(x, y)[1,1] I thought, "That isn't a very S way of saying that, I wonder what the penalty is for using 'crossprod'." To my surprise the penalty was substantially negative. Handily the client had S-PLUS as
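The spellings being compared, for reference; all three compute the same inner product (the example vectors are mine):

set.seed(4)
x <- rnorm(1e6); y <- rnorm(1e6)
r1 <- crossprod(x, y)[1, 1]    # the construct found in the reviewed code
r2 <- sum(x * y)               # the more "S way" of saying it
r3 <- drop(t(x) %*% y)
all.equal(r1, r2); all.equal(r1, r3)
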
2003 Oct 17
2
Problems with crossprod
Dear R-users, I found a strange problem working with products of two matrices, say: a <- A[i, ] ; crossprod(a) where i is a set of integers selecting rows. When i is empty the result is in a sense random. After some trials the right answer (a matrix of zeros) appears. --------------- Illustration -------------------- R : Copyright 2003, The R Development Core Team Version 1.8.0
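The situation described (selecting zero rows before crossprod()) can be reproduced like this; in current R versions the result is the expected matrix of zeros, whereas the post reports erratic results under R 1.8.0:

A <- matrix(rnorm(20), 5, 4)
i <- integer(0)                 # an empty selection of rows
a <- A[i, , drop = FALSE]
dim(a)                          # 0 x 4
crossprod(a)                    # expected: a 4 x 4 matrix of zeros
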
2010 May 08
1
matrix cross product in R different from cross product in Matlab
Hi all, I have been searching all sorts of documentation, reference cards, and cheat sheets, but can't find why R's crossprod(A, B), which is identical to A%*%B, does not produce the same result as Matlab's cross(A, B). Supposedly both calculate the cross product, and say so, or where do I go wrong? R is only doing sums in the crossprod however, as indicated by (z <- crossprod(1:4)) # = sum(1 +
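One point worth adding: R's crossprod(A, B) is the matrix product t(A) %*% B, not the 3-D vector cross product computed by Matlab's cross(). A minimal hand-rolled vector cross product for comparison (the helper name is mine):

vector.cross <- function(a, b) {
  stopifnot(length(a) == 3, length(b) == 3)
  c(a[2] * b[3] - a[3] * b[2],
    a[3] * b[1] - a[1] * b[3],
    a[1] * b[2] - a[2] * b[1])
}
vector.cross(c(1, 0, 0), c(0, 1, 0))   # c(0, 0, 1), as Matlab's cross() gives
crossprod(1:4)                         # a 1 x 1 matrix holding sum((1:4)^2) = 30
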
2004 Oct 06
3
crossprod vs %*% timing
Hi, the man page says that crossprod(x,y) is formally equivalent to, but faster than, the call 't(x) %*% y'. I have a vector 'a' and a matrix 'A', and need to evaluate 't(a) %*% A %*% a' many many times, and performance is becoming crucial. With f1 <- function(a,X){ ignore <- t(a) %*% X %*% a } f2 <- function(a,X){ ignore <-
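A sketch of the comparison being set up (f2 is cut off in the excerpt; the crossprod-based version below is one plausible candidate, not necessarily the poster's):

set.seed(5)
n <- 500
a <- rnorm(n)
X <- matrix(rnorm(n * n), n, n)
f1 <- function(a, X) t(a) %*% X %*% a
f2 <- function(a, X) crossprod(a, X %*% a)   # same value: t(a) %*% (X %*% a)
all.equal(f1(a, X), f2(a, X))
system.time(for (i in 1:200) f1(a, X))
system.time(for (i in 1:200) f2(a, X))
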
2008 Mar 10
1
crossprod is slower than t(AA)%*BB
Dear R developers, The background for this email is that I was helping a PhD student to improve the speed of her R code. I suggested replacing calls like t(AA) %*% BB with crossprod(AA,BB), since I expected this to be faster. The surprising result to me was that this change actually made her code slower. > ## Examples : > > AA <- matrix(rnorm(3000*1000),3000,1000) > BB <-
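The comparison can be reproduced along these lines (sizes reduced; which call wins depends heavily on the BLAS R is linked against):

set.seed(6)
AA <- matrix(rnorm(1500 * 500), 1500, 500)
BB <- matrix(rnorm(1500 * 500), 1500, 500)
system.time(C1 <- t(AA) %*% BB)
system.time(C2 <- crossprod(AA, BB))
all.equal(C1, C2)
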
2003 Dec 02
0
names of parameters from nonlinear model?
I've been trying to figure out how to build a list of terms from a nonlinear model (terms() returns an error). I need to compute and evaluate the partial derivatives (Jacobian) for each equation in a set of equations. For example: > eqn <- q ~ s0 + s1 * p + s2 * f + s3 * a > sv2 <- c(d0=3,d1=4.234,d2=4,s0=-2.123,s1=0.234,s2=2.123,s3=4.234) > names( sv2 ) [1] "d0"
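One way to get the partial derivatives the poster asks for is deriv(), which differentiates the right-hand side of the formula symbolically; a sketch for the quoted equation (this is a suggestion, not the thread's eventual answer):

eqn <- q ~ s0 + s1 * p + s2 * f + s3 * a
all.vars(eqn[[3]])                     # every name on the right-hand side
g <- deriv(eqn[[3]], c("s0", "s1", "s2", "s3"),
           function.arg = c("s0", "s1", "s2", "s3", "p", "f", "a"))
## Evaluate the gradient (one Jacobian row) at a test point:
attr(g(s0 = -2.123, s1 = 0.234, s2 = 2.123, s3 = 4.234,
       p = 1, f = 2, a = 3), "gradient")
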
2006 Sep 07
2
Matrix package in R-2.4.0alpha
In a newly downloaded version (today) of R-2.4.0alpha, with all packages from CRAN also installed today, I get: > library(Matrix) Error in loadNamespace(package, c(which.lib.loc, lib.loc), keep.source = keep.source) : in 'Matrix' methods specified for export, but none defined: BIC, anova, coef, confint, deviance, fitted, fixef, formula, head, lmer, logLik, mcmcsamp, plot,
2002 Jul 14
1
crossprod and X %*% t(X)
Hi, the help page for crossprod states that crossprod(A,B) is faster than t(A) %*% B; experimentation certainly bears this out. More alarming is the evidence that crossprod(t(A), B) is faster than A %*% B: on a PII laptop, 128MB memory, win98, R-1.5.0-patched precompiled (no ATLAS): > A <- matrix(rnorm(250000),500,500) > B <- matrix(rnorm(250000),500,500) > for (i in 1:5) {
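For reference, the equivalences behind these timings, plus the dedicated primitive tcrossprod() that later R versions provide for X %*% t(X):

set.seed(7)
A <- matrix(rnorm(250000), 500, 500)
B <- matrix(rnorm(250000), 500, 500)
all.equal(t(A) %*% B, crossprod(A, B))     # the documented equivalence
all.equal(A %*% B,    crossprod(t(A), B))  # the "more alarming" one
all.equal(A %*% t(A), tcrossprod(A))       # X %*% t(X)
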
2004 May 03
3
R 1.9.0 on AIX, 64-bit
I'm trying to get R 1.9.0 running on AIX 5.1 with the standard AIX compilers (xlc, xlf) and it is failing 2 of the tests, test-Reg in reg-tests-1.R like this: bash-2.05b$ tail -30 reg-tests-1.Rout.fail [,1] [,2] [1,] 1 3 [2,] 2 4 [3,] 1 3 [4,] 2 4 > stopifnot(typeof(res) == "list") > ## were not implemented in 1.8.1 > > > ## Date objects with
2016 Mar 24
0
summary( prcomp(*, tol = .) ) -- and 'rank.'
Martin, I fully agree. This becomes an issue when you have big matrices. (Note that there are awesome methods for computing only a small number of PCs (unlike your code, which uses svd and therefore gets all of them); these are available in various CRAN packages.) Best, Kasper On Thu, Mar 24, 2016 at 1:09 PM, Martin Maechler <maechler at stat.math.ethz.ch > wrote: > Following from
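For the point about computing only a few PCs: prcomp() eventually gained the 'rank.' argument this thread discusses, and truncated-SVD packages on CRAN (irlba, for instance) avoid computing the remaining components altogether. A small sketch, assuming a reasonably recent R:

set.seed(8)
X  <- matrix(rnorm(1000 * 50), 1000, 50)
p3 <- prcomp(X, rank. = 3)     # keep only the first 3 PCs in the returned object
dim(p3$rotation)               # 50 x 3
summary(p3)
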