
Displaying 20 results from an estimated 8000 matches similar to: "Zero mean correlation Matrix"

2008 Feb 08 (2 replies): Applying lm to data with combn
Attachment: http://www.nabble.com/file/p15359204/test.data.csv (test.data.csv) Hi, I have used apply to get certain combinations, but when I try to use these combinations I get the error [Error in eval(expr, envir, enclos) : object "X.GDAXI" not found]. Being a novice, I do not understand why, after applying the combination to the data, I can't access
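
The "object not found" error typically means the combination, a character vector of column names, is being treated as data rather than turned into a model formula. A minimal sketch of one common pattern; the response y and the column names other than X.GDAXI are hypothetical stand-ins for the columns in test.data.csv:

## Fit lm() for every pair of predictors produced by combn(), building
## each formula with reformulate() so the names are looked up in `dat`.
dat <- data.frame(y       = rnorm(20),
                  X.GDAXI = rnorm(20),   # placeholder values; the real
                  X.FTSE  = rnorm(20),   # data would come from test.data.csv
                  X.CAC   = rnorm(20))
pairs <- combn(c("X.GDAXI", "X.FTSE", "X.CAC"), 2, simplify = FALSE)
fits  <- lapply(pairs, function(p) lm(reformulate(p, response = "y"), data = dat))
lapply(fits, coef)
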
2011 Dec 03 (2 replies): density function always evaluating to zero
Dear R users, I'm trying to carry out Monte Carlo integration of a posterior density function which is the product of a normal and a gamma distribution. The problem I have is that the density function always returns 0. How can I solve this problem? Here is my code: #generate data x1 <- runif(100, min = -10, max = 10) y <- 2 * x1^2 + rnorm(100) # # # # # # # # Model 0 # # # # # # #
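
The usual cause is numerical underflow: a product of a hundred small density values falls below the smallest representable double and evaluates to exactly 0. A sketch of the standard fix, working on the log scale; the model here is a hypothetical stand-in, not the poster's Model 0:

set.seed(1)
x1 <- runif(100, min = -10, max = 10)
y  <- 2 * x1^2 + rnorm(100)

log_post <- function(beta, tau) {
  ## normal likelihood times a gamma prior on the precision, all on the log scale
  sum(dnorm(y, mean = beta * x1^2, sd = 1 / sqrt(tau), log = TRUE)) +
    dgamma(tau, shape = 2, rate = 1, log = TRUE)
}
lp <- sapply(seq(0.1, 5, by = 0.1), function(tau) log_post(2, tau))
w  <- exp(lp - max(lp))   # exponentiate only after subtracting the maximum
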
2006 Dec 03 (1 reply): passing matrix as argument to a C function
Hi, Although this is not directly an R-related question, it is relevant as I am trying to port some R code to C to speed things up in a computation. I am working through my first attempts to generate and link compiled C code in R. I could get the 'convolve' function to work, and similar functions that take vectors as arguments. In my application I need to pass a couple of matrices to
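
For reference, an R matrix is a plain numeric vector in column-major order plus a dim attribute, so with the old .C() interface the data and the dimensions are passed separately. A sketch only; "my_mat_fun" is a hypothetical routine compiled with R CMD SHLIB, not an existing one:

X <- matrix(rnorm(12), nrow = 3, ncol = 4)

## Assumed C signature: void my_mat_fun(double *x, int *nrow, int *ncol, double *result)
## out <- .C("my_mat_fun",
##           as.double(X), as.integer(nrow(X)), as.integer(ncol(X)),
##           result = double(ncol(X)))$result

## Element X[i, j] sits at position (j - 1) * nrow(X) + i of the flat vector:
identical(X[2, 3], as.vector(X)[(3 - 1) * nrow(X) + 2])
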
2010 Mar 30 (4 replies): Code is too slow: mean-centering variables in a data frame by subgroup
Dear R-ers, I have a large data frame (several thousand rows and about 2.5 thousand columns). One variable ("group") is a grouping variable with over 30 levels, and I have a lot of NAs. For each variable, I need to divide each value by the variable's mean within each subgroup. I have the code, but it's way too slow: it takes about 1.5 hours. Below is a data example and my code that is too
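
A vectorised sketch of the kind of approach usually suggested here, assuming a data frame dat with a grouping column group and numeric columns elsewhere (names are hypothetical): each value is divided by its subgroup mean, NAs ignored, with no explicit loop over rows.

set.seed(1)
dat <- data.frame(group = sample(letters[1:3], 20, replace = TRUE),
                  v1 = rnorm(20),
                  v2 = c(rnorm(19), NA))

num <- setdiff(names(dat), "group")
dat[num] <- lapply(dat[num], function(col)
  col / ave(col, dat$group, FUN = function(z) mean(z, na.rm = TRUE)))
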
2006 Aug 08 (3 replies): Pairwise n for large correlation tables?
Hello, I'm using a very large data set (n > 100,000 for 7 columns), for which I'm pretty happy dealing with pairwise-deleted correlations to populate my correlation table. E.g., a <- cor(cbind(col1, col2, col3),use="pairwise.complete.obs") ...however, I am interested in the number of cases used to compute each cell of the correlation table. I am unable to find such a
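
A sketch of one way to get those counts: the pairwise sample sizes behind cor(..., use = "pairwise.complete.obs") come straight from the non-missingness indicator matrix, since crossprod() counts, for each pair of columns, the rows where both are observed.

m <- cbind(col1 = c(1, 2, NA, 4),
           col2 = c(1, NA, 3, 4),
           col3 = c(1, 2, 3, 4))
obs <- !is.na(m)                   # TRUE where a value is present
n_pairwise <- crossprod(obs * 1)   # 3 x 3 matrix of pairwise n
n_pairwise
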
2012 Nov 07 (5 replies): Calling R object from R function
Hi, can you please help me with this? What I am trying to do is create a vector in one R function and use it in another function. So I create 4 functions with these arguments: M11 <- function(TrainData,TestData,mdat,nsam) { ls <- list() I have a few statements; one of them is vectx <- c(1,2,3,4,5,6,6) vectz <- c(12,34,5,6,78,9,90) and then................ ls(vectx=vectx,vectz=vectz)
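
A sketch of the usual pattern (all names are hypothetical): build the vectors inside the function, return them in a named list() (note that ls() lists object names and does not build a list), and extract them from the returned object in the caller.

M11 <- function(TrainData, TestData, mdat, nsam) {
  vectx <- c(1, 2, 3, 4, 5, 6, 6)
  vectz <- c(12, 34, 5, 6, 78, 9, 90)
  list(vectx = vectx, vectz = vectz)   # returned to the caller
}

res <- M11(NULL, NULL, NULL, NULL)
res$vectx   # the vector is now available for use in another function
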
2010 Mar 27 (1 reply): R runs in a usual way, but simulations are not performed
Dear addressees, I need to perform a batch of 10 000 simulations for each of the 4 options considered. (The idea is to obtain the parameter estimates in a heteroskedastic linear regression model, with additive or mixed heteroskedasticity, via the Kenward-Roger small-sample adjusted covariance matrix of disturbances.) For this purpose I wrote an R program which would capture all possible options (true
2006 Nov 21 (1 reply): crossprod(x) vs crossprod(x,x)
I found out the other day that crossprod() will take a single matrix argument; crossprod(x) notionally returns crossprod(x,x). The two forms do not return identical matrices: x <- matrix(rnorm(3000000),ncol=3) M1 <- crossprod(x) M2 <- crossprod(x,x) R> max(abs(M1-M2)) [1] 1.932494e-08 But what really surprised me is that crossprod(x) is slower than crossprod(x,x): R>
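
A sketch of the comparison. The small discrepancy is expected: the one-argument form computes only one triangle of a symmetric result (typically via the BLAS routine dsyrk) while the two-argument form uses a general multiply (dgemm), so rounding differs slightly and relative speed depends on the BLAS in use.

x  <- matrix(rnorm(3e6), ncol = 3)
M1 <- crossprod(x)
M2 <- crossprod(x, x)
all.equal(M1, M2)    # TRUE: equal up to numerical tolerance
isSymmetric(M1)      # TRUE by construction
system.time(for (i in 1:50) crossprod(x))
system.time(for (i in 1:50) crossprod(x, x))
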
2005 Jan 27 (3 replies): the incredible lightness of crossprod
The following is at least as much out of intellectual curiosity as for practical reasons. On reviewing some code written by novices to R, I came across: crossprod(x, y)[1,1] I thought, "That isn't a very S way of saying that, I wonder what the penalty is for using 'crossprod'." To my surprise the penalty was substantially negative. Handily the client had S-PLUS as
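
For vectors, crossprod(x, y)[1, 1] is just the inner product, so the more "S-like" spelling is sum(x * y); which one is faster depends on the vector length and the BLAS in use. A sketch of the comparison:

x <- rnorm(1e6); y <- rnorm(1e6)
all.equal(crossprod(x, y)[1, 1], sum(x * y))
system.time(for (i in 1:200) crossprod(x, y)[1, 1])
system.time(for (i in 1:200) sum(x * y))
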
2004 Oct 06 (3 replies): crossprod vs %*% timing
Hi, the man page says that crossprod(x,y) is formally equivalent to, but faster than, the call 't(x) %*% y'. I have a vector 'a' and a matrix 'A', and need to evaluate 't(a) %*% A %*% a' many, many times; performance is becoming crucial. With f1 <- function(a,X){ ignore <- t(a) %*% X %*% a } f2 <- function(a,X){ ignore <-
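
A sketch of the kind of rewrite usually tried for this quadratic form; the sizes are arbitrary, and timings should be checked against the real ones, since the answer depends on the BLAS.

a <- rnorm(500)
A <- matrix(rnorm(500 * 500), 500, 500)

f1 <- function(a, X) as.numeric(t(a) %*% X %*% a)
f2 <- function(a, X) as.numeric(crossprod(a, X %*% a))   # avoids the explicit t()
all.equal(f1(a, A), f2(a, A))
system.time(for (i in 1:500) f1(a, A))
system.time(for (i in 1:500) f2(a, A))
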
2003 Oct 17 (2 replies): Problems with crossprod
Dear R-users, I found a strange problem working with products of two matrices, say: a <- A[i, ] ; crossprod(a) where i is a set of integers selecting rows. When i is empty the result is in a sense random. After some trials the right answer (a matrix of zeros) appears. --------------- Illustration -------------------- R : Copyright 2003, The R Development Core Team Version 1.8.0
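
A sketch of the situation described: selecting zero rows and taking the cross product should give, and in current R versions does give, a zero matrix of the right dimensions.

A <- matrix(rnorm(20), nrow = 4, ncol = 5)
i <- integer(0)
a <- A[i, , drop = FALSE]   # a 0 x 5 matrix
crossprod(a)                # 5 x 5 matrix of zeros
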
2005 Oct 05 (2 replies): eliminate t() and %*% using crossprod() and solve(A,b)
Hi, I have a square matrix Ainv of size N-by-N where N ~ 1000, a rectangular matrix H of size N-by-n where n ~ 4, and a vector d of length N. I need X = solve(t(H) %*% Ainv %*% H) %*% t(H) %*% Ainv %*% d and H %*% X. It is possible to rewrite X in the recommended crossprod way: X <- solve(quad.form(Ainv, H), crossprod(crossprod(Ainv, H), d)) where quad.form() is a little
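
A base-R sketch of the same computation without quad.form() (a helper of the poster's), assuming, as the crossprod rewrite above already does, that Ainv is symmetric; with W = Ainv %*% H there is a single solve() and no explicit t().

N <- 1000; n <- 4
Ainv <- crossprod(matrix(rnorm(N * N), N, N)) / N   # symmetric stand-in for the real Ainv
H    <- matrix(rnorm(N * n), N, n)
d    <- rnorm(N)

W   <- Ainv %*% H                                # N x n
X   <- solve(crossprod(H, W), crossprod(W, d))   # (t(H) Ainv H)^{-1} t(H) Ainv d
fit <- H %*% X
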
2003 Sep 07 (3 replies): bug in crossprod? (PR#4092)
# Your mailer is set to "none" (default on Windows), hence we cannot send the bug report directly from R. Please copy the bug report (after finishing it) to your favorite email program and send it to r-bugs@r-project.org. # The last line of the following code produces a segmentation fault: x <- 1:10 f <- gl(5,2)
2004 Feb 16 (4 replies): Matrix multiplication
A, B, C, D are four matrices. How can A * Inverse(Transpose(A)*Transpose(B)*B*A + C) * Transpose(A) * Transpose(B) * D be written efficiently in R?
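
One efficient spelling, sketched with hypothetical conforming dimensions: writing M = B %*% A, the middle factor is crossprod(M) + C, the trailing product is crossprod(M, D), and solve() is applied to a system rather than forming an explicit inverse.

A <- matrix(rnorm(20), 5, 4)
B <- matrix(rnorm(30), 6, 5)
C <- diag(4)
D <- matrix(rnorm(12), 6, 2)

M   <- B %*% A                                          # t(M) %*% M = t(A) t(B) B A
res <- A %*% solve(crossprod(M) + C, crossprod(M, D))
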
2002 Mar 15 (1 reply): Thought on crossprod
Hi everyone, I do a lot of work with large variance matrices, and I like to use "crossprod" for speed and to keep everything symmetric, i.e. I often compute "crossprod(Q %*% t(A))" for "A %*% Sigma %*% t(A)", where "Sigma" decomposes as "t(Q) %*% Q". I notice in the code that the current definition of "crossprod" is > crossprod function (x,
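
A sketch of the identity being relied on: if Sigma = t(Q) %*% Q, then A %*% Sigma %*% t(A) equals crossprod(Q %*% t(A)), which is symmetric by construction.

A     <- matrix(rnorm(12), 3, 4)
Q     <- matrix(rnorm(16), 4, 4)
Sigma <- crossprod(Q)                            # t(Q) %*% Q

all.equal(A %*% Sigma %*% t(A), crossprod(Q %*% t(A)))
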
2008 May 01 (4 replies): efficient code - yet another question
Dear list members, The code given below corresponds to the PCA-NIPALS (principal component analysis) algorithm, adapted from the nipals function in the chemometrics package. The reason for using NIPALS instead of SVD is the ability of this algorithm to handle missing values, but that's a different story. I've been trying to find a way to improve (if possible) the efficiency of the code,
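
A minimal one-component NIPALS sketch, with no missing-value handling, so it is not a drop-in replacement for the chemometrics nipals() code the poster adapted; it only shows that the score and loading updates reduce to crossprod()-style operations, which is usually where the time goes.

X <- scale(matrix(rnorm(200 * 10), 200, 10), center = TRUE, scale = FALSE)

t_vec <- X[, 1]                                  # initial score vector
repeat {
  p <- crossprod(X, t_vec) / sum(t_vec^2)        # loadings: t(X) %*% t / t't
  p <- p / sqrt(sum(p^2))                        # normalise the loadings
  t_new <- X %*% p                               # updated scores
  if (sum((t_new - t_vec)^2) < 1e-10) { t_vec <- t_new; break }
  t_vec <- t_new
}
X_deflated <- X - tcrossprod(t_vec, p)           # deflate: remove the first component
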
2005 Jan 24 (1 reply): Weighted.mean(x,wt) vs. t(x) %*% wt
What is the difference between the above two operations?
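
A sketch of the difference: t(x) %*% wt is the raw weighted sum (returned as a 1 x 1 matrix), while weighted.mean(x, wt) divides that sum by sum(wt).

x  <- c(1, 2, 3, 4)
wt <- c(0.1, 0.2, 0.3, 0.4)

drop(t(x) %*% wt)        # sum(x * wt)
weighted.mean(x, wt)     # sum(x * wt) / sum(wt)
all.equal(weighted.mean(x, wt), drop(t(x) %*% wt) / sum(wt))
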
2008 Mar 10 (1 reply): crossprod is slower than t(AA)%*%BB
Dear R developers, The background for this email is that I was helping a PhD student improve the speed of her R code. I suggested replacing calls like t(AA) %*% BB with crossprod(AA,BB), since I expected this to be faster. The surprising result to me was that this change actually made her code slower. > ## Examples : > > AA <- matrix(rnorm(3000*1000),3000,1000) > BB <-
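
A sketch of the comparison under discussion; which form wins depends heavily on the BLAS R is linked against (reference BLAS versus an optimised one), so the timings are only meaningful on the machine in question.

AA <- matrix(rnorm(3000 * 1000), 3000, 1000)
BB <- matrix(rnorm(3000 * 1000), 3000, 1000)

system.time(M1 <- t(AA) %*% BB)
system.time(M2 <- crossprod(AA, BB))
all.equal(M1, M2)
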
2010 Nov 09 (2 replies): Calculate Mean from List
Dear all, I have a list of correlation coefficient matrices. Each matrix represents one date. For example:
A[[1]]
    A   B   C
A   1   0.2 0.3
B   0.2 1   0.4
C   0.3 0.4 1
A[[2]]
    A   B   C
A   1   0.5 0.6
B
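
A sketch of the element-wise mean of a list of equally-sized matrices (the numbers are illustrative, not the poster's data): sum with Reduce() and divide by the list length.

nm <- c("A", "B", "C")
A <- list(
  matrix(c(1, 0.2, 0.3,  0.2, 1, 0.4,  0.3, 0.4, 1), 3, 3, dimnames = list(nm, nm)),
  matrix(c(1, 0.5, 0.6,  0.5, 1, 0.7,  0.6, 0.7, 1), 3, 3, dimnames = list(nm, nm))
)
mean_mat <- Reduce(`+`, A) / length(A)
mean_mat
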
2010 May 08 (1 reply): matrix cross product in R different from cross product in Matlab
Hi all, I have been searching all sorts of documentation, reference cards and cheat sheets, but can't find why R's crossprod(A, B), which is identical to A %*% B, does not produce the same as Matlab's cross(A, B). Supposedly both calculate the cross product, and say so, so where do I go wrong? R is only doing sums in crossprod, however, as indicated by (z <- crossprod(1:4)) # = sum(1 +
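
A sketch of the distinction: R's crossprod(A, B) is the matrix product t(A) %*% B, not the 3-D vector cross product that Matlab's cross(A, B) computes; base R has no function of that name, so the cross product has to be written out (cross3 below is a hypothetical helper, not a base function).

a <- c(1, 0, 0)
b <- c(0, 1, 0)

crossprod(a, b)              # 1 x 1 matrix: the inner product t(a) %*% b
cross3 <- function(u, v) {   # Matlab-style cross product for length-3 vectors
  c(u[2] * v[3] - u[3] * v[2],
    u[3] * v[1] - u[1] * v[3],
    u[1] * v[2] - u[2] * v[1])
}
cross3(a, b)                 # c(0, 0, 1)
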