search for: descriptorfil

Displaying 11 results from an estimated 11 matches for "descriptorfil".

2012 Feb 20
1
bigmemory not really parallel
Hi all, I have a really big matrix that I want to run k-means on. I tried: >data <- read.big.matrix('mydata.csv', type='double', backingfile='mydata.bin', descriptorfile='mydata.desc') I'm using doMC to register multicore: >library(doMC) >registerDoMC(cores=8) >ans <- bigkmeans(data, k) In the system monitor it seems only one R thread is running. Is there anything I did wrong? Thanks in advance for any suggestions. Best, Lishu
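
For reference, the parallelism in bigkmeans() is commonly reported to come from foreach over the nstart random starts, so with the default nstart=1 a registered doMC backend has nothing to distribute. A minimal sketch under that assumption, reusing the file names from the post (k is a hypothetical cluster count):

## Sketch only -- assumes bigkmeans() farms its nstart random starts out
## to the registered foreach backend; file names follow the post
library(bigmemory)
library(biganalytics)
library(doMC)

registerDoMC(cores = 8)                      # 8 workers

data <- read.big.matrix('mydata.csv', type = 'double',
                        backingfile = 'mydata.bin',
                        descriptorfile = 'mydata.desc')

k <- 10                                      # hypothetical number of clusters
ans <- bigkmeans(data, centers = k, nstart = 8)   # one random start per worker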
2009 Jun 02
2
bigmemory - extracting submatrix from big.matrix object
...h.s") Warning message: In filebacked.big.matrix(nrow = numRows, ncol = numCols, type = type, : A descriptor file has not been specified. A descriptor named backup.desc will be created. However there is no such argument in "read.big.matrix". Although there is an argument "descriptorfile" in the function "as.big.matrix" but if I try to use it in "read.big.matrix", I get an error showing it as unused argument (as expected). _Problem-2:_ I want to get a filebacked *sub*matrix of "x", say only selected columns: x[, 1:100]. Is there any way of...
2012 May 09
2
ergm model, nodematch with diff=T
Dear all, I am new to network analysis, but since I have good data I started to read about it and learned how to use ergm and related packages. I generally get interesting results, but when I run a model including sociality and selective mixing effects for different groups, the model runs (and converges), yet I get a warning, as follows: mod <- ergm(network ~ edges + gwesp(0, fixed=T) +
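
For reference, a nodematch() term with diff=TRUE of the kind described here can be sketched on the faux.mesa.high network that ships with ergm; the attribute "Grade" is illustrative only, not the poster's data:

## Sketch only -- stand-in network and attribute, not the poster's model
library(ergm)
data(faux.mesa.high)

mod <- ergm(faux.mesa.high ~ edges + gwesp(0, fixed = TRUE) +
              nodematch('Grade', diff = TRUE))
summary(mod)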
2010 Aug 11
1
Bigmemory: Error Running Example
...e provided on http://www.bigmemory.org/. The example runs on the "airline data" and generates a summary of the csv files: library(bigmemory) library(biganalytics) x <- read.big.matrix("2005.csv", type="integer", header=TRUE, backingfile="airline.bin", descriptorfile="airline.desc", extraCols="Age") summary(x) This runs fine for the provided csv for year 1987 (size 121MB). However, for bigger files such as year 2005 (size 639MB), it gives the following error: Error in filebacked.big.matrix(nrow = nrow, ncol = ncol, type = type, : Problem...
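
Once a read.big.matrix() call like the one above succeeds, the descriptor file lets later sessions reattach the matrix without re-parsing the CSV; a sketch, assuming airline.desc and airline.bin sit in the working directory:

## Sketch only -- reattach the filebacked matrix created earlier
library(bigmemory)
library(biganalytics)

x <- attach.big.matrix('airline.desc')   # memory-maps the existing airline.bin
summary(x)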
2010 Dec 17
1
[Fwd: adding more columns in big.matrix object of bigmemory package]
...general, I want a way to modify an existing big.matrix object, i.e., add rows/columns, rename colnames, etc. I tried the following: > library(bigmemory) > x = read.big.matrix("test.csv", header=T, type="double", shared=T, backingfile="test.backup", descriptorfile="test.desc") > x[,"v4"] = "new" Error in mmap(j, colnames(x)) : Couldn't find a match to one of the arguments. (This functionality is available for ordinary data.frames in R.) Thanks in advance, Utkarsh
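
A big.matrix cannot grow in place, so one common workaround, sketched below with the post's file names, is to reserve the extra column up front with extraCols at read time; note that big.matrix objects hold numbers only, so a character value like "new" cannot be stored in any case:

## Sketch only -- reserve the extra column when the matrix is created
library(bigmemory)

x <- read.big.matrix('test.csv', header = TRUE, type = 'double',
                     backingfile = 'test.backup',
                     descriptorfile = 'test.desc',
                     extraCols = 'v4')     # adds an empty column named "v4"

x[, 'v4'] <- 0                             # fill it with numeric values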
2010 Jan 10
0
problems with bigmemory
...I am trying to read a large csv file (~11 GB - ~900,000 columns, 3000 rows) using the read.big.matrix command from the bigmemory package. I am using the following command: x<-read.big.matrix('data.csv', sep=',', header=TRUE, type='char', backingfile='data.bin', descriptorfile='data.desc') When the command starts everything seems fine, and examining the CPU usage on my Linux system, R is at 100%. Within about 10 minutes the data.bin file is created, but then CPU use drops to 0%, where it remains for at least 24 hours (I have let the pro...
2010 Dec 16
0
adding more columns in big.matrix object of bigmemory package
...general, I want a way to modify an existing big.matrix object, i.e., add rows/columns, rename colnames, etc. I tried the following: > library(bigmemory) > x = read.big.matrix("test.csv", header=T, type="double", shared=T, backingfile="test.backup", descriptorfile="test.desc") > x[,"v4"] = "new" Error in mmap(j, colnames(x)) : Couldn't find a match to one of the arguments. (This functionality is available for ordinary data.frames in R.) Thanks in advance, Utkarsh
2009 Mar 18
1
Is it possible to make rsync VMware split .vmdk's aware?
...the vmdk config file for the split disks so it can search for known data across all the split .vmdk files within one virtual disk? If this is possible, it would improve the rsync process in a major way! The .vmdk config file looks like this: Contents of "PVSBS2K3-1.vmdk": # Disk DescriptorFile version=1 CID=ee057ac0 parentCID=ffffffff createType="twoGbMaxExtentSparse" # Extent description RW 4192256 SPARSE "PVSBS2K3-1-s001.vmdk" RW 4192256 SPARSE "PVSBS2K3-1-s002.vmdk" RW 4192256 SPARSE "PVSBS2K3-1-s003.vmdk" RW 4192256 SPARSE "PV...
2010 Apr 23
2
bigmemory package woes
I have pretty big data sizes, like matrices of 0.5 to 1.5 GB, so once I need to juggle several of them I need a disk cache. I am trying to use the bigmemory package but am getting problems that are hard to understand: seg faults and the machine just hanging. I work, by the way, on Red Hat Linux, 64-bit, R version 10. The simplest problem is just saving matrices. When I do something like
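
For what it's worth, the usual pattern for parking an in-memory matrix on disk with bigmemory is as.big.matrix() with a backing and descriptor file, then attach.big.matrix() in later sessions; a minimal sketch with hypothetical file names, not a diagnosis of the seg faults described above:

## Sketch only -- filebacked copy of an ordinary matrix, reattached later
library(bigmemory)

m <- matrix(rnorm(1e6), nrow = 1000)       # stand-in for a 0.5-1.5 GB matrix

bm <- as.big.matrix(m, type = 'double',
                    backingfile = 'm.bin',
                    descriptorfile = 'm.desc')

rm(m, bm); gc()
bm2 <- attach.big.matrix('m.desc')         # memory-mapped again, little RAM used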
2011 Sep 29
1
efficient coding with foreach and bigmemory
...m[start.i[i]:end.i[i], 2] <- rnorm(n*info$p[i], info$a1[i], info$a2[i]) } # example getting ready to scale up to large matrix n <- 50 end.i <- cumsum(n*info$p) start.i <- c(0, end.i[-nrowz]) + 1 m <- filebacked.big.matrix(nrow=n, ncol=2, backingfile="test3.bin", descriptorfile="test3.desc") m[start.i[1]:end.i[1], 1] <- foreach(i=start.i[1]:end.i[1], .combine=c) %do% runif(1, info$a1[1], info$a2[1]) m[start.i[2]:end.i[2], 1] <- foreach(i=start.i[2]:end.i[2], .combine=c) %do% runif(1, info$a1[2], info$a2[2]) m[start.i[3]:end.i[3], 1] <- foreach(i=sta...
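
Two common tidy-ups for code along these lines, sketched with the post's own info, start.i and end.i objects: fill each block with a single vectorized runif()/rnorm() call instead of a foreach over individual elements, and if parallel workers are genuinely needed, hand them a descriptor from describe() and reattach inside the loop body rather than passing the big.matrix itself:

## Sketch only -- assumes info, start.i, end.i and the filebacked matrix m
## exist as in the post, and that a parallel backend is registered for %dopar%
library(bigmemory)
library(foreach)

for (i in seq_along(start.i)) {            # vectorized fill, one call per block
  idx <- start.i[i]:end.i[i]
  m[idx, 1] <- runif(length(idx), info$a1[i], info$a2[i])
}

desc <- describe(m)                        # lightweight handle for the workers
foreach(i = seq_along(start.i)) %dopar% {
  mm  <- attach.big.matrix(desc)
  idx <- start.i[i]:end.i[i]
  mm[idx, 2] <- rnorm(length(idx), info$a1[i], info$a2[i])
  NULL
}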
2009 Oct 09
1
rsync, --sparse and VM disk images
...ut the vmdk config file for the split disks so it can search for known data across all the split .vmdk files within one virtual disk? If this is possible, it would improve the rsync process in a major way! The .vmdk config file looks like this: Contents of "PVSBS2K3-1.vmdk": # Disk DescriptorFile version=1 CID=ee057ac0 parentCID=ffffffff createType="twoGbMaxExtentSparse" # Extent description RW 4192256 SPARSE "PVSBS2K3-1-s001.vmdk" RW 4192256 SPARSE "PVSBS2K3-1-s002.vmdk" RW 4192256 SPARSE "PVSBS2K3-1-s003.vmdk" RW 4192256 SPARSE "PVS...