similar to: R-beta: read.table and large datasets

Displaying 20 results from an estimated 20000 matches similar to: "R-beta: read.table and large datasets"

2000 Mar 03
1
tapply, sorting and the heap
Howdy gurus, I'm new and green and I was hoping for a tiny bit of your expertise. I'm running out of virtual memory (heap?) when summing with tapply. I've already used --vsize=90M on my HP-UX machine (details below). Can I pre-sort or do something else to prevent this error? Thanks, John Strumila john.strumila at corpmail.telstra.com.au > gc()["Vcells","total"] [1]
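One low-memory alternative, sketched with invented stand-in data (not the poster's vectors): rowsum() does grouped sums in compiled code and usually needs far less working heap than tapply(x, g, sum).

set.seed(1)
g <- sample(letters, 1e6, replace = TRUE)    # stand-in grouping variable
x <- rnorm(1e6)                              # stand-in values to sum
s1 <- tapply(x, g, sum)                      # original approach
s2 <- rowsum(x, g)                           # lower-overhead grouped sum
all.equal(as.vector(s1), as.vector(s2[, 1])) # same answers, smaller peak heap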
2007 Dec 08
1
FW: R memory management
Hi, I'm using R to collect data for a number of exchanges through a socket connection and I constantly run into memory problems, even though the task, I believe, is not that memory-consuming. I guess there is a miscommunication between R and WinXP about freeing up memory. This is the code: for (x in 1:length(exchanges.to.get)) { tickers<-sqlQuery(channel,paste("SELECT Symbol
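A minimal sketch of one common fix, assuming channel and exchanges.to.get are the RODBC objects from the post (the query text is truncated there): drop each iteration's result explicitly and collect, so the loop's peak footprint stays near one iteration's worth.

library(RODBC)   # assumed: channel was created earlier with odbcConnect()
for (x in seq_along(exchanges.to.get)) {
  tickers <- sqlQuery(channel,
                      paste("SELECT Symbol ...", exchanges.to.get[x]))  # query truncated in the post
  # ... process tickers ...
  rm(tickers)   # drop the reference so the memory becomes collectable
  gc()          # run the collector; R can reuse the space even if the OS does not see it returned
}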
2001 Nov 20
1
trouble running pixmap examples
I am having trouble running the 'read.pnm' examples in this package. Can anyone tell me what is wrong? I am using the current package and running Red Hat 7.2 (Intel). The other examples seem fine, but I can't seem to pull in files. [root at KENNY root]# R R : Copyright 2001, The R Development Core Team Version 1.3.0 (2001-06-22) R is free software and comes with ABSOLUTELY NO
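For reference, the usual smoke test is to read the sample image shipped with the package; if this fails too, the problem is the pixmap install rather than the user's files (the path below is the package's own sample, assuming it is present in the installed version).

library(pixmap)
fn <- system.file("pictures/logo.ppm", package = "pixmap")  # sample file shipped with pixmap
x  <- read.pnm(fn)
plot(x)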
2005 Feb 19
2
Memory Fragmentation in R
I have a data set of roughly 700MB which during processing grows up to 2G (I'm using a 4G Linux box). After the work is done I clean up with rm() and the state returns to 700MB. Yet I find I cannot run the same routine again: it claims it cannot allocate memory, even though gcinfo() claims there is 1.1G left. At the start of the second time ===============================
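A minimal way to see what R itself thinks is free after cleanup (the numbers below are illustrative, not the poster's): even when gc() reports plenty of free Vcells, a single large allocation can still fail on a 32-bit build because it needs one contiguous stretch of address space.

big <- matrix(0, 1e7, 10)   # stand-in for the large intermediate objects
rm(big)
gc(verbose = TRUE)          # report Ncells/Vcells in use and the current maxima after collection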
1999 Jun 04
1
dealing with large objects -- memory wasting ?
Consider the following:
> gcinfo(TRUE)
[1] FALSE
> rm(list=ls()); gc()
         free  total
Ncells 135837 250000
Vcells 747306 786432
------
> n <- 10000; p <- 20; X <- matrix(rnorm(n*p), n,p); gc()
Garbage collection [nr. 23]... 135839 cons cells free (54%) 4275 Kbytes of heap free (69%)
         free  total
Ncells 135829 250000
Vcells
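Part of what such a trace reflects is that rnorm(n*p) builds a full temporary vector which matrix() then copies, so the peak is roughly twice the final object. A sketch of keeping the peak closer to the final size (illustrative only, not the original poster's code):

n <- 10000; p <- 20
X <- matrix(0, n, p)                        # allocate the result first
for (j in seq_len(p)) X[, j] <- rnorm(n)    # fill column by column, one n-vector at a time
format(object.size(X), units = "Mb")        # ~1.5 Mb of doubles either way; only the peak differs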
1997 Oct 17
1
R-beta: memory problem vith "dist" on W95
Using Rseptbeta for Windows 95 I encountered this problem: > library(mva) > data(quakes) > dist(quakes) Error: memory exhausted I'm using a Pentium 133 with 32 MB of RAM! What must I do? Thanks, and excuse my English! Andrea Rossetti, rossetti at stat.unipg.it _______________________________________________________ Statistica & Informatica per la Gestione delle
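Back-of-the-envelope arithmetic for this case (quakes has 1000 rows): dist() stores n(n-1)/2 distances, which is small by today's standards but larger than the old fixed default heap, so the usual advice at the time was to start R with a larger --vsize.

n <- nrow(quakes)            # 1000 observations
cells <- n * (n - 1) / 2     # 499500 pairwise distances
cells * 8 / 2^20             # ~3.8 MB of doubles, before any intermediate copies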
1999 Nov 25
1
segfault in garbage collection (PR#344)
The following statements yield a segfault:
R --vanilla --nsize 500K
x <- rep(letters,10000)
f <- function(x) {z<-paste("\"",x,"\"",sep=""); z}
y <- f(x)
Segmentation fault
If I turn gcinfo on, I get Garbage collection [nr. 1]... 387529 cons cells free (75%) 3807 Kbytes of heap free (62%) Garbage collection [nr. 2]... 273868 cons cells free
2003 Apr 17
2
HoltWinters() - p-values for alpha, beta and gamma
I need your expertise on the theoretical approach for deducing p-values for the level, trend and seasonality parameters. I wonder if there's source code available. Thanks, group. Kel
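For orientation (this does not answer the p-value question): stats::HoltWinters() estimates alpha, beta and gamma by minimising squared one-step prediction errors and returns only point estimates, with no standard errors, so p-values are not available from the fitted object itself.

fit <- HoltWinters(co2)          # co2 is a built-in monthly series with a seasonal component
fit$alpha; fit$beta; fit$gamma   # point estimates only; no standard errors are stored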
2000 Dec 21
1
read.table memory requirements
Hi, I tried to read in a big table (1.4M rows, 4 fields each) using read.table() and ran out of 'cons' memory with the following message: Error: cons memory (2800000 cells) exhausted Could someone please explain how to guess the required nsize? My understanding of help(Memory) is that 'cons' memory should not be a limitation unless you create many "language" objects.
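A hedged sketch of how the same read is usually made cheaper today (the file name and column types below are invented, since the post does not give them): declaring colClasses and nrows lets read.table() skip type-guessing and avoid repeatedly re-growing its internal buffers.

d <- read.table("big.txt",                      # assumed file name
                header = FALSE,
                colClasses = rep("numeric", 4), # 4 fields per row, assumed numeric
                nrows = 1400000,                # ~1.4M rows per the post
                comment.char = "")              # skip comment scanning entirely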
2018 Jan 13
3
How to use stack maps
Is there an explanation anywhere of what code that uses a stack map looks like? I'm interested in writing a garbage collector, but it's not clear to me how my code should make use of the stack map format to actually locate roots in memory.
2006 Jun 25
1
R memory size increases
O/S: Solaris 9 R version: 2.2.1 I was getting out-of-memory errors from R when running a large job, so I've switched to a larger machine with 40G of shared memory. I issue the following command when starting R to increase the memory available to R: R --save --min-vsize=4G --min-nsize=4G When reading in a file, R responds with "could not allocate vector of size 146Kb." So I'm
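One likely culprit, sketched below: --min-nsize is counted in cons cells (roughly 28 bytes each on 32-bit builds, 56 on 64-bit), not bytes, so asking for 4G of them reserves an enormous fixed allocation and starves the vector heap. The flags and check are illustrative, not a prescription.

## Start-up shown as a comment (shell), with nsize given in cells rather than bytes:
##   R --save --min-vsize=4G --min-nsize=10M
gc()   # inside R, reports current Ncells/Vcells usage after the change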
2000 Mar 17
2
Windows Memory
I'm sure this question is answered in the help file, but likely I'm not reading it correctly. Running Windows version 1.00.0, loading a table (35K rows by 10 columns) from Excel using the read.table command, I receive the following message: Error: cons memory (350000 cells) exhausted See "help(Memory)" on how to increase the number of cons cells. From reading the
2007 Aug 20
2
[LLVMdev] ocaml+llvm
On Aug 14, 2007, at 4:35 AM, Gordon Henriksen wrote: > On Aug 14, 2007, at 06:24, Gordon Henriksen wrote: > >> The two major problems I had really boil down to identifying GC >> points in machine code and statically identifying live roots at >> those GC points, both problems common to many collection >> techniques. Looking at the problem from that perspective
2018 Jan 14
0
How to use stack maps
Hi, I implemented a garbage collector for a language I wrote in college using the LLVM GC statepoint infrastructure. Information for statepoints: https://llvm.org/docs/Statepoints.html Example usage of parsing the LLVM stackmap can be found at: https://github.com/dotnet/llilc/blob/master/lib/GcInfo/GcInfo.cpp https://llvm.org/docs/StackMaps.html#stackmap-format
2005 Dec 13
1
Technique for reading large sparse fwf data file
Dear list: A datafile was sent to me that is very large (92890 x 1620) and is *very* sparse. Instead of leaving the entries with missing data blank, each cell with missing data contains a dot (.) The data are binary in almost all columns, with only a few columns containing whole numbers, which I believe requires 2 bytes for the binary and 4 for the others. So, by my calculations (assuming 4
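A sketch of one way to read such a file, with invented widths and file name (the post gives neither): read.fwf() passes extra arguments through to read.table(), so the dot can be declared as the NA marker and every column forced to integer up front.

widths <- rep(1, 1620)                 # assumed: one character per field
d <- read.fwf("bigfile.dat",           # assumed file name
              widths     = widths,
              na.strings = ".",        # treat the dots as missing values
              colClasses = "integer",  # avoid per-column type-guessing
              buffersize = 1000)       # rows read per chunk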
2007 Aug 20
0
[LLVMdev] ocaml+llvm
On Aug 19, 2007, at 20:43, Chris Lattner wrote: > On Aug 14, 2007, at 4:35 AM, Gordon Henriksen wrote: > >> On Aug 14, 2007, at 06:24, Gordon Henriksen wrote: >> >>> The two major problems I had really boil down to identifying GC >>> points in machine code and statically identifying live roots at >>> those GC points, both problems common to many
2005 Dec 08
2
data.frame() size
Hi, in the example below, why is d 10 times bigger than m according to object.size()? It also takes around 10 times as long to create, which fits with object.size() being truthful. gcinfo(TRUE) also indicates a great deal more garbage-collector activity caused by data.frame() than by matrix().
$ R --vanilla
....
> nr = 1000000
> system.time(m<<-matrix(integer(1), nrow=nr, ncol=2))
[1]
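The same comparison is easy to reproduce; much of the gap in older versions came from data.frame() creating character row names for every row, something matrix() never does (the exact sizes below will vary by R version).

nr <- 1000000
m <- matrix(integer(1), nrow = nr, ncol = 2)
d <- data.frame(a = integer(nr), b = integer(nr))
object.size(m)   # two integer columns' worth
object.size(d)   # columns plus data.frame overhead (row names dominated in old R)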
2000 Apr 27
1
options(keep.source = TRUE) -- also for "library(.)" ?
> Subject: Re: [Rd] options(keep.source = TRUE) -- also for "library(.)" ? > From: Peter Dalgaard BSA <p.dalgaard@biostat.ku.dk> > Date: 27 Apr 2000 14:37:01 +0200 > > Martin Maechler <maechler@stat.math.ethz.ch> writes: > > > Can we [those of us who know how sys.source() works...] > > think of changing this? As it was possible for the base
2001 Dec 27
1
write.table and large datasets
Hi, I'll continue the discussion about the write.table() and problems with large datasets. The databases I have to work with are quite huge, 7500 obs x 1200 vars were on of the smallest of them. I usually write a perl script to preprocess them line-by-line and extract only the variables which I need later. This results into quite a manageable size but I have to have the dataset in ASCII