Displaying 20 results from an estimated 43 matches for "gcinfo".
2018 Jan 14 (0 replies): How to use stack maps
Hi,
I implemented a garbage collector for a language I wrote in college using
the llvm gc statepoint infrastructure.
Information for statepoints:
https://llvm.org/docs/Statepoints.html
Example usage of parsing the llvm stackmap can be found at:
https://github.com/dotnet/llilc/blob/master/lib/GcInfo/GcInfo.cpp
https://llvm.org/docs/StackMaps.html#stackmap-format
https://github.com/llvm-mirror/llvm/blob/4604874612fa292ab4c49f96aedefdf8be1ff27e/include/llvm/Object/StackMapParser.h
Thanks,
River Riddle
On Sat, Jan 13, 2018 at 10:02 AM, benzrf via llvm-dev <llvm-dev at lists.llvm.org>...
2018 Jan 13 (3 replies): How to use stack maps
Is there an explanation anywhere of what code that uses a stack map looks
like? I'm interested in writing a garbage collector, but it's not clear to
me how my code should make use of the stack map format to actually locate
roots in memory.
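As an aside, the idea being asked about can be sketched with a toy model in Python. The table layout below is invented purely for illustration (it is not the actual LLVM StackMap binary format): the compiler records, for each safepoint keyed by return address, which frame offsets hold live GC roots; at collection time the runtime walks the call stack and reads exactly those slots.

```python
# Toy stack map (hypothetical layout, NOT the LLVM StackMap binary format):
# for each safepoint, keyed by return address, the frame offsets that hold
# live GC roots at that point.
STACK_MAP = {
    0x401020: [8, 16],  # safepoint after call at 0x401020: roots at fp+8, fp+16
    0x401050: [8],
}

def find_roots(call_stack):
    """call_stack: list of (return_address, frame) pairs, innermost first.
    Each frame is modeled as a dict mapping frame offset -> stored value."""
    roots = []
    for ret_addr, frame in call_stack:
        for off in STACK_MAP.get(ret_addr, []):
            roots.append(frame[off])  # a heap pointer the GC must trace
    return roots

stack = [(0x401050, {8: "objA"}), (0x401020, {8: "objB", 16: "objC"})]
print(find_roots(stack))  # ['objA', 'objB', 'objC']
```

The real format adds indirection (function records, location kinds such as register vs. stack slot), but the lookup-by-return-address shape is the same.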
2007 Aug 20 (0 replies): [LLVMdev] ocaml+llvm
...milar to EH tables) that describe this, and
> the llvm code generator callback should enumerate these.
Yes, that's what I'm working on. Currently I have:
- Suppressed llvm.gcroot lowering in the LowerGC pass [may wish to
remove the pass from the default pipeline entirely]
- Added a GCInfo analysis
- Recorded in GCInfo the stack object indices for roots
Presumably, then, another pass can come along and use the analysis to
emit GC tables.
I'm still rummaging around the code generators trying to determine an
approach to identifying GC points in machine code which enables
li...
2007 Aug 20 (2 replies): [LLVMdev] ocaml+llvm
On Aug 14, 2007, at 4:35 AM, Gordon Henriksen wrote:
> On Aug 14, 2007, at 06:24, Gordon Henriksen wrote:
>
>> The two major problems I had really boil down to identifying GC
>> points in machine code and statically identifying live roots at
>> those GC points, both problems common to many collection
>> techniques. Looking at the problem from that perspective
2005 Dec 08 (2 replies): data.frame() size
Hi,
In the example below why is d 10 times bigger than m, according to
object.size ? It also takes around 10 times as long to create, which fits
with object.size() being truthful. gcinfo(TRUE) also indicates a great deal
more garbage collector activity caused by data.frame() than matrix().
$ R --vanilla
....
> nr = 1000000
> system.time(m<<-matrix(integer(1), nrow=nr, ncol=2))
[1] 0.22 0.01 0.23 0.00 0.00
> system.time(d<<-data.frame(a=integer(nr), b=integer(n...
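A rough analogue of that overhead can be shown in Python (a sketch only, not a statement about R's internals): a packed columnar layout versus one boxed record per row. The size ratio here is illustrative of why richer containers cost more than flat matrices.

```python
import sys
from array import array

n = 100_000

# Columnar: two packed integer arrays, a rough analogue of a 2-column matrix.
cols = [array("i", [0] * n), array("i", [0] * n)]
columnar_bytes = sum(sys.getsizeof(c) for c in cols)

# Row-oriented: one dict per row, a rough analogue of the extra per-row
# structure and bookkeeping a richer container carries.
rows = [{"a": 0, "b": 0} for _ in range(n)]
row_bytes = sys.getsizeof(rows) + sum(sys.getsizeof(r) for r in rows)

print(row_bytes > 5 * columnar_bytes)  # True: row storage is far larger
```

The extra allocation churn from all those small objects is also what drives the additional garbage-collector activity the post observes via gcinfo(TRUE).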
2020 Nov 01 (2 replies): parallel PSOCK connection latency is greater on Linux?
...milar
hardware. Is there a reason for this difference and is there a way to
avoid the apparent additional Linux overhead?
I attempted to isolate the behavior with a test that simply returns an
existing object from the worker back to the main R session.
library(parallel)
library(microbenchmark)
gcinfo(TRUE)
cl <- makeCluster(1)
(x <- microbenchmark(clusterEvalQ(cl, iris), times = 1000, unit = "us"))
plot(x$time, ylab = "microseconds")
head(x$time, n = 10)
On Windows/MacOS, the test runs in 300-500 microseconds depending on
hardware. A few of the 1000 runs are an order...
2007 Aug 20 (1 reply): [LLVMdev] ocaml+llvm
...and the llvm code generator
>> callback should enumerate these.
>
> Yes, that's what I'm working on. Currently I have:
>
> - Suppressed llvm.gcroot lowering in the LowerGC pass [may wish to remove the
> pass from the default pipeline entirely]
Right.
> - Added a GCInfo analysis
> - Recorded in GCInfo the stack object indices for roots
>
> Presumably, then, another pass can come along and use the analysis to emit GC
> tables.
Makes sense!
> I'm still rummaging around the code generators trying to determine an
> approach to identifying GC...
2000 Apr 27 (1 reply): options(keep.source = TRUE) -- also for "library(.)" ?
> Subject: Re: [Rd] options(keep.source = TRUE) -- also for "library(.)" ?
> From: Peter Dalgaard BSA <p.dalgaard@biostat.ku.dk>
> Date: 27 Apr 2000 14:37:01 +0200
>
> Martin Maechler <maechler@stat.math.ethz.ch> writes:
>
> > Can we [those of us who know how sys.source() works...]
> > think of changing this? As it was possible for the base
2005 Feb 19 (2 replies): Memory Fragmentation in R
I have a data set of roughly 700MB which during processing grows up to
2G (I'm using a 4GB Linux box). After the work is done I clean up (rm())
and the state is returned to 700MB. Yet I find I cannot run the same
routine again as it claims to not be able to allocate memory even though
gcinfo() claims there is 1.1G left.
At the start of the second time
===============================
used (Mb) gc trigger (Mb)
Ncells 2261001 60.4 3493455 93.3
Vcells 98828592 754.1 279952797 2135.9
Before Failing
==============
Garbage collection 459 = 312+51+96 (level 0)...
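What numbers like these can hide is that free memory need not be contiguous. A toy sketch of the usual explanation, fragmentation, with invented gap sizes: the total free space exceeds the request, yet no single gap can satisfy it.

```python
# Toy model of address-space fragmentation. The gap sizes are invented:
# after rm() and gc(), free space is scattered across several regions.
free_gaps_mb = [300, 250, 400, 150]
request_mb = 700

total_free = sum(free_gaps_mb)            # 1100 MB free in total
largest_gap = max(free_gaps_mb)           # but the biggest single gap is 400 MB
print(total_free, largest_gap >= request_mb)  # 1100 False
```

This is why an allocation can fail even though gcinfo() reports over a gigabyte free: the allocator needs one contiguous block, not a sum of fragments.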
1998 Mar 09 (2 replies): R-beta: read.table and large datasets
I find that read.table cannot handle large datasets. Suppose data is a
40000 x 6 dataset
R -v 100
x_read.table("data") gives
Error: memory exhausted
but
x_as.data.frame(matrix(scan("data"),byrow=T,ncol=6))
works fine.
read.table is less typing, I can include the variable names in the first
line, and in S-Plus it executes faster. Is there a fix for read.table on the
way?
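The scan-into-matrix workaround wins because it keeps the data in flat numeric storage throughout. The same idea in Python (illustrative only): read a CSV straight into packed per-column arrays instead of building a boxed object per cell.

```python
import csv
import io
from array import array

# Small in-memory stand-in for the 40000 x 6 file in the post.
text = "\n".join("1,2,3,4,5,6" for _ in range(1000)) + "\n"

cols = [array("d") for _ in range(6)]  # one packed double array per column
for row in csv.reader(io.StringIO(text)):
    for col, field in zip(cols, row):
        col.append(float(field))  # append into flat numeric storage

print(len(cols[0]), cols[5][0])  # 1000 6.0
```

The peak memory stays close to the size of the numeric data itself, which is the property the matrix(scan(...)) route exploits.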
2000 Mar 03 (1 reply): tapply, sorting and the heap
...ual memory (heap?) when summing using tapply. I've
already used --vsize=90M on my hpux machine. (details below)
Can I pre-sort or something to prevent my error?
thanks,
John Strumila
john.strumila at corpmail.telstra.com.au
> gc()["Vcells","total"]
[1] 11796480
> gcinfo(TRUE)
[1] FALSE
> t1<-tapply(trace$elapsed,list(trace$pid,trace$hv,trace$transno),sum)
Garbage collection [nr. 11]...
104285 cons cells free (41%)
90082 Kbytes of heap free (98%)
Garbage collection [nr. 12]...
102205 cons cells free (40%)
90050 Kbytes of heap free (98%)
Garbage collection [nr...
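For readers unfamiliar with tapply: the call above sums `elapsed` within each (pid, hv, transno) group. A minimal analogue in Python (an assumption about the intended semantics, not R's implementation) shows the grouping logic:

```python
from collections import defaultdict

def tapply_sum(values, *keys):
    # Analogue of R's tapply(values, list(k1, k2, ...), sum): sum the
    # values grouped by the tuple of grouping vectors.
    out = defaultdict(float)
    for v, k in zip(values, zip(*keys)):
        out[k] += v
    return dict(out)

elapsed = [1.0, 2.5, 0.5, 3.0]
pid = ["p1", "p1", "p2", "p2"]
hv  = ["a",  "b",  "a",  "a"]
print(tapply_sum(elapsed, pid, hv))
# {('p1', 'a'): 1.0, ('p1', 'b'): 2.5, ('p2', 'a'): 3.5}
```

The memory pressure in the original post comes from R materializing the full cross-product of group levels; a hash-keyed accumulator like this one only stores the groups that actually occur.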
1998 Mar 05 (1 reply): User time and system time
...428e-06 1.3824e-04 3.1205e-04
Relative gradient close to zero.
Current iterate is probably solution.
[1] 1.09 288.05 289.00 0.00 0.00
I can't really believe that it took only 1 second of user time and 288
seconds of system time. It didn't even do garbage collection in R (I
had gcinfo(T) set) and it wasn't swapping.
A call to proc.time() directly shows most of the time used is system
time.
R> proc.time()
[1] 16.30 577.01 13679.00 0.02 0.23
I looked at the code in src/unix/system.c and the structure seems
consistent. Has anyone else tested system.time() e...
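For context, user time is time spent executing the process's own code and system time is time spent in the kernel on its behalf. A CPU-bound loop should show almost all user time, which is what makes the reported 1s user / 288s system split look suspicious. A quick Python illustration of reading the split (not a diagnosis of the original problem):

```python
import os
import time

t0, w0 = os.times(), time.perf_counter()
total = sum(i * i for i in range(2_000_000))  # CPU-bound, user-space work
t1, w1 = os.times(), time.perf_counter()

user_s = t1.user - t0.user      # time executing our own code
sys_s = t1.system - t0.system   # time spent in kernel calls on our behalf
print(f"user={user_s:.2f}s system={sys_s:.2f}s wall={w1 - w0:.2f}s")
```

A workload like this reports nearly zero system time, so a result dominated by system time points at something outside the computation itself (I/O, paging, or a timing bug like the one being suspected in src/unix/system.c).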
2003 Jul 30 (2 replies): Should garbage collection be automatic in R sessions?
Hello all,
I am having problems with memory when running R on my PC. I do not
have many large objects in my workspace, and yet when trying to create a
new vector I often encounter this error message:
> lat <- header$lat[match(profile$id, header$id)]
Error: cannot allocate vector of size 4575 Kb
Since it seems like this may indicate that I don't have enough memory available, I
2020 Nov 02 (3 replies): parallel PSOCK connection latency is greater on Linux?
...nd is there a way to avoid the apparent additional Linux overhead?
> >
> > I attempted to isolate the behavior with a test that simply returns an existing object from the worker back to the main R session.
> >
> > library(parallel)
> > library(microbenchmark)
> > gcinfo(TRUE)
> > cl <- makeCluster(1)
> > (x <- microbenchmark(clusterEvalQ(cl, iris), times = 1000, unit = "us"))
> > plot(x$time, ylab = "microseconds")
> > head(x$time, n = 10)
> >
> > On Windows/MacOS, the test runs in 300-500 microseconds...
2005 Aug 04 (3 replies): Odd timing behaviour in reading a file
Hi all, please don't ask me why I tried this but.......
I have observed some odd behaviour in the time taken to read a file. I
tried searching the archives without much success, but that could be me.
The first time I read a (60Mb) CSV file, takes a certain amount of time.
The second time takes appreciably longer and the third and subsequent
times very much shorter times. See below,
$
2007 Dec 08 (1 reply): FW: R memory management
...}
if (length(data.)>3)
write.table(data.[1:(length(data.)-2)],paste(Working.dir,exchanges.to.get[x]
,'/',sub('\\*','\+',tickers[y]),'_.csv',sep=''),quote=F,col.names =
F,row.names=F)
close(con2)
}
rm(tickers)
gc()
With command gcinfo(TRUE) I got the following info (some examples) :
Garbage collection 16362 = 15411+754+197 (level 0) ...
6.3 Mbytes of cons cells used (22%)
2.2 Mbytes of vectors used (8%)
Garbage collection 16407 = 15454+756+197 (level 0) ...
13.1 Mbytes of cons cells used (46%)
10.4 Mbytes of vector...
2015 Jul 09 (5 replies): [LLVMdev] [RFC] New StackMap format proposal (StackMap v2)
...l 9, 2015, at 3:33 PM, Swaroop Sridhar <Swaroop.Sridhar at microsoft.com> wrote:
>
> Regarding Call-site size specification:
>
> CoreCLR (https://github.com/dotnet/coreclr) requires the size of the Call-instruction to be reported in the GCInfo encoding.
>
> The runtime performs queries for StackMap records using instruction offsets as follows:
> 1) Offset at the end of the call instruction (offset of next instruction-1) if the call instruction occurs in code where GC can only take control at safe-points.
As part of this...
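The query convention described, looking up the offset of the last byte of the call (next instruction's offset minus one), can be sketched as a simple table lookup. The table contents below are invented for illustration; CoreCLR's actual GcInfo encoding is a compact bit stream, not a plain sorted array.

```python
import bisect

# Invented safepoint table: sorted code offsets at which GC info is recorded.
SAFEPOINTS = [0x10, 0x2F, 0x58, 0x9C]

def find_safepoint(next_instr_offset):
    # Query with "offset of next instruction - 1", i.e. the last byte of the
    # call instruction, matching the convention in the quoted message.
    key = next_instr_offset - 1
    i = bisect.bisect_left(SAFEPOINTS, key)
    if i < len(SAFEPOINTS) and SAFEPOINTS[i] == key:
        return SAFEPOINTS[i]
    return None

print(hex(find_safepoint(0x30)))  # the call ending at 0x2F is in the table
```

Keying on end-of-call rather than start-of-call is what makes the size of the call instruction matter to the encoding.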
2005 Dec 09 (3 replies): [R] data.frame() size
...ct: Re: [R] data.frame() size
Matthew Dowle <mdowle at concordiafunds.com> writes:
> Hi,
>
> In the example below why is d 10 times bigger than m, according to
> object.size ? It also takes around 10 times as long to create, which
> fits with object.size() being truthful. gcinfo(TRUE) also indicates a
> great deal more garbage collector activity caused by data.frame() than
> matrix().
>
> $ R --vanilla
> ....
> > nr = 1000000
> > system.time(m<<-matrix(integer(1), nrow=nr, ncol=2))
> [1] 0.22 0.01 0.23 0.00 0.00
> > system.time(...
2020 Nov 04 (2 replies): parallel PSOCK connection latency is greater on Linux?
...al Linux overhead?
>>> >
>>> > I attempted to isolate the behavior with a test that simply returns an existing object from the worker back to the main R session.
>>> >
>>> > library(parallel)
>>> > library(microbenchmark)
>>> > gcinfo(TRUE)
>>> > cl <- makeCluster(1)
>>> > (x <- microbenchmark(clusterEvalQ(cl, iris), times = 1000, unit = "us"))
>>> > plot(x$time, ylab = "microseconds")
>>> > head(x$time, n = 10)
>>> >
>>> > On Windo...