Displaying 3 results from an estimated 3 matches for "7500000".
2003 Dec 06
7
Windows Memory Issues
...ipulation of large data
objects. For example, I have a function which receives a (large)
numeric matrix, matches against more data (maybe imported from MySql)
and returns a large list structure for further analysis. A typical call
may look like this:
> myInputData <- matrix(sample(1:100, 7500000, T), nrow=5000)
> myPortfolio <- createPortfolio(myInputData)
It seems I can only repeat this process two or three times before I have to
restart R (to get the memory back). I use the same object names
(myInputData and myPortfolio) each time, so I am not creating more large
objects ..
I think...
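A minimal sketch in R of one common answer to this kind of question: explicitly remove the large bindings and call `gc()` between iterations instead of only rebinding the names. `createPortfolio` is the poster's own function and is assumed here; `rm()` and `gc()` are base R.

```r
# Same pattern as the post, but free the previous objects explicitly
# before rebuilding them. Rebinding a name does free the old value
# eventually, but an explicit rm() + gc() releases it immediately and
# gc() also reports how much memory R is currently using.
myInputData <- matrix(sample(1:100, 7500000, TRUE), nrow = 5000)
myPortfolio <- createPortfolio(myInputData)  # poster's function (assumed)

rm(myInputData, myPortfolio)  # drop the bindings
gc()                          # collect and report memory usage
```

Whether this fully avoids the restarts depends on the platform allocator (the thread concerns Windows, where address-space fragmentation with very large objects was a known issue at the time), so this is a mitigation sketch rather than a guaranteed fix.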
2014 Oct 14
2
[LLVMdev] [RFC] Less memory and greater maintainability for debug info IR
...eing dealt with.
Metadata node counts stabilize much earlier in the process. The rest of
the numbers are based on counting `MDNodes` and their respective
`MDNodeOperands`, and multiplying by the cost of their operands. Here's
a dump from around the peak metadata node count:
LineTables = 7500000[30000000], InlinedLineTables = 6756182, Directives = 7611669[42389128], Arrays = 570609[577447], Others = 1176556[5133065]
Tag = 256, Count = 554992, Ops = 2531428, Name = DW_TAG_auto_variable
Tag = 16647, Count = 988, Ops = 4940, Name = DW_TAG_GNU_template_parameter_pack...
2014 Oct 13
9
[LLVMdev] [RFC] Less memory and greater maintainability for debug info IR
In r219010, I merged integer and string fields into a single header
field. By reducing the number of metadata operands used in debug info,
this saved 2.2GB on an `llvm-lto` bootstrap. I've done some profiling
of DW_TAGs to see what parts of PR17891 and PR17892 to tackle next, and
I've concluded that they will be insufficient.
Instead, I'd like to implement a more aggressive plan,