Displaying 20 results from an estimated 68 matches for "1.2gb".
2015 Mar 17
2
Reduce memory peak when serializing to raw vectors
Hi,
I've been doing some tests using serialize() to a raw vector:
df <- data.frame(runif(50e6, 1, 10))
ser <- serialize(df, NULL)
In this example the data frame and the serialized raw vector occupy ~400MB each, for a total of ~800MB. However, the memory peak during serialize() is ~1.2GB:
$ cat /proc/15155/status |grep Vm
...
VmHWM: 1207792 kB
VmRSS: 817272 kB
We work with very
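For reference, the sizes involved can be checked directly in R; a minimal sketch, assuming the same 50e6-value data frame as above (numbers are approximate):

    df  <- data.frame(x = runif(50e6, 1, 10))   # 50e6 doubles ~= 400MB
    ser <- serialize(df, NULL)
    print(object.size(df),  units = "Mb")       # size of the data frame
    print(object.size(ser), units = "Mb")       # size of the raw vector
    gc()   # the "max used" column reflects the transient peak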
2015 Mar 17
2
Reduce memory peak when serializing to raw vectors
Presumably one could stream over the data twice: the first pass to get the size, without storing the data. Slower but more memory-efficient, unless I'm missing something.
Michael
On Tue, Mar 17, 2015 at 2:03 PM, Simon Urbanek <simon.urbanek at r-project.org>
wrote:
> Jorge,
>
> what you propose is not possible because the size of the output is
> unknown, that's why a
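A related memory-lean workaround (not the two-pass scheme itself, but in the same spirit: it streams to disk so the exact size is known before the raw vector is allocated; assumes enough temp disk space):

    tmp <- tempfile()
    con <- file(tmp, open = "wb")
    serialize(df, con)                # streamed; no growing in-memory buffer
    close(con)
    n   <- file.info(tmp)$size       # exact serialized size is now known
    ser <- readBin(tmp, what = "raw", n = n)
    unlink(tmp)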
2007 Dec 04
1
NOSSO(r) compression
I just downloaded Solaris. They have two versions: the split DVD, which comes as two 1.2GB zipped chunks, and an EXE which is 1.2GB but uncompresses to the full 2.5+GB. I found some info on them here:
http://www.nosltd.com/nosso.html
Any idea how they achieve such great compression? Looks like they're
using a proprietary algorithm and don't offer any downloads. Also looks
like
2006 Jun 14
6
memory limit?
I've got a simple (32-bit) Windows application (compiled in Borland C++ Builder):
int *p;
while (1) {
    p = new int[10000000];  // allocates 40 MB of memory each iteration
}
On Windows XP it crashes after 50 iterations (i.e. 2 GB allocated), but on Wine it crashes after 30 iterations (1200 MB allocated).
Is it impossible to use 2GB of memory in Wine? Why is only 1.2GB available?
I've got Wine 0.9.13,
2015 Mar 17
0
Reduce memory peak when serializing to raw vectors
Jorge,
What you propose is not possible because the size of the output is unknown; that's why a dynamically growing PStream buffer is used - it cannot be pre-allocated.
Cheers,
Simon
> On Mar 17, 2015, at 1:37 PM, Martinez de Salinas, Jorge <jorge.martinez-de-salinas at hp.com> wrote:
>
> Hi,
>
> I've been doing some tests using serialize() to a raw vector:
>
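The mechanism behind the peak can be illustrated with a small sketch of a doubling buffer (R's actual PStream buffer is C code; this R version is only an illustration): when the buffer fills, a larger one is allocated and the old contents are copied over, so old and new buffers briefly coexist.

    buf  <- raw(1024)
    used <- 0L
    append_bytes <- function(bytes) {
        need <- used + length(bytes)
        if (need > length(buf)) {
            bigger <- raw(max(2L * length(buf), need))  # old + new coexist here
            bigger[seq_len(used)] <- buf[seq_len(used)]
            buf <<- bigger
        }
        buf[used + seq_along(bytes)] <<- bytes
        used <<- need
    }
    append_bytes(as.raw(1:100))   # usage: append 100 bytes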
2015 Mar 17
0
Reduce memory peak when serializing to raw vectors
In principle, yes (that's what Rserve serialization does), but AFAIR we don't have the infrastructure in place for that. But then you may as well serialize to a connection instead. To be honest, I don't see why you would serialize anything big to a vector - you can't really do anything useful with it that you couldn't do with the streaming version.
Sent from my iPhone
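The streaming route Simon mentions looks like this in outline (a sketch reusing the df from the earlier example; here the connection is a gzipped file):

    con <- gzfile("df.ser.gz", open = "wb")
    serialize(df, con)           # streams straight to the file
    close(con)
    con <- gzfile("df.ser.gz", open = "rb")
    df2 <- unserialize(con)      # read it back later
    close(con)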
2010 Jul 04
4
OT: Problems updating OS to deal with fading R support for Ubuntu Hardy Heron...
Evening folks:
Not sure where this fits into the picture, so bear with me for a few moments while I think out loud to lay out the particulars of my problem.
- I would love to be employed using R for a living, but so far I have just worked with it on my home machine to develop a basic level of skill with it.
- I run Ubuntu Linux 8.04 LTS on a machine with a 2.5GHz Celeron processor and 1.2GB of memory.
2009 Jul 14
2
How to import BIG csv files with separate "map"?
Hi all,
I am having problems importing a VERY large dataset into R. I have looked into the package ff, and that seems to suit me, but from all the examples I have seen, it either requires manual creation of the database or needs a read.table kind of step. Being survey data, the file is big (about 20,000 by 50,000, for a total of about 1.2Gb in plain text) and the memory I have
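For what it's worth, ff's chunked readers avoid loading everything at once; a hedged sketch (the filename and chunk sizes are made up, adjust to your memory budget):

    library(ff)
    dat <- read.csv.ffdf(file = "survey.csv",
                         header = TRUE,
                         first.rows = 10000,    # sniff column types on a chunk
                         next.rows  = 100000)   # then read 100k rows at a time
    dim(dat)   # an ffdf behaves much like a data.frame, but lives on disk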
2009 Aug 14
1
libname version in R
Hello friends,
in SAS there is the 'libname' statement. You associate a path, e.g. ab with "C:\My Paste\Works", and when you do
data ab.example;
(...)
run;
you are saving the dataset "example" in SAS format, and you can watch
C:\My Paste\Works\example "growing" ("example"'s size: 200MB... refresh...
500MB... refresh... 1.2GB...)
How can I do
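A rough R analogue of the libname idea, as a hedged sketch ('example' stands in for whatever object is being saved): fix the directory once, then save and load objects there in R's native serialization format.

    lib <- "C:/My Paste/Works"                         # the 'libname' path
    saveRDS(example, file.path(lib, "example.rds"))    # the file grows as it writes
    example <- readRDS(file.path(lib, "example.rds"))  # read it back later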
2009 Oct 13
1
Delay for --remove-source-files
Hi,
I'm using rsync with -aP --remove-source-files to move files from one machine to another while watching the progress. I'm under the impression that rsync is deleting the transmitted source files on the fly, not at the very end, but with a delay of 2-3 files; i.e., if 10 files are moved, the first source file is deleted after the third or fourth file has been transmitted.
However, if rsync is
2023 Oct 10
1
Is it possible to reduce the number of workers for rpcd_winreg?
Dear Samba Group,
I recently updated our Samba AD DCs to 4.18.6 (from 4.7.7). We use roaming profiles and a login script which queries active shares and printers on the logon servers. Since the update I see a lot of
rpcd_winreg processes when a user logs on. There are around 40 of these processes, each consuming around 60MB. The servers have 1-2GB of RAM assigned, so they start swapping every time a
2023 Oct 10
1
Is it possible to reduce the number of workers for rpcd_winreg?
As recommended by Volker Lendecke, I added the option:
rpcd_winreg:idle_seconds = 5
Now on 2GB servers swapping is avoided, and on 1GB servers the server goes back to normal in ~5 seconds after the logon process has finished.
On 10.10.2023 at 13:58, Achim Gottinger via samba wrote:
> Dear Samba Group,
>
> I recently updated our Samba AD DCs to 4.18.6 (from 4.7.7). We use roaming
2020 Jun 30
2
RFC: Adding a staging branch (temporarily) to facilitate upstreaming
To facilitate collaboration on an upstreaming effort (see "More context" below), we'd like to push a branch (with history) called "staging/apple" to github.com/llvm/llvm-project to serve as an official contribution to the LLVM project. This enables motivated parties to work with us to craft incremental patches for review on Phabricator. This branch would live during the
2004 May 14
1
help with memory greedy storage
Hello,
I have a problem with a self-written routine taking a lot of memory (>1.2Gb). Maybe you can suggest some enhancements; I'm pretty sure that my implementation is not optimal...
I'm creating many linear models and storing coefficients, ANOVA p-values... everything I need in different lists, which are then finally returned in a list (a list of lists).
The input is a matrix with 84 rows
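One common fix for this pattern, as a hedged sketch (mat and pheno are made-up stand-ins for the inputs): build the result with lapply() rather than growing lists inside a loop, and keep only the numbers needed instead of full lm objects, which drag their whole model frame along.

    results <- lapply(seq_len(nrow(mat)), function(i) {
        fit <- lm(mat[i, ] ~ pheno)
        list(coef = coef(fit),                  # keep just the numbers...
             p    = anova(fit)[["Pr(>F)"]][1])  # ...not the full lm object
    })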
2020 Jun 30
10
RFC: Adding a staging branch (temporarily) to facilitate upstreaming
On Mon, Jun 29, 2020 at 9:43 PM Mehdi AMINI via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Hey Duncan,
>
> On Mon, Jun 29, 2020 at 8:28 PM Duncan Exon Smith via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> To facilitate collaboration on an upstreaming effort (see "More context"
>> below), we'd like to *push a branch* (with history)
2008 Aug 28
2
Fortress Forever, TFC and of course..Wine [Problems]
So, TFC works great under Wine, no problems except one: I can't connect to any servers. The world renders fine, my sound is fine, everything is good except connecting to a server. I can browse the server list and see updated info from the master server; I just can't join any games. Latest Wine.
Fortress Forever: I cannot even ping the master server, nor can I enter the game. I can browse menus and set
2009 Dec 15
1
samba4 size
Hi,
I've built Samba 4 from the git repository, but... the resulting (stripped) binaries take 504 MB of disk space! Is that expected, or did I do something wrong?
theHog
2008 Aug 09
1
Reading large datasets and fitting logistic models in R
Hi R-experts,
Does anyone have experience using R for handling large-scale data (millions of rows, hundreds or thousands of features)?
What is the largest size of data that anyone has used with glm?
Also, is there a library to read data in sparse data format (like SVMlight
format)?
Thanks
Pradheep
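On the sparse-format question, e1071 can read SVMlight-style files (a hedged sketch; "train.svmlight" is a made-up filename), and for models at that scale the biglm package fits GLMs in chunks via bigglm():

    library(e1071)
    m <- read.matrix.csr("train.svmlight")  # list with sparse x plus labels y
    x <- m$x   # a SparseM matrix.csr of predictors
    y <- m$y   # the target values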
2010 Jul 06
6
Xen 3.2.1-2 on Debian Lenny 2.6.26 2.6.26-24
Hi,
Recently I have installed Debian Lenny on two different machines (different RAM sizes, disks, Xeon dual and quad core, filesystems both xfs and ext3, etc.). Package versions:
Dom0:
ii  libc6-xen       2.7-18lenny4  GNU C Library: Shared libraries [Xen version]
ii  libxenstore3.0  3.2.1-2       Xenstore communications