Displaying 9 results from an estimated 9 matches for "34gb".
2010 Apr 06
3
svm of e1071 package
...mory?? Is there anything I can do to solve this problem, or is it a problem in the e1071 package? By "problem in the e1071 package" I mean: does svm() in e1071 normally consume that much memory? If svm() really consumes this much memory, then I have to think of some other way to train the SVM. If 34GB of RAM is not enough for 1.4 GB of data, then I am in trouble: Amazon offers at most 68.4GB of RAM.
Please help. Thanks in advance.
Regards
Shyama
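For scale: a kernel SVM implementation may materialize an n × n kernel matrix during training, so memory grows quadratically with the number of rows, not linearly with file size. A back-of-the-envelope sketch (the row count n = 100,000 is an illustrative assumption, not a figure from the original post):

```shell
#!/bin/sh
# Rough estimate: memory for an n x n kernel matrix of 8-byte doubles.
# n = 100000 is an assumed row count for illustration only.
n=100000
bytes=$(( n * n * 8 ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # prints: 74 GiB
```

At that row count the kernel matrix alone would already exceed 34GB, regardless of the 1.4 GB on-disk size of the data.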
2013 May 11
4
Defragmentation of large files
...ents. If I run btrfs
fi defrag -v /path/to/file.vmdk, it returns immediately with no messages
but an exit status of 20, and running filefrag again shows that no
defragmentation has taken place.
This is on Ubuntu 13.04, kernel 3.9.0-rc8 and v0.20-rc1 of the tools,
and the file in the example is 34GB.
Any ideas what's happening here?
Cheers,
---tim
2010 Apr 22
1
Odd behavior
...PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
9539 root 16 0 30136 19m 820 R 6.3 0.0 6:29.28 rsync
9540 root 15 0 271m 46m 260 S 0.3 0.1 0:12.13 rsync
10047 root 15 0 10992 1212 768 R 0.3 0.0 0:00.01 top
1 root 15 0 10348 700 592 S 0.0 0.0 0:02.15 init
...etc...
But nevertheless, 34GB RAM is in use. But what really kills things is
that at some point, each rsync all of a sudden ramps up to 100% CPU
usage, and all activity for that rsync essentially stops. In the
above example, 2 of the 3 rsyncs are in that 100% CPU state, while the
third rsync is only at 6.3%, but that...
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):
[root@einstein;0]~# zpool status poolm
pool: poolm
state: FAULTED
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
poolm UNAVAIL 0 0 0 insufficient replicas
mirror UNAVAIL 0 0 0 corrupted data
c2t0d0s0 ONLINE 0
2024 Aug 18
0
[Bug 3720] New: ssh-keygen -R fails and/or leaves temp files when run concurrently
...concurrently could also run into issues.
For context, I have a server running an automated process that, long
story short, runs `ssh-keygen -R` a few hundred times every 20 minutes
or so (don't ask), and I recently discovered 750,000 temporary files
left over in ~/.ssh taking up approximately 34GB of hard drive space. I
solved the problem by removing the concurrency on `ssh-keygen` runs.
Note also that `ssh` itself does *not* appear to have concurrency
issues when adding hosts to the known_hosts file.
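Removing the concurrency, as the reporter did, amounts to serializing the `ssh-keygen -R` runs so only one process rewrites known_hosts at a time. A minimal sketch using flock(1); the lock-file path is an illustrative assumption, not anything OpenSSH uses itself:

```shell
#!/bin/sh
# Serialize `ssh-keygen -R` behind an exclusive file lock so concurrent
# runs cannot race on the known_hosts rewrite.
host="${1:?usage: $0 hostname [known_hosts]}"
hosts_file="${2:-$HOME/.ssh/known_hosts}"
# flock(1) blocks until the lock is free, then runs the command.
flock "$hosts_file.lock" ssh-keygen -R "$host" -f "$hosts_file"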
2023 Jul 27
1
High memory consumption for small AXFR
...dedicated NSD process. The server has 40GB RAM. Without .test the server has ~20GB RAM consumption.
Testing:
1. AXFR of the test. zone with 5 RRs -> memory consumption stable at 20GB
2. AXFR-style IXFR of the test. zone with 50 million RRs (only NS records) -> memory consumption increased by ~14GB to 34GB
15:05:46 nsd-trial[635021]: xfrd: zone test committed "received update to serial 1690380825 at 2023-07-26T15:05:46 from xxx TSIG verified with key yyy"
15:13:53 nsd-trial[635022]: zone test. received update to serial 1690380825 at 2023-07-26T15:05:46 from xxx TSIG verified with key yy...
2009 Jan 15
5
real HDD usage of XEN images
Hello,
I am creating my XEN VMs with virt-install (see below).
When I create new images I first run "df -h" to see if there is
still enough space left on the drive.
Are the XEN images pre-allocated, or does XEN only use the space that
is really used by the VM inside the image?
I now have the problem that a "du -h" inside my /VM folder gives me
nearly a higher number than
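Whether an image is preallocated can be checked directly: a sparse (non-preallocated) file reports a large apparent size but few allocated blocks, which is exactly the kind of du/df discrepancy described. A small demo on a throwaway file (not a real Xen image):

```shell
#!/bin/sh
# Create a 1 GiB sparse file: large apparent size, almost no allocated blocks.
img=$(mktemp)                      # throwaway demo file, not a real VM image
truncate -s 1G "$img"
du -h "$img"                       # allocated blocks: near zero
du -h --apparent-size "$img"       # apparent size: 1.0G
rm -f "$img"
```

Running the same two du invocations against the files under /VM would show whether virt-install allocated them sparsely.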
2019 Oct 11
2
[RFC] Propeller: A frame work for Post Link Optimizations
Is there large value from deferring the block ordering to link time? That
is, does the block layout algorithm need to consider global layout issues
when deciding which blocks to put together and which to relegate to the
far-away part of the code?
Or, could the propeller-optimized compile step instead split each function
into only 2 pieces -- one containing an "optimally-ordered" set of
2019 Oct 14
2
[RFC] Propeller: A frame work for Post Link Optimizations
...n of BOLT. Memory consumption increases rapidly with binary size; on a large binary with 350M of text, it consumes ~70G of RAM (older BOLT version). Even across several runs varying the thread count from 1 to 70, the running time and memory overhead remain over 198 seconds and 34GB respectively (multithreading reduces BOLT’s time overhead by at most 15% but also increases memory overhead by 10%, to up to 38 GB).
+ Overheads of CFI
TLDR; clang is pathological and the actual CFI bloat will go down from 7x to 2x.
Let us present the CFI Bloats for each benchmark with the default option, whi...