search for: 7gb

Displaying 20 results from an estimated 189 matches for "7gb".

2013 Nov 12
3
[LLVMdev] Debug info: type uniquing for C++ and the status on building clang with "-flto -g"
Hi All, Type uniquing for C++ is in. Some data for Xalan with -flto -g: 9.9MB raw dwarf size, peak memory usage at 2.8GB. The raw dwarf size was 58MB and memory usage was 7GB back in May 2013. Other efforts at size reduction helped, and type uniquing improved on top of those. Data on building clang with "-flto -g" after type uniquing: 3.4GB MDNodes after parsing all bc files, 7GB MDNodes after linking all bc files, 4.6GB DIEs, 4GB MCContext --> The m...
2013 Nov 12
3
[LLVMdev] Debug info: type uniquing for C++ and the status on building clang with "-flto -g"
...and progress plans - it's great to see the > impact your changes have had and ideas for future direction. > > Type uniquing for C++ is in. Some data for Xalan with -flto -g: >> 9.9MB raw dwarf size, peak memory usage at 2.8GB >> The raw dwarf size was 58MB, memory usage was 7GB back in May, 2013. >> Other efforts at size reduction helped, and type uniquing improved on top >> of those. >> >> Data on building clang with "-flto -g" after type uniquing: >> 3.4GB MDNodes after parsing all bc files, 7GB MDNodes after linking all >>...
2013 Nov 12
0
[LLVMdev] Debug info: type uniquing for C++ and the status on building clang with "-flto -g"
...sending this summary and progress plans - it's great to see the impact your changes have had and ideas for future direction. Type uniquing for C++ is in. Some data for Xalan with -flto -g: > 9.9MB raw dwarf size, peak memory usage at 2.8GB > The raw dwarf size was 58MB, memory usage was 7GB back in May, 2013. > Other efforts at size reduction helped, and type uniquing improved on top > of those. > > Data on building clang with "-flto -g" after type uniquing: > 3.4GB MDNodes after parsing all bc files, 7GB MDNodes after linking all > bc files > What...
2016 Mar 01
0
Possible Memory Savings for tools emitting large amounts of existing data through MC
...nker with a few extra bits. But the MCStreamer API means any bytes you write to the streamer stay in memory until you "Finish" - so if you're dwp/linking large enough inputs, you have them all in memory when you really don't need them. For example, the dwp file I was generating is 7GB, but the tool with the memory improvements only has a high water mark of 2.3GB. I’m a bit surprised by those numbers. If the output is 7GB, don’t you need to have a high watermark of 7GB at emission time even with your scheme? Also, in D17694 you mention that the memory peak goes from 9.6GB to 2....
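Whether the high-water mark has to match the output size depends on whether every emitted byte must stay resident until the file is finished. A minimal sketch of that distinction in Python, with invented class names (this is not the MCStreamer API): a writer that buffers everything until finish() peaks at roughly the output size, while one that flushes each chunk as it is emitted peaks at roughly one chunk.

    import io

    class BufferingWriter:
        """Keeps every emitted byte in memory until finish() -- peak memory ~= output size."""
        def __init__(self, path):
            self.path = path
            self.buf = io.BytesIO()

        def emit(self, data: bytes):
            self.buf.write(data)              # stays resident until finish()

        def finish(self):
            with open(self.path, "wb") as f:
                f.write(self.buf.getvalue())  # whole output materialized here

    class StreamingWriter:
        """Flushes each emitted chunk straight to disk -- peak memory ~= one chunk."""
        def __init__(self, path):
            self.f = open(path, "wb")

        def emit(self, data: bytes):
            self.f.write(data)                # nothing accumulates

        def finish(self):
            self.f.close()

A real assembler or linker still has to hold back anything it may need to patch later (offsets, relocations), which presumably accounts for a residual peak like the 2.3GB mentioned above rather than a near-zero one.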
2016 Feb 29
4
Possible Memory Savings for tools emitting large amounts of existing data through MC
...nker with a few extra bits. But the MCStreamer API means any bytes you write to the streamer stay in memory until you "Finish" - so if you're dwp/linking large enough inputs, you have them all in memory when you really don't need them. For example, the dwp file I was generating is 7GB, but the tool with the memory improvements only has a high water mark of 2.3GB. > memory usage wasn’t really a problem so far, but you could try running > llvm-dsymutil on bin/clang for a larger example (takes about a minute to > finish). > Was thinking of something more accessible t...
2018 Aug 02
2
[PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker
...07:34 PM, Michal Hocko wrote: >> Do you have any numbers for how this works in practice? > > It works in this way: for example, we can set the parameter, balloon_pages_to_shrink, > to shrink 1GB of memory each time shrink scan is called. Now, we have an 8GB guest, and we balloon > out 7GB. When shrink scan is called, the balloon driver will get back 1GB of memory and give > it back to mm, so the ballooned memory becomes 6GB. Since the shrinker might be called concurrently (am I correct?), the balloon might deflate far more than needed if it releases so much memory. If shrinker is...
2018 Aug 02
2
[PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker
...07:34 PM, Michal Hocko wrote: >> Do you have any numbers for how this works in practice? > > It works in this way: for example, we can set the parameter, balloon_pages_to_shrink, > to shrink 1GB of memory each time shrink scan is called. Now, we have an 8GB guest, and we balloon > out 7GB. When shrink scan is called, the balloon driver will get back 1GB of memory and give > it back to mm, so the ballooned memory becomes 6GB. Since the shrinker might be called concurrently (am I correct?), the balloon might deflate far more than needed if it releases so much memory. If shrinker is...
2009 Mar 19
2
Package HDF5
The package works fine, but it seems to provide access only to the whole data set in a file, not to individual fields (function hdf5load(file, load = TRUE, verbosity = 0, tidy = FALSE)). Since HDF5 organizes data and metadata in a hierarchical structure, we need to explore the file selectively in our problem (file > 7GB), but that does not seem possible with the functions in the package. Any suggestions? Please let me know about such capabilities, or about plans to make them available in the future.
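For the kind of selective access being asked about, here is a hedged sketch in Python using h5py rather than the R hdf5 package from the question; the file, group and dataset names are made up. It walks the hierarchy and reads a single slice of one dataset, so a >7GB file never has to be loaded whole.

    import h5py

    # Hypothetical file and dataset names, for illustration only.
    with h5py.File("big_experiment.h5", "r") as f:
        f.visit(print)                   # list every group/dataset in the hierarchy
        dset = f["/run01/measurements"]  # a handle into the file; nothing read yet
        first_rows = dset[:1000]         # only this slice is read from disk
        print(dset.shape, dset.dtype, first_rows.mean())

R packages such as rhdf5 expose similar selective reads; the sketch is only meant to show that HDF5 itself supports partial access to fields.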
2008 Sep 25
3
Size of reformatted USB drive
I just reformatted an 8Gb USB drive as ext3. While it was FAT32, it was reported as having well over 7Gb free (I did not note the exact capacity). I reformatted with mkfs.ext3 /dev/sda1. Now it is reported (this is with Properties in Nautilus) as having 6.8Gb capacity (free space, actually). Does it make sense that ext3 has less available space than FAT32?
2013 Nov 12
0
[LLVMdev] Debug info: type uniquing for C++ and the status on building clang with "-flto -g"
...it's great to see >> the impact your changes have had and ideas for future direction. >> >> Type uniquing for C++ is in. Some data for Xalan with -flto -g: >>> 9.9MB raw dwarf size, peak memory usage at 2.8GB >>> The raw dwarf size was 58MB, memory usage was 7GB back in May, 2013. >>> Other efforts at size reduction helped, and type uniquing improved on >>> top of those. >>> >>> Data on building clang with "-flto -g" after type uniquing: >>> 3.4GB MDNodes after parsing all bc files, 7GB MDNodes after...
2011 Sep 05
1
Quota calculation
...al3020:/soft/venus Brick2: ylal3030:/soft/venus Brick3: yval1000:/soft/venus Brick4: yval1010:/soft/venus Options Reconfigured: nfs.port: 2049 performance.cache-refresh-timeout: 60 performance.cache-size: 1GB network.ping-timeout: 10 features.quota: on features.limit-usage: /test:100MB,/psa:200MB,/:7GB,/soft:5GB features.quota-timeout: 120 Size of each folder from the mount point : /test : 4.1MB /psa : 160MB /soft : 1.2GB Total size 1.4GB (If you want the complete output of du, don't hesitate) gluster volume quota venus list path limit_set size -------------------...
2017 Sep 18
1
Samba shows error NT Status: STATUS_OBJECT_NAME_NOT_FOUND when copying 10GB file using robocopy when ecryptfs file system shared using samba
Hi, I shared a Linux directory, mounted using ecryptfs, with a Windows 10 client via a Samba share. When I robocopy a file larger than 7GB, Samba throws the error NT Status: STATUS_OBJECT_NAME_NOT_FOUND, which can be observed in Wireshark. Setup: host with Ubuntu 16.01 (Samba server, RAID 5 with ecryptfs) -> Windows 10 client (robocopy)...
2013 Oct 01
0
[LLVMdev] Proposal: type uniquing of debug info for LTO
...roblem: > A single class can be used in multiple source files and the DI (Debug Info) class is included in multiple bc files. The duplication of > class definitions causes blow-up in # of MDNodes, # of DIEs, leading to large memory requirement. > > As an example, SPEC xalancbmk requires 7GB of memory when compiled with -flto -g. > With a preliminary implementation of type uniquing, the memory usage will be down to 2GB. This is awesome! :-) > In order to unique types, we have to break cycles in the MDNodes. Sorry I missed this email earlier, but do we really need to break the...
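To see why the cycles get in the way, here is a small Python illustration, not LLVM's actual metadata classes: a class node whose member operand points back at the class has no finite structural key, but if the back-edge is replaced by a stable string identifier (something like a mangled name), the same class emitted from two bc files produces identical keys and can be kept once.

    class Node:
        """Toy stand-in for a metadata node: a kind plus a list of operands."""
        def __init__(self, kind, operands=()):
            self.kind = kind
            self.operands = list(operands)

        def key(self, seen=frozenset()):
            """Naive structural key; recursing through a cycle would never
            terminate, so detect it and give up instead."""
            if id(self) in seen:
                raise ValueError("cycle -- no finite structural key")
            seen = seen | {id(self)}
            parts = [self.kind] + [
                op.key(seen) if isinstance(op, Node) else str(op)
                for op in self.operands
            ]
            return "(" + " ".join(parts) + ")"

    # Cyclic form: the member points back at its enclosing class.
    cyclic_class = Node("class", ["Foo"])
    cyclic_member = Node("member", ["doIt", cyclic_class])
    cyclic_class.operands.append(cyclic_member)
    # cyclic_class.key() would raise: the class is reachable from itself.

    # Acyclic form: the back-edge is a string identifier, not a node, so the
    # same class emitted from two bc files hashes to the same key.
    uniq_member = Node("member", ["doIt", "_ZTS3Foo"])   # refers to the class by name
    uniq_class = Node("class", ["Foo", "_ZTS3Foo", uniq_member])
    print(uniq_class.key())   # finite, content-only key suitable for uniquing

With the cycle gone, uniquing becomes a plain map from key to node.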
2012 Oct 29
3
mbox vs. maildir storage block waste
...act used space: 39683908608, mdir guess used space: 39871086592, mdir num mails: 3425033, delta: 1.561232384 G, delta / mail: 455 B. As you can see, the delta per mail is rather close to the statistically expected values of 2048B, 1024B and 512B. In the end I probably changed my opinion. ~7GB of wasted block space for all my mails is actually quite a lot, but in these days of cheap disk space it's acceptable. And with mbox one has IMHO the major disadvantage that mailservers (including dovecot) store some meta-data _in_ it (i.e. in the mails themselves), which I don't like a lot. I s...
2012 Apr 14
4
Doubt on XEN memory management: please clarify
...if I assign 256M? What if 8192M? Of course you can tell me to just try, but it is not so clear to me what I am doing. As far as I understand, this would be the situation: dom0_mem=1024M implies that 'xl list' shows the dom0 has 1024MB of RAM reserved for itself. Are the remaining 7GB available for the domUs and the host operating system? For example, this could be the case: 1024M for the dom0, 1024M for domU1, 2048M for domU2, 2048M for domU3, total 6144MB "occupied". So the 2 remaining GB would be available for the system or other 1/2 domUs with 1/2GB of RAM each. Of...
2008 Aug 06
0
A file was strangly truncated on OCFS2
...-agent-2.2]# mount | grep OVS /dev/sda5 on /OVS type ocfs2 (rw,heartbeat=none) The problem is that one of the files for a VM image was truncated. The only thing that I currently know is that another file in the same filesystem was somehow being extended. An attempt was made to extend an image by 7GB using something similar to: dd if=/dev/zero of=/OVS/file_to_be_increased oflag=append count=7000 bs=1048576. The end result is that another file (for another VM) has been truncated (strangely, by 7GB). Can any of you recommend a way to debug the problem? Do we have any hope of eventually...
2006 Sep 06
1
Centos 4.4, grep breaking?
...-ri "Check Spelling & Mumble" *gz Actual results: Fails with: *** glibc detected *** free(): invalid next size (normal): *** Aborted. Sometimes: *** glibc detected *** realloc(): invalid next size: 0x000.... *** Aborted. Expected results: No failure. Additional info: Grepping 7GB of compressed logs on one server, grepping 4GB logs on other server. x86_64 Server had 35M or 3G+cache ram free; i686 server had 374M ram free, 960M+cache.
2018 Aug 01
4
[PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker
On Wed 01-08-18 19:12:25, Wei Wang wrote: > On 07/30/2018 05:00 PM, Michal Hocko wrote: > > On Fri 27-07-18 17:24:55, Wei Wang wrote: > > > The OOM notifier is getting deprecated to use for the reasons mentioned > > > here by Michal Hocko: https://lkml.org/lkml/2018/7/12/314 > > > > > > This patch replaces the virtio-balloon oom notifier with a
2018 Aug 01
4
[PATCH v2 2/2] virtio_balloon: replace oom notifier with shrinker
On Wed 01-08-18 19:12:25, Wei Wang wrote: > On 07/30/2018 05:00 PM, Michal Hocko wrote: > > On Fri 27-07-18 17:24:55, Wei Wang wrote: > > > The OOM notifier is getting deprecated to use for the reasons mentioned > > > here by Michal Hocko: https://lkml.org/lkml/2018/7/12/314 > > > > > > This patch replaces the virtio-balloon oom notifier with a
2017 Jul 28
1
[PATCH v12 5/8] virtio-balloon: VIRTIO_BALLOON_F_SG
...lists to the host. > > The implementation of the previous virtio-balloon is not very > efficient, because the balloon pages are transferred to the > host one by one. Here is the breakdown of the time in percentage > spent on each step of the balloon inflating process (inflating > 7GB of an 8GB idle guest). > > 1) allocating pages (6.5%) > 2) sending PFNs to host (68.3%) > 3) address translation (6.1%) > 4) madvise (19%) > > It takes about 4126ms for the inflating process to complete. > The above profiling shows that the bottlenecks are stage 2) > and...
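The breakdown above puts most of the cost in sending PFNs one at a time. A hedged sketch of the underlying idea in plain Python (not the driver code): coalesce contiguous page frame numbers into (start, length) runs, so the per-message overhead is paid once per run instead of once per page.

    def coalesce(pfns):
        """Collapse a sorted iterable of page frame numbers into (start, length) runs."""
        runs = []
        for pfn in pfns:
            if runs and pfn == runs[-1][0] + runs[-1][1]:
                start, length = runs[-1]
                runs[-1] = (start, length + 1)   # extend the current run
            else:
                runs.append((pfn, 1))            # start a new run
        return runs

    # 7GB of 4KiB pages is roughly 1.8 million PFNs; if the allocator hands the
    # balloon large contiguous chunks, millions of single-PFN transfers collapse
    # into a handful of ranges.
    print(coalesce([10, 11, 12, 40, 41, 100]))   # [(10, 3), (40, 2), (100, 1)]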