Search results for "216m"

Displaying 6 results from an estimated 6 matches for "216m".

2004 Jul 22
4
0.99.10.x auth memory leak?
...rather needlessly and inefficiently (the LDAP DB memory footprint for ALL users is less than this and the box below just serves half of those). This box sees about 0.5 million POP3/IMAP logins/day.
---
  PID   USER PR NI VIRT RES  SHR  S %CPU %MEM  TIME+    COMMAND
  31235 root 16 0  220m 216m 5560 S 0.0  10.7  25:01.54 dovecot-auth
  31234 root 16 0  205m 202m 5560 S 0.0  10.0  24:08.84 dovecot-auth
  31231 root 16 0  200m 196m 5560 S 0.7   9.7  23:25.37 dovecot-auth
  31232 root 16 0  196m 192m 5560 S 0.0   9.5  23:10.44 dovecot-auth
  31233 root 15 0  179m 175m 556...
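As a quick sanity check on the numbers quoted above, here is a minimal Python sketch (mine, not from the thread) that totals the resident set of the four fully-shown dovecot-auth workers from the top(1) output; `res_mib` is a hypothetical helper for top's mixed RES units ("216m" = MiB, bare numbers = KiB):

```python
def res_mib(field: str) -> float:
    """Convert a top(1) RES field like '216m' (MiB) or '5560' (KiB) to MiB."""
    if field.endswith("m"):
        return float(field[:-1])
    return float(field) / 1024.0

# The four workers whose RES column is fully visible in the quoted output.
top_lines = [
    "31235 root 16 0 220m 216m 5560 S 0.0 10.7 25:01.54 dovecot-auth",
    "31234 root 16 0 205m 202m 5560 S 0.0 10.0 24:08.84 dovecot-auth",
    "31231 root 16 0 200m 196m 5560 S 0.7  9.7 23:25.37 dovecot-auth",
    "31232 root 16 0 196m 192m 5560 S 0.0  9.5 23:10.44 dovecot-auth",
]
total = sum(res_mib(line.split()[5]) for line in top_lines)
print(total)  # → 806.0 MiB resident across just these four workers
```

Roughly 800 MiB resident for four auth processes is what makes the poster call this a leak, given that the whole LDAP DB is smaller.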
2015 Sep 04
4
RFC: Reducing Instr PGO size overhead
...assembly names and this can reduce binary/profile data size significantly. Function name's MD5 hash is a good candidate, and I have a patch (3 parts: runtime, reader/writer, clang) that does just that. The results are very promising. With this change, the clang instrumented binary size is now 216M (down from 280M); the raw profile size is only 40.1M (a 2.85X size reduction); and the indexed profile size is only 29.5M (a 2.2X size reduction). With the change, the indexed format size is smaller than GCC's (but the latter has value profile data). The binary size is still much larger than...
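The name-hashing idea described above can be sketched in a few lines. This is an illustration only, assuming a 64-bit truncation of the MD5 digest as the per-function key; LLVM's actual encoding may differ:

```python
import hashlib

def name_key(fn_name: str) -> int:
    """Map a function name to a fixed 8-byte key via MD5 (illustrative
    assumption: truncate the digest to its first 8 bytes, little-endian)."""
    return int.from_bytes(hashlib.md5(fn_name.encode()).digest()[:8], "little")

# A mangled C++ name costs its full length in the profile's name table;
# the hash costs a constant 8 bytes no matter how long the name is.
mangled = "_ZN4llvm12PassManagerImpl3runERNS_6ModuleE"
print(len(mangled.encode()))  # raw-name cost in bytes
print(name_key(mangled))      # fixed-width 64-bit replacement key
```

Since instrumented binaries carry the name of every instrumented function, replacing arbitrarily long mangled names with fixed 8-byte keys is where the quoted 2-3X profile-size reductions come from.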
2006 Jan 30
1
df reports false size
...copying the data to an identical partition on a second harddisk: On this disk "du" and "df" both reported a size of about 4 GB, and not 7.6G, which is completely off the mark.
  # df -h /
  Filesystem  Size  Used Avail Use% Mounted on
  /dev/sda1   7.6G  7.0G  216M  98% /
  # du -shx /
  4.2G  /
  # find / -xdev | wc -l
  161021
  # tune2fs -l /dev/sda1
  tune2fs 1.35 (28-Feb-2004)
  Filesystem volume name:   <none>
  Last mounted on:          <not available>
  Filesystem UUID:          a3f40d6f-51be-448b-bf71-76292772fea0
  Filesystem magic number:  0xEF53
  Filesys...
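One classic cause of a df/du gap like the one above (offered as a guess, not as the thread's verdict) is a deleted file that some process still holds open: the filesystem keeps the blocks allocated, but no path walk by du or find can see them. A small self-contained demonstration:

```python
import os
import tempfile

# Create a file, fill it with 1 MiB, then delete its path while keeping the
# descriptor open -- the situation where df counts blocks that du cannot find.
f = tempfile.NamedTemporaryFile(delete=False)
f.write(b"x" * 1024 * 1024)
f.flush()
os.unlink(f.name)           # path is gone: du/find will never see this file
st = os.fstat(f.fileno())   # ...but the inode and its blocks still exist
print(st.st_size)           # → 1048576: still a full MiB allocated
f.close()                   # only now does the filesystem reclaim the space
```

If this were the cause here, restarting the process holding the file (or rebooting) would snap df back in line with du.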
2007 Sep 05
1
VP6 issues in Swfdec
...k=0xb6a827a8 "????????ds\t", pixels=0xb611d6a8 '?' <repeats 200 times>..., line_size=1464, h=8) at i386/dsputil_mmx.c:416
#1 0xb71431d6 in vp56_decode_frame (avctx=0x80d6c30, data=0x80d24a0, data_size=0xbfa4b008, buf=0x8114980 "?\212????r9qy\234#]?\003??Q?]\214\216m??'\033g?\037??\200?\034?`wm<j?\221\\\f\025??6?\220\032\026Lp\232j\034I?;??U?\203?\230yPA\226*?^\001?\n\002?\221??\"?5H7Q????\236?<\026?\212M{\236??\231sQ\201+{\214\226?,??-?)\001?\017?\031??B?\a*|9?~\b\202g>z?ow,??\032^ ?x<??\237??\006bn?Q?\0044S8\033?#\214T-???=\202?z??\231\n\...
2013 Jun 13
4
puppet: 3.1.1 -> 3.2.1 load increase
Hi, I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of increased load on the puppetmaster machine. I'm using the Apache/passenger/rack way of puppetmastering. Main symptom is higher load on the puppetmaster machine (8 cores):
  - 3.1.1: around 4
  - 3.2.1: around 9-10
Any idea why there's more load on the machine with 3.2.1?
2012 Nov 03
0
mtrr_gran_size and mtrr_chunk_size
...32M chunk_size: 64M num_reg: 10 lose cover RAM: 40M
  gran_size: 32M chunk_size: 128M num_reg: 10 lose cover RAM: 40M
  gran_size: 32M chunk_size: 256M num_reg: 10 lose cover RAM: 40M
  *BAD* gran_size: 32M chunk_size: 512M num_reg: 10 lose cover RAM: -216M
  *BAD* gran_size: 32M chunk_size: 1G num_reg: 10 lose cover RAM: -472M
  *BAD* gran_size: 32M chunk_size: 2G num_reg: 10 lose cover RAM: -472M
  gran_size: 64M chunk_size: 64M num_reg: 8 lose cover RAM: 104M
  gran_size: 64M chunk_size: 128M num_reg: 8 l...