search for: 89mb

Displaying 4 results from an estimated 4 matches for "89mb".

2010 Dec 11
1
Feature request
...l server and 20 users worked with their mailboxes at the time. But 20 normal IMAP sessions can't generate that amount of traffic (5.2Gb). After measuring the local Thunderbird cached mailboxes, we found that one user has a local INBOX of about 19Gb, and it is still growing. On the server side this mailbox was 89Mb. And after killing Thunderbird, the logs show: Dec 10 17:35:53 IMAP(user at domain 192.168.2.92): Info: Disconnected: Logged out bytes=1133593/1175103 Dec 10 17:35:53 IMAP(user at domain 192.168.2.92): Info: Disconnected: Logged out bytes=15964/44364 Dec 10 17:35:53 IMAP(user at domain 192.168.2.92...
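The per-session `bytes=in/out` counters in those Disconnected lines can be totalled to compare actual IMAP traffic against the mailbox size. A minimal sketch, assuming the log lives at the hypothetical path `/var/log/maillog`:

```shell
# Sum the in/out byte counters from Dovecot "Disconnected ... bytes=in/out"
# log lines; the log path below is a placeholder.
awk '/bytes=/ {
        s = $0
        sub(/.*bytes=/, "", s)    # keep only the "in/out..." tail
        split(s, a, "/")
        in_b  += a[1]
        out_b += a[2]             # awk coerces any trailing text to the number
     }
     END { printf "in=%d out=%d\n", in_b, out_b }' /var/log/maillog
```

Fed the first two quoted log lines, this prints `in=1149557 out=1219467`.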
2009 Nov 05
7
Unexpected ENOSPC on a SSD-drive after day of uptime, kernel 2.6.32-rc5
I've just finished installing onto an OCZ Agilent v2 SSD with btrfs as the filesystem. However, to my surprise I've hit an ENOSPC condition on one of the partitions within less than a day of uptime, while the filesystem on that partition reported only 50% to be in use, which is far from the 75% limit people mention on the ML. Note that this occurs using a vanilla 2.6.32-rc5 kernel
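The mismatch the poster describes (ENOSPC while df shows ~50% used) is typical of btrfs exhausting metadata chunks while data chunks still have room. A quick diagnostic sketch comparing the two views, with the mount point `/` standing in for the affected partition:

```shell
# VFS-level view: on btrfs this can still report plenty of free space
df -h /
# btrfs-specific view: shows Data / Metadata / System chunk usage separately
# (only runs where btrfs-progs is installed)
command -v btrfs >/dev/null && btrfs filesystem df /
```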
2009 Oct 19
0
EON ZFS Storage 0.59.4 based on snv_124 released!
...Genunix.org for download hosting and serving the OpenSolaris community. EON ZFS storage is available in 32/64-bit CIFS and Samba versions: EON 64-bit x86 CIFS ISO image version 0.59.4 based on snv_124 * eon-0.594-124-64-cifs.iso * MD5: 4bda930d1abc08666bf2f576b5dd006c * Size: ~89Mb * Released: Monday 19-October-2009 EON 64-bit x86 Samba ISO image version 0.59.4 based on snv_124 * eon-0.594-124-64-smb.iso * MD5: 80af8b288194377f13706572f7b174b3 * Size: ~102Mb * Released: Monday 19-October-2009 EON 32-bit x86 CIFS ISO image version 0.59.4 based...
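The announced MD5 sums can be checked after download with coreutils. A minimal sketch; the file below is a stand-in for the real ISO (e.g. eon-0.594-124-64-cifs.iso) and its listed hash:

```shell
# Stand-in file; in practice, put the MD5 from the announcement into the
# .md5 file next to the downloaded ISO instead of computing it locally.
printf 'demo contents' > eon-demo.iso
md5sum eon-demo.iso > eon-demo.iso.md5
md5sum -c eon-demo.iso.md5        # prints "eon-demo.iso: OK" on a match
rm -f eon-demo.iso eon-demo.iso.md5
```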
2013 Feb 27
4
GlusterFS performance
...ome trouble: if I try to copy a huge number of files (94000 files, 3Gb total), the process takes a terribly long time (from 20 to 40 minutes). I performed some tests and the results are: Directly to storage (single 2TB HDD): 158MB/s Directly to storage (RAID1 of 2 HDDs): 190MB/s To Replicated gluster volume: 89MB/s To Distributed-replicated gluster volume: 49MB/s The test command is: sync && echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/zero of=gluster.test.bin bs=1G count=1 Switching direct-io on and off has no effect, and neither does playing with glusterfs options. What I can do with per...
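The quoted dd test can be wrapped to hit each target in turn. A sketch, assuming hypothetical mount points `/mnt/single`, `/mnt/raid1`, and `/mnt/gluster-repl` for the three setups, and adding `conv=fsync` so dd's reported rate includes the final flush to disk rather than just the page cache:

```shell
# Hypothetical mount points; substitute the real single-disk, RAID1, and
# Gluster volume paths from the setup described above.
for target in /mnt/single /mnt/raid1 /mnt/gluster-repl; do
    [ -d "$target" ] || continue              # skip targets not mounted here
    sync
    echo 3 > /proc/sys/vm/drop_caches         # needs root: drop the page cache
    dd if=/dev/zero of="$target/gluster.test.bin" bs=1G count=1 conv=fsync
    rm -f "$target/gluster.test.bin"
done
```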