search for: 27gb

Displaying 10 results from an estimated 10 matches for "27gb".

2007 Nov 17
1
mailbox size limit
Hi guys, I was wondering... my Inbox is 1.3GB large, my mailbox size limit is 6GB (6442450941). Other users have 500+ MB in their Inbox. I have about 27GB of mail. I'm running dovecot 1.0.7 now under Fedora 7 with postfix. I have no trouble at all, everything is working perfectly. I use ext3 as my file system, and I store mail in maildir format. Will I have any inode or file system issues? Is anyone storing more than 27GB in maildir format on only...
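
Since maildir keeps one file per message, the inode question above comes down to how many files the mail store holds versus how many inodes the filesystem still has free. A minimal Python sketch of that check, assuming a hypothetical mail root of /home/vmail:

    #!/usr/bin/env python3
    # Rough check: number of files in a maildir tree vs. free inodes on its filesystem.
    # MAIL_ROOT is a hypothetical example path; point it at the real mail store.
    import os

    MAIL_ROOT = "/home/vmail"

    file_count = 0
    for dirpath, dirnames, filenames in os.walk(MAIL_ROOT):
        file_count += len(filenames)

    st = os.statvfs(MAIL_ROOT)
    print(f"files in mail store : {file_count}")
    print(f"inodes total/free   : {st.f_files}/{st.f_ffree}")
    print(f"inode usage         : {100 * (st.f_files - st.f_ffree) / st.f_files:.1f}%")

As long as the free-inode count stays well above the expected message count, the size of the maildir by itself should not be an issue.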
2010 May 03
1
xentrace
...my Xen system and discovered a huge difference in the performance of a "xened" SAP system compared to a native SAP system. Hence, I tried to figure out what might cause this 'overhead' and ran a xentrace (listening to all events). Xentrace produced 24GB of data, which I converted to 27GB of human-readable data. After I gathered the human-readable data, I filtered it and counted the appearance of each event. So far, so good. Now for the odd part: although I used paravirt guests, the xentrace tool reported HVM events in the trace data. Moreover, from my point of view it is impossible to...
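
Counting events in the formatted trace takes only a few lines; the sketch below assumes, hypothetically, that the event name sits in a fixed whitespace-separated column of the human-readable output (adjust EVENT_FIELD to the actual layout):

    #!/usr/bin/env python3
    # Count how often each event name appears in a formatted xentrace dump.
    # EVENT_FIELD is an assumption about the column layout of the human-readable output.
    import sys
    from collections import Counter

    EVENT_FIELD = 3   # hypothetical index of the event-name column
    counts = Counter()

    with open(sys.argv[1]) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) > EVENT_FIELD:
                counts[fields[EVENT_FIELD]] += 1

    for event, n in counts.most_common(20):
        print(f"{n:12d}  {event}")

Run as: python3 count_events.py formatted_trace.txt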
2016 Dec 07
3
Setting up replication - First steps...
...ing Dovecot and Postfix with no issues, but want to start taking steps just to be safe. I currently run a filesystem backup every 24 hours to a tar file over NFS to another server in our rack. I am backing up: /home/vmail /etc/dovecot /etc/postfix Unfortunately, the vmail directory has grown to 27GB and now takes around 7 hours to back up as described above. Which leads me to start thinking about how quickly I could restore the server from a backup if need be, and that time is at least 7 hours just to copy and untar the files onto another hard drive. I'm sure I could hook a HD up directly t...
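
The nightly full tar is what makes this take 7 hours; because maildir messages are written once and never modified in place, copying only the files changed since the previous run is usually far cheaper. A hedged sketch of that idea in Python (the paths are hypothetical, deletions are not handled, and a production setup would more likely use rsync or Dovecot's dsync replication):

    #!/usr/bin/env python3
    # Incremental maildir copy: only files modified since the last recorded run are copied.
    # SRC, DST and STAMP are hypothetical example paths.
    import os
    import shutil
    import time

    SRC = "/home/vmail"
    DST = "/mnt/backup/vmail"
    STAMP = "/var/lib/backup/last_run"   # mtime of this file marks the previous run

    run_start = time.time()
    last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0

    for dirpath, dirnames, filenames in os.walk(SRC):
        target_dir = os.path.join(DST, os.path.relpath(dirpath, SRC))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            if os.path.getmtime(src_file) > last_run:
                shutil.copy2(src_file, os.path.join(target_dir, name))

    # Remember when this run started so the next pass picks up anything newer.
    os.makedirs(os.path.dirname(STAMP), exist_ok=True)
    with open(STAMP, "a"):
        pass
    os.utime(STAMP, (run_start, run_start))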
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):

[root at einstein;0]~# zpool status poolm
  pool: poolm
 state: FAULTED
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        poolm         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c2t0d0s0  ONLINE       0
2010 Aug 27
6
Samba and file locking
Are there issues with Samba and Lustre working together? I remember something about turning oplocks off in samba, and while testing samba I noticed this: [2010/08/27 17:30:59, 3] lib/util.c:fcntl_getlock(2064) fcntl_getlock: lock request failed at offset 75694080 count 65536 type 1 (Function not implemented) But I also found out about the flock option for lustre. Should I set flock on all
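
The "(Function not implemented)" in that fcntl_getlock message is the symptom of a filesystem that does not support POSIX byte-range locks, which is what Lustre's flock (or localflock) mount option is meant to provide. A quick way to probe the mount from Python, with a hypothetical test path:

    #!/usr/bin/env python3
    # Probe whether POSIX byte-range locking works on a given filesystem.
    # TEST_FILE is a hypothetical path; point it at a file on the Lustre mount.
    import errno
    import fcntl

    TEST_FILE = "/lustre/share/locktest.tmp"

    with open(TEST_FILE, "w") as f:
        try:
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            print("byte-range locking works on this mount")
            fcntl.lockf(f, fcntl.LOCK_UN)
        except OSError as e:
            if e.errno == errno.ENOSYS:
                print("locking not implemented here (mount without flock/localflock?)")
            else:
                print(f"lock attempt failed: {e}")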
2001 Oct 25
2
inode limit ?
Hello, I'm using a default-everything install of 7.2 (kernel 2.4.7-10 #1) with a 27 gig ext3 / partition. The problem that I am experiencing is this: If I create more than (about) 3.5 million distinct files on the partition, touch, mkdir, cp and all other file creation methods complain that there is no available space on the disk. A df shows me that the partition is only 65% full.
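
ext3 allocates a fixed pool of inodes when the filesystem is created, so a partition holding millions of small files can run out of inodes long before it runs out of blocks; plain df shows only the block side, while df -i shows the inode side. The same two numbers can be read via statvfs, as in this small sketch (the mount point is just an example):

    #!/usr/bin/env python3
    # Compare block usage with inode usage on a filesystem.
    # MOUNT is an example; use the 27 gig ext3 partition from the report.
    import os

    MOUNT = "/"

    st = os.statvfs(MOUNT)
    block_used = 100 * (st.f_blocks - st.f_bfree) / st.f_blocks
    inode_used = 100 * (st.f_files - st.f_ffree) / st.f_files

    print(f"block usage: {block_used:.1f}%   (what plain df reports)")
    print(f"inode usage: {inode_used:.1f}%   (what df -i reports)")

If inode usage sits at 100% while block usage is 65%, the "no space" errors are explained; the usual remedy is recreating the filesystem with a smaller bytes-per-inode ratio (mke2fs -i).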
2016 Apr 18
1
[RFC] Lazy-loading of debug info metadata
...inish this off -- they've been mostly ready since Saturday -- but I had a big surprise when I finally configured correctly (apparently the Apple CMake caches default to line-tables only for RelWithDebInfo): 30GB peak for linking clang. Then I went back and ran a fresh bootstrap of ToT, and got 27GB. This is pretty terrible (we were down to around 17GB back in October). I'd been doing spot checks of memory usage (glancing at top) and it was surprisingly low; I just assumed someone had made improvements when I wasn't looking. Shame on me :(. (I do have a bot set up that is supposed...
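
For spot checks that are more systematic than glancing at top, the peak resident set size of a single link job can be recorded after it finishes; a hedged sketch (the command is a placeholder, and note that ru_maxrss units differ between platforms):

    #!/usr/bin/env python3
    # Run a command and report the peak RSS of its children as a crude memory spot check.
    # The default command is a placeholder; pass the real link line as arguments.
    import resource
    import subprocess
    import sys

    cmd = sys.argv[1:] or ["echo", "replace-with-real-link-command"]
    subprocess.run(cmd, check=False)

    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    # ru_maxrss is in kilobytes on Linux and in bytes on macOS.
    print(f"peak RSS of child processes: {usage.ru_maxrss}")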
2016 Apr 16
4
[RFC] Lazy-loading of debug info metadata
On Fri, Apr 15, 2016 at 4:04 PM, Duncan P. N. Exon Smith < dexonsmith at apple.com> wrote: > > > On 2016-Apr-15, at 14:53, David Blaikie <dblaikie at gmail.com> wrote: > > > > > > > >> On Fri, Apr 15, 2016 at 2:27 PM, Duncan P. N. Exon Smith < > dexonsmith at apple.com> wrote: > >> > >> > On 2016-Apr-15, at 10:27, David
2007 Mar 19
6
Best way to migrate from Qpopper to Dovecot
Hi List, what do you think is the best way to migrate (to a new machine) roughly 30,000 mboxes (Qpopper), amounting to 43GB of data, to maildir format (Dovecot)? I think there are two ways: 1. - stop services (smtp and pop3) on the old machine - copy the mboxes to the new machine - run a conversion script (for example: "Perfect_maildir" http://perfectmaildir.home-dn.net/)
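
Besides dedicated scripts like Perfect_maildir, the mbox-to-Maildir conversion itself can be sketched with Python's standard mailbox module; the paths below are hypothetical, and a real migration would still need to handle per-user iteration, ownership, and POP3 UIDLs:

    #!/usr/bin/env python3
    # Convert one mbox file into a Maildir using only the standard library.
    # SRC_MBOX and DST_MAILDIR are hypothetical example paths.
    import mailbox

    SRC_MBOX = "/var/spool/mail/user1"
    DST_MAILDIR = "/home/vmail/user1/Maildir"

    src = mailbox.mbox(SRC_MBOX)
    dst = mailbox.Maildir(DST_MAILDIR, create=True)

    for msg in src:
        dst.add(mailbox.MaildirMessage(msg))

    dst.flush()
    src.close()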
2007 Mar 15
20
C'mon ARC, stay small...
Running an mmap-intensive workload on ZFS on a X4500, Solaris 10 11/06 (update 3). All file IO is mmap(file), read memory segment, unmap, close. Tweaked the ARC size down via mdb to 1GB. I used that value because c_min was also 1GB, and I was not sure if c_max could be larger than c_min. ... Anyway, I set c_max to 1GB. After a workload run...: > arc::print -tad { . . . ffffffffc02e29e8
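
The access pattern described (mmap the file, read the segment, unmap, close) is easy to reproduce when experimenting with ARC sizing; a hedged Python sketch of the same pattern over a hypothetical set of test files:

    #!/usr/bin/env python3
    # Reproduce an mmap-intensive read pattern: map each file, fault in its pages, unmap, close.
    # The glob pattern is a hypothetical example; point it at real test data.
    import glob
    import mmap
    import os

    FILES = glob.glob("/data/testset/*.bin")

    total = 0
    for path in FILES:
        if os.path.getsize(path) == 0:
            continue   # zero-length files cannot be mapped
        with open(path, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
                # Read the mapping in 1 MiB slices so every page is touched.
                for off in range(0, len(m), 1 << 20):
                    total += len(m[off:off + (1 << 20)])

    print(f"bytes read through mmap: {total}")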