Displaying 7 results from an estimated 7 matches for "146k".
2010 Aug 30
1
getdents() with 4KB buffer - seems slow (Maildir, large inbox)
Hi,
I have a very large inbox (~146K mails) in Maildir format, and Dovecot
seems to spend a lot of time rescanning the directory, especially when
the server is loaded. I'm not sure whether this is triggered by
Thunderbird or done on a schedule, but it takes longer when the server is
loaded, so sometimes it seems that it is scanning con...
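A quick way to see how much of that time goes into directory reads is to trace the getdents calls the imap process makes against the Maildir. A minimal sketch, assuming a Linux server with strace installed and <pid> standing in for the imap process serving this mailbox (the syscall is getdents64 on 64-bit kernels, getdents on older ones):

# Log every directory-read syscall with timestamps and time spent in the call
strace -f -tt -T -e trace=getdents64 -p <pid> -o /tmp/getdents.log

# The line count is roughly the number of 4KB-buffer reads per rescan
wc -l /tmp/getdents.log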
2003 Apr 22
1
make installworld Error code 64
...booted to the new kernel successfully...
root@smeagol /usr/src # uname -a
FreeBSD smeagol.purgatory 4.8-STABLE FreeBSD 4.8-STABLE #0: Sun Apr 20
17:09:30 PDT 2003 root@smeagol.purgatory:/usr/obj/usr/src/sys/GENERIC
i386
All the files appear to be in place?
root@smeagol /usr/src # ls -la
total 146k
drwxr-xr-x  21 root  wheel   512 Apr 20 14:07 ./
drwxr-xr-x  18 root  wheel   512 Apr 20 14:09 ../
-rw-r--r--   1 root  wheel  4.6k Sep  5  1999 COPYRIGHT
-rw-r--r--   1 root  wheel  8.3k Apr 16 02:59 Makefile
-rw-r--r--   1 root  wheel   23k Apr  6 12:5...
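For reference, the usual source-upgrade sequence on a 4.x system, as documented in the FreeBSD Handbook of that era, looks like the sketch below; a minimal outline, assuming /usr/src is already up to date and the GENERIC configuration from the uname output above:

cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# reboot into the new kernel (ideally single-user) before touching world
shutdown -r now
# after the reboot:
cd /usr/src
mergemaster -p
make installworld
mergemaster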
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently
it has started failing to boot: it hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I have had around since
installation, I ran some zdb traversals over the rpool
and attempted zpool imports. The imports fail by running
the kernel out of RAM (as recently discussed on the
list with
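One non-destructive thing to try from the LiveUSB is a read-only import that skips mounting any datasets, so the pool's state can be inspected without replaying anything. A minimal sketch, assuming the pool is named rpool as above and the LiveUSB build is new enough to honor read-only imports:

# See which pools the LiveUSB kernel can find without importing anything
zpool import

# Read-only import, no mounts (-N), under an alternate root (-R)
zpool import -o readonly=on -N -R /mnt -f rpool

# If the import sticks, look at the reported errors
zpool status -v rpool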
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB in use out of 2.8TB (3 stripes of 950GB or so, each of
which is a RAID5 volume on the Adaptec card). We have snapshots every 4
hours for the first few days. If you add up the snapshot references it
appears somewhat high versus daily use (mostly mailboxes, spam, etc.
changing), but say an aggregate of no more than 400+MB a
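One thing that makes these numbers confusing is that a snapshot's USED column only counts blocks unique to that snapshot; space still shared with other snapshots is charged to none of them individually. A minimal sketch of the listings that expose the accounting, with tank/mail as a placeholder dataset (zfs list -o space and the usedbysnapshots property need a newer zfs than a 2006-era build):

# Per-snapshot USED = blocks referenced only by that snapshot
zfs list -r -t snapshot -o name,used,refer tank/mail

# Dataset-level breakdown: total space charged to all snapshots together,
# which can be much larger than the sum of the per-snapshot USED values
zfs list -o space tank/mail
zfs get usedbysnapshots tank/mail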
2006 May 12
1
zfs panic when unpacking open solaris source
...nute_54            624K  -  1.98G  -
home/cjg@minute_55    429K  -  1.98G  -
home/cjg@minute_56       0  -  1.98G  -
home/cjg@minute_57       0  -  1.98G  -
home/cjg@minute_58       0  -  1.98G  -
home/cjg@minute_59       0  -  1.98G  -
home/cjg@minute_00    146K  -  1.98G  -
home/cjg@minute_01    282K  -  1.98G  -
home/cjg@minute_02    218K  -  1.98G  -
home/cjg@minute_03    300K  -  1.98G  -
home/cjg@minute_04    232K  -  1.98G  -
home/cjg@minute_05    458K  -  1.98G  -
home/cjg@minute_06    462K  -  1.9...
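The minute_00 through minute_59 names above look like a rolling one-snapshot-per-minute scheme. A hypothetical sketch of how such a rotation could be driven from cron, assuming home/cjg is the dataset and each new snapshot replaces the one taken an hour earlier with the same minute number:

#!/bin/sh
# Rolling per-minute snapshots of home/cjg: keep at most the last 60.
minute=$(date +%M)
zfs destroy home/cjg@minute_${minute} 2>/dev/null
zfs snapshot home/cjg@minute_${minute}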
2006 Feb 24
17
Re: [nfs-discuss] bug 6344186
Joseph Little wrote:
> I'd love to "vote" to have this addressed, but apparently votes for
> bugs are not available to outsiders.
>
> What's limiting Stanford EE's move to using ZFS entirely for our
> snapshotting filesystems and multi-tier storage is the inability to
> access .zfs directories and snapshots in particular on NFSv3 clients.
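Part of what makes this awkward is that the .zfs control directory is hidden from readdir() by default. A minimal sketch of the server-side property that makes it show up in normal listings, with pool/fs standing in for the exported dataset; whether an NFSv3 client can then actually traverse into the snapshots is exactly what the bug is about:

# On the NFS server: expose .zfs in ordinary directory listings
zfs set snapdir=visible pool/fs

# On the client (server permitting), snapshots appear under the mount point:
ls /mnt/fs/.zfs/snapshot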
2007 Feb 10
16
How to backup a slice ? - newbie
... though I have tried, read, and typed for the last 4 hours; still no clue.
Please, can anyone give a clear idea of how this works:
how do I get the content of c0d1s1 onto c0d0s7?
c0d1s1 is the pool 'home' and is active; c0d0s7 is not active.
I have followed the suggestion on
http://www.opensolaris.org/os/community/zfs/demos/zfs_demo.pdf
% sudo zfs snapshot home@backup
% zfs list
NAME USED AVAIL REFER
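The snapshot by itself does not copy anything onto c0d0s7; the usual next step is to create a pool on the target slice and stream the snapshot into it. A minimal sketch building on the home@backup snapshot above, assuming c0d0s7 is free to be overwritten and 'backup' is an acceptable name for the new pool:

% sudo zpool create backup c0d0s7
% sudo zfs send home@backup | sudo zfs recv backup/home
% zfs list -r backup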