Displaying 9 results from an estimated 9 matches for "jmlittl".
2008 May 27
6
slog devices don't resilver correctly
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself (forced by
the fact that one can't remove a log device from a pool once defined) caused
ZFS to fully resilver, but then attach the log device as a stripe to
the volume, and no lon...
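For readers hitting the same trap: later ZFS releases (pool version 19 and up) added proper log device removal, so the forced self-replacement described above is no longer the only way out. A hedged sketch, with pool and device names purely illustrative:

```shell
# Assumes a pool upgraded to version >= 19; names are hypothetical.
zpool upgrade -v | head          # check which pool versions this system supports
zpool remove tank c2t0d0         # remove the dedicated slog outright
zpool status tank                # confirm the log vdev is gone
```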
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed in ZFS that whenever there is a heavy
writer to a pool via NFS, reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where a directory of files,
including some large ones 100+MB in size, being written can cause other
clients over NFS to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
2009 Feb 04
26
ZFS snapshot splitting & joining
Hello everyone,
I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for offsite storage. In many cases, the snapshots greatly exceed the 8GB I can stuff onto a single DVD-DL.
In order to make this work, I have used the "split" utility to break the images into smaller, fixed-size chunks that will fit onto a DVD. For example:
# split -b 8100m
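The split-and-rejoin round trip can be exercised on any file before trusting it with a real send stream; a minimal sketch using a small chunk size so it runs quickly (file names are illustrative; the real invocation would use `-b 8100m` against the `zfs send` output):

```shell
# Create a sample "stream" file (stand-in for a zfs send dump).
dd if=/dev/urandom of=stream.bin bs=1024 count=100 2>/dev/null

# Split into fixed-size chunks, as the post does with -b 8100m for DVDs.
split -b 40k stream.bin chunk.

# Rejoin in shell-glob (lexical) order and verify the stream survives
# intact; the same `cat chunk.* | zfs receive ...` would restore it.
cat chunk.* > rejoined.bin
cmp -s stream.bin rejoined.bin && echo "round trip OK"
```

The lexical suffixes `split` generates (chunk.aa, chunk.ab, ...) are what makes the bare `cat chunk.*` rejoin safe.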
2006 Mar 28
43
zfs and backup applications
Hi,
I was wondering if there have been any conversations with backup vendors like Veritas or EMC regarding better integration with ZFS. While I understand they can use the "native" mode of reading files from the filesystem, it would be great if there were agents that had options like making a snapshot and storing a "zfs backup" datastream that could be used for zfs restore.
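The snapshot-plus-datastream workflow the poster wants an agent for can already be approximated with stock commands; a minimal sketch, assuming a dataset tank/home (all names hypothetical), using the send/receive subcommand names:

```shell
# Take a point-in-time snapshot, then serialize it to a file that a
# backup product could archive (names are illustrative).
zfs snapshot tank/home@nightly
zfs send tank/home@nightly > /backup/home-nightly.zfs

# Restoring is the reverse pipe:
zfs receive tank/home_restored < /backup/home-nightly.zfs
```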
2008 Apr 17
0
zfs mount i/o error and workarounds
Hello list,
We discovered a failed disk with checksum errors. Took out the disk
and resilvered, which reported many errors. A few of my subvolumes to
the pool won't mount anymore, with "zpool import poolname" reporting
that "cannot mount 'poolname/proj': I/O error".
Ok, we have a problem. I can successfully clone any snapshot of
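The clone avenue the post is heading toward can be sketched as follows (snapshot and dataset names hypothetical): cloning a known-good snapshot yields a mountable copy even when the live dataset reports I/O errors.

```shell
# Clone the last snapshot that predates the corruption.
zfs clone poolname/proj@last-good poolname/proj_recovered
zfs mount poolname/proj_recovered

# After copying data out, the clone can be promoted to take over
# from its damaged origin dataset:
zfs promote poolname/proj_recovered
```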
2006 Jun 02
0
zfs going out to lunch
I've been writing via tar to a pool some stuff from backup, around
500GB. It's taken quite a while, as the tar is being read from NFS. My
ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA
drives (sil3124 card).
Every once in a while, a "df" stalls, and during that time my I/Os go
flat, as in:
capacity operations bandwidth
pool
2006 Sep 11
1
Looking for common dtrace scripts for NFS top talkers
We started seeing odd behaviour with clients somehow hammering our
ZFS-based NFS server. Nothing is obvious from mpstat/iostat/etc. I've
seen mention before of NFSv3 client dtrace scripts, and I was
wondering if there ever was one for the server end, displaying top
talkers, writes/reads, or locations of such, to nail down abusive
clients short of using snoop/tcpdump to nail down via
2006 Mar 29
3
ON 20060327 and upcoming solaris 10 U2 / coreutils
So, I noticed that a lot of the fixes discussed here recently,
including the ZFS/NFS interaction bug fixes and the deadlock fix, have
made it into 20060327, which was released this morning. My question is
whether we'll see all these up-to-the-minute bug fixes in the Solaris
10 update that brings ZFS to that product, or if there is a specific
date where no further updates will make it in to
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references, the total
appears somewhat high versus daily use (mostly mail boxes, spam, etc.
changing), but say an aggregate of no more than 400+MB a
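When chasing growth like this, per-snapshot accounting helps; a hedged sketch, with dataset names hypothetical. Note that a snapshot's USED column counts only blocks unique to that snapshot, so per-snapshot numbers can sum to far less than the space the snapshots hold collectively:

```shell
# List every snapshot with its unique space and referenced size.
zfs list -r -t snapshot -o name,used,refer tank

# On later releases, -o space splits usage into dataset data,
# snapshots, children, and refreservation:
zfs list -o space tank/mail
```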