Hi,

Does ZFS flag blocks as bad so it knows to avoid using them in the future?

During testing I had huge numbers of unrecoverable checksum errors, which
I resolved by disabling write caching on the disks.

After doing this, and confirming the errors had stopped occurring, I
removed the test files. A few seconds later, the used space reported by
'df' dropped from 16GB to 11GB, but it never appeared to drop below that
value.

Is this just normal filesystem overhead (this is a raidz of 8x 500GB
drives), or has ZFS not freed some of the space allocated to the bad
files? If ZFS is holding on to this space because it thinks it might be
bad, is there a way to tell it that it is okay to use it?

I am using ZFS on FreeBSD, which from what I've read required only
minimal modification to the source to work on that platform.
Unfortunately my boot hardware is not supported by Solaris, which is
where most of the ZFS experience is concentrated at this point.

Thanks!
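P.S. If it is useful, I can post output from commands like these (the
pool name "tank" is just an example; mine differs):

    # space as the filesystem reports it
    df -h /tank

    # ZFS's own accounting, which includes raidz parity and metadata
    zpool list tank
    zfs list -o name,used,avail,refer tank

Comparing 'zfs list' with 'df' should show whether the remaining 11GB is
real allocation or just accounting overhead.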
On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?

No it doesn't. This would be a really nice feature to have, but currently
when ZFS tries to write to a bad sector it simply tries a few times and
gives up. With the COW model it shouldn't be very hard to try another
block and mark the failing one as bad, but it's not yet implemented.

> During testing I had huge numbers of unrecoverable checksum errors,
> which I resolved by disabling write caching on the disks.
>
> After doing this, and confirming the errors had stopped occurring, I
> removed the test files. A few seconds after removing the test files, I
> noticed the used space dropped from 16GB to 11GB according to 'df',
> but it did not appear to ever drop below this value.
>
> Is this just normal file system overhead (this is a raidz with 8x
> 500GB drives), or has ZFS not freed some of the space allocated to bad
> files?

Can you retry your test with the write cache disabled, starting from
recreating the pool?
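Roughly like this -- a sketch only, where "tank" and the ad* device names
are examples you would replace with your own, and hw.ata.wc covers ATA
disks and must be set from loader.conf since the driver reads it at boot:

    # /boot/loader.conf: disable the ATA write cache at boot
    hw.ata.wc="0"

    # after rebooting, recreate the pool from scratch
    zpool destroy tank
    zpool create tank raidz ad4 ad6 ad8 ad10 ad12 ad14 ad16 ad18

    # rerun the test and watch the error counters
    zpool status -v tank

-- 
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd at FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!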
RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> During testing I had huge numbers of unrecoverable checksum errors,
> which I resolved by disabling write caching on the disks.

Were the errors logged during writes, or during reads? Can you share the
error messages (ASC/ASCQ)? Can you tell us what the hardware was, so that
we can avoid buying it?
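Something like the following should gather that information on FreeBSD
(a sketch, assuming the default log location; the pool name is an
example):

    # per-device read/write/checksum error counters, plus any
    # permanent errors ZFS has recorded
    zpool status -v tank

    # sense data logged by the disk driver, if any
    grep 'ad[0-9]' /var/log/messages

-- richard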
Pawel Jakub Dawidek <pjd at FreeBSD.org> wrote:
> On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> > Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> No it doesn't. This would be a really nice feature to have, but
> currently when ZFS tries to write to a bad sector it simply tries a
> few times and gives up. With the COW model it shouldn't be very hard
> to try another block and mark this one as bad, but it's not yet
> implemented.

Bad block handling was only needed before 1985, when the hardware did
not support remapping bad blocks. Even at that time, it was done in the
disk driver and not in the filesystem (except for FAT).

Jörg

-- 
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily