Is there a more aggressive filesystem restorer than btrfsck? It simply gives up immediately with the following error:

btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)' failed.

Yet, the filesystem has plenty of data on it, the discs are good, and I didn't do anything to the data except regular btrfs commands and normal mounting. That's a wildly unreliable filesystem.

BTW, is there a way to improve delete and copy performance of btrfs? I'm getting about 50KB/s-500KB/s (depending on the size of the file) when deleting and/or copying files on a disc that can usually go about 80MB/s. I think it's because the files were fragmented. That implies btrfs is too willing to write data in a fragmented way when it doesn't have to. Almost all the files on my btrfs partitions are around a gig, or 20 gigs, or a third of a gig, or thereabouts. The filesystem is 1.1TB.

Brad
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Jun 01, 2010 at 07:29:56PM -0700, ulmo@sonic.net wrote:
> Is there a more aggressive filesystem restorer than btrfsck? It simply
> gives up immediately with the following error:
>
> btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)'
> failed.

btrfsck currently only checks whether a filesystem is consistent. It doesn't try to perform any recovery or error correction at all, so it's mostly useful to developers. Any error handling occurs while the filesystem is mounted.

> Yet, the filesystem has plenty of data on it, and the discs are good and I
> didn't do anything to the data except regular btrfs commands and normal
> mounting. That's a wildly unreliable filesystem.

btrfs is under heavy development, so make sure you're using the latest git versions of the kernel module and tools.

> BTW, is there a way to improve delete and copy performance of btrfs? [...]
Sean Bartell <wingedtachikoma <at> gmail.com> writes:
> > Is there a more aggressive filesystem restorer than btrfsck? It simply
> > gives up immediately with the following error:
> >
> > btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)'
> > failed.
>
> btrfsck currently only checks whether a filesystem is consistent. It
> doesn't try to perform any recovery or error correction at all, so it's
> mostly useful to developers. Any error handling occurs while the
> filesystem is mounted.

Is there any plan to implement this functionality? It seems to me to be a pretty basic feature that is missing.
Rodrigo E. De León Plicet
2010-Jun-29 02:31 UTC
Re: Is there a more aggressive fixer than btrfsck?
On Mon, Jun 28, 2010 at 8:48 AM, Daniel Kozlowski <dan.kozlowski@gmail.com> wrote:
> Sean Bartell <wingedtachikoma <at> gmail.com> writes:
> > btrfsck currently only checks whether a filesystem is consistent. It
> > doesn't try to perform any recovery or error correction at all, so it's
> > mostly useful to developers. Any error handling occurs while the
> > filesystem is mounted.
>
> Is there any plan to implement this functionality? It would seem to me to be
> a pretty basic feature that is missing?

If Btrfs aims to be at least half of what ZFS is, then it will not impose a need for fsck at all.

Read "No, ZFS really doesn't need a fsck" at the following URL:

http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html
On Mon, Jun 28, 2010 at 10:31 PM, Rodrigo E. De León Plicet <rdeleonp@gmail.com> wrote:
> If Btrfs aims to be at least half of what ZFS is, then it will not
> impose a need for fsck at all.
>
> Read "No, ZFS really doesn't need a fsck" at the following URL:
>
> http://www.c0t0d0s0.org/archives/6071-No,-ZFS-really-doesnt-need-a-fsck.html

Interesting idea. It would seem to me, however, that the functionality described in that article is more concerned with a bad transaction rather than something like a hardware failure, where a block written more than 128 transactions ago is now corrupted and consequently the entire partition is now unmountable (that is what I think I am looking at with btrfs).

--
S.D.G.
On Tuesday 29 June 2010 12:37:45 Daniel Kozlowski wrote:
> Interesting idea. It would seem to me however that the functionality
> described in that article is more concerned with a bad transaction
> rather than something like a hardware failure where a block written
> more than 128 transactions ago is now corrupted and consequently the
> entire partition is now unmountable (that is what I think I am looking
> at with btrfs)

Still, the FS alone should be able to recover from such situations. With multiple superblocks, the probability that the fs is unmountable is very small, and if all superblocks are corrupted then you need a data recovery program, not fsck.

--
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl
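The multiple-superblock fallback described above can be illustrated with a short sketch. The mirror offsets (64 KiB, 64 MiB, 256 GiB) and the "_BHRfS_M" magic 64 bytes into each copy match btrfs's on-disk layout, but the scanning routine itself is a hypothetical illustration, not a real recovery tool:

```python
# Sketch: scan the known btrfs superblock mirror offsets and report which
# copies still look valid, as a fallback when the primary is corrupted.
# Only copies that fit inside the device are checked; a real tool would
# also verify the checksum and pick the copy with the highest generation.

SUPERBLOCK_OFFSETS = [64 * 1024, 64 * 1024**2, 256 * 1024**3]
MAGIC_OFFSET = 64          # csum(32) + fsid(16) + bytenr(8) + flags(8)
MAGIC = b"_BHRfS_M"

def find_valid_superblocks(path):
    """Return the byte offsets of superblock copies whose magic is intact."""
    valid = []
    with open(path, "rb") as dev:
        dev.seek(0, 2)
        size = dev.tell()
        for off in SUPERBLOCK_OFFSETS:
            if size < off + MAGIC_OFFSET + len(MAGIC):
                continue   # device too small to hold this mirror
            dev.seek(off + MAGIC_OFFSET)
            if dev.read(len(MAGIC)) == MAGIC:
                valid.append(off)
    return valid
```

Even if the primary copy at 64 KiB is destroyed, a surviving mirror lets the filesystem (or a recovery tool) find a usable tree root, which is why an unmountable fs should be rare.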
On Tue, Jun 29, 2010 at 3:37 AM, Daniel Kozlowski <dan.kozlowski@gmail.com> wrote:
> Interesting idea. It would seem to me however that the functionality
> described in that article is more concerned with a bad transaction
> rather than something like a hardware failure where a block written
> more than 128 transactions ago is now corrupted and consequently the
> entire partition is now unmountable (that is what I think I am looking
> at with btrfs)

In the ZFS case, this is handled by checksumming and redundant data, and can be discovered (and fixed) either by reading the affected data block (in which case the checksum is wrong, the data is read from a redundant data block, and the correct data is written over the incorrect data) or by running a scrub.

Self-healing, checksumming, and data redundancy eliminate the need for online (or offline) fsck.

Automatic transaction rollback at boot eliminates the need for fsck at boot, as there is no such thing as "a dirty filesystem". Either the data is on disk and correct, or it doesn't exist. Yes, you may lose data. But you will never have a corrupted filesystem.

Not sure how things work for btrfs.

--
Freddie Cash
fjwcash@gmail.com
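The self-healing read path described above can be sketched in a few lines. This is a toy model, not ZFS or btrfs code: two mirrored copies of a block, a stored checksum, and a read routine that serves data from a copy matching the checksum and rewrites any corrupt sibling:

```python
import zlib

def checksum(data):
    # CRC32 stands in for the stronger checksums real filesystems use.
    return zlib.crc32(data)

class MirroredBlock:
    """Toy model of a self-healing mirrored block (illustration only)."""
    def __init__(self, data):
        self.copies = [bytearray(data), bytearray(data)]  # redundant copies
        self.csum = checksum(data)  # checksum kept in (trusted) metadata

    def read(self):
        # Find a copy whose contents match the stored checksum.
        for i, copy in enumerate(self.copies):
            if checksum(bytes(copy)) == self.csum:
                # Self-heal: overwrite corrupt siblings with the good data.
                for j in range(len(self.copies)):
                    if j != i and checksum(bytes(self.copies[j])) != self.csum:
                        self.copies[j] = bytearray(copy)
                return bytes(copy)
        raise IOError("all copies corrupt: restore from backup")

blk = MirroredBlock(b"hello world")
blk.copies[0][0] = 0                 # silent corruption of the first copy
data = blk.read()                    # served from the intact mirror
```

A scrub is just this same read-verify-repair loop run proactively over every block, so latent corruption is found before the last good copy is lost.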
On Tue, Jun 29, 2010 at 02:36:14PM -0700, Freddie Cash wrote:
> In the ZFS case, this is handled by checksumming and redundant data,
> and can be discovered (and fixed) via either reading the affected data
> block (in which case, the checksum is wrong, the data is read from a
> redundant data block, and the correct data is written over the
> incorrect data) or by running a scrub.
>
> Self-healing, checksumming, data redundancy eliminate the need for
> online (or offline) fsck.
>
> Automatic transaction rollback at boot eliminates the need for fsck at
> boot, as there is no such thing as "a dirty filesystem". Either the
> data is on disk and correct, or it doesn't exist. Yes, you may lose
> data. But you will never have a corrupted filesystem.
>
> Not sure how things work for btrfs.

btrfs works in a similar way. While it's writing new data, it keeps the superblock pointing at the old data, so after a crash you still get the complete old version. Once the new data is written, the superblock is updated to point at it, ensuring that you see the new data. This eliminates the need for any special handling after a crash.

btrfs also uses checksums and redundancy to protect against data corruption. Thanks to its design, btrfs doesn't need to scan the filesystem or cross-reference structures to detect problems. It can easily detect corruption at run-time when it tries to read the problematic data, and fixes it using the redundant copies.

In the event that something goes horribly wrong, for example if each copy of the superblock or of a tree root is corrupted, you could still find some valid nodes and try to piece them together; however, this is rare and falls outside the scope of a fsck anyway.
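The copy-on-write commit ordering described above (new data written to unused space first, superblock pointer flipped last) can be modeled in a few lines. This is a conceptual sketch of the ordering guarantee, not btrfs's actual on-disk code; the `Disk` class and its names are invented for illustration:

```python
class Disk:
    """Toy block store modeling a copy-on-write commit (illustration only)."""
    def __init__(self, root_data):
        self.blocks = {0: root_data}  # block 0 plays the role of the tree root
        self.superblock = 0           # superblock points at the live root
        self.next_free = 1

    def commit(self, new_root_data, crash_before_super_update=False):
        # Phase 1: write the new tree to *unused* space; old blocks untouched.
        addr = self.next_free
        self.next_free += 1
        self.blocks[addr] = new_root_data
        # (A real filesystem issues a write barrier / flush here.)
        if crash_before_super_update:
            return  # simulated crash: superblock still names the old root
        # Phase 2: one atomic pointer update makes the new tree the live one.
        self.superblock = addr

    def read_root(self):
        return self.blocks[self.superblock]
```

A crash at any point before phase 2 leaves the superblock pointing at the complete old tree, which is why no journal replay or boot-time fsck pass is needed: mounting simply follows the pointer.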
On Tue, 29 Jun 2010 18:34:13 +0200, Hubert Kario <hka@qbs.com.pl> wrote:
> Still, the FS alone should be able to recover from such situations. With
> multiple superblocks the probability that the fs is unmountable is very
> small and if all superblocks are corrupted then you need a data recovery
> program, not fsck.

While it would be great to have a filesystem that can recover from such situations, or at least fail gracefully, I'd also like to be able to verify/repair the filesystem offline, without mounting it and potentially making things worse.

For example, say you have a single-disk filesystem, and while it can detect corruption it can't repair it. That's the sort of scenario where you want to specify what to do, interactively or with command line options. I don't want the only choice to be bringing it online and destructively forcing it into a consistent state based on variables I don't control, like when someone attempts to access the file.

Regards,

-Anthony
On Tue, Jun 29, 2010 at 06:22:43PM -0400, Sean Bartell wrote:
> > >> If Btrfs aims to be at least half of what ZFS is, then it will not
> > >> impose a need for fsck at all.

Everyone needs an fsck. Yan Zheng is working on a more complete fsck right now, and making good progress ;)

The fsck is really for emergencies only; you won't have to run it after a crash or anything. It's for when you notice things have gone wrong and just want your data back.

Over the long term we'll push more and more of the fsck into online operations.

-chris

> btrfs works in a similar way. While it's writing new data, it keeps the
> superblock pointing at the old data, so after a crash you still get the
> complete old version. Once the new data is written, the superblock is
> updated to point at it, ensuring that you see the new data. This
> eliminates the need for any special handling after a crash.
>
> btrfs also uses checksums and redundancy to protect against data
> corruption. [...]
Rodrigo E. De León Plicet
2010-Jul-01 03:38 UTC
Re: Is there a more aggressive fixer than btrfsck?
On Wed, Jun 30, 2010 at 11:47 AM, Florian Weimer <fw@deneb.enyo.de> wrote:
> ZFS doesn't need a fsck because you have to throw away the file system
> and restore from backup for certain types of corruption:
>
> | What can I do if ZFS file system panics on every boot?
> [...]
> | This will remove all knowledge of pools from your system. You will
> | have to re-create your pool and restore from backup.
>
> <http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HWhatcanIdoifZFSfilesystempanicsoneveryboot>

They *do* make it clear when something like that could happen. From the same URL:

"ZFS is designed to survive arbitrary hardware failures through the use of redundancy (mirroring or RAID-Z). Unfortunately, certain failures in *non-replicated* configurations can cause ZFS to panic when trying to load the pool. This is a bug, and will be fixed in the near future (along with several other nifty features, such as background scrubbing)."

"Non-replicated configuration" boils down to no mirroring or parity checking (basically RAID-0 or similar); such a thing implies:

- No redundancy.
- No fault tolerance.

So, yeah, I guess if you go for a "non-replicated configuration", there will be risks, whether you use ZFS, btrfs, MD+LVM+$ANY_TRADITIONAL_FS, etc.
On Wed, Jun 30, 2010 at 10:38:18PM -0500, Rodrigo E. De León Plicet wrote:
> They *do* make it clear when something like that could happen.
>
> "Non-replicated configuration" boils down to no mirroring or parity
> checking (basically RAID-0 or similar); such a thing implies:
> [...]

Memory corruption and other problems make it impossible to catch every single class of error on commodity hardware. fsck is the last line of defense between you and restoring from backup, especially if the size of the corruption is relatively small in a large FS.

-chris