A friend of mine who builds storage systems designed for HPC use
has been keeping an eye on btrfs and has just done some testing
of it with 2.6.36 and seems to like what he sees in terms of
stability.

http://scalability.org/?p=2711

# But it passed our stability test. 100 iterations (3.2TB
# written/read in all and compared to checksums) of the
# following fio test case. [...]

# This is our baseline test. Failing RAIDs, failing drives,
# failing SSDs tend not to pass this test. Borked file systems
# tend not to pass this test. When something passes this test,
# again and again (3rd time we've run it), and does so without
# fail, we call it safe.

He has concerns about performance, but he's more interested in
reliability in these tests.

cheers!
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP
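[Editorial note: the actual fio job file is elided ("[...]") in the quote above, so the sketch below is purely illustrative. It shows the general shape of a write-then-verify fio job of the kind described (sequential I/O, many loops, data compared against checksums); every parameter, including the mount point, sizes, and loop count, is an assumption, not the original configuration.]

```ini
; Hypothetical fio job in the spirit of the baseline test described
; above -- NOT the original job file, which is not shown.
[global]
directory=/mnt/btrfs-test   ; assumed btrfs mount point
ioengine=libaio
direct=1
bs=1m

[write-and-verify]
rw=write
size=16g                    ; per-loop data volume (assumed)
loops=100                   ; "100 iterations" per the blog post
verify=crc32c               ; fio writes checksummed blocks...
do_verify=1                 ; ...then reads them back and compares
```

A job like this fails loudly if any block read back does not match the checksum fio wrote, which is what makes it useful for catching "borked" filesystems and flaky devices under sustained load.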
Hi,

> A friend of mine who builds storage systems designed for HPC use
> has been keeping an eye on btrfs and has just done some testing
> of it with 2.6.36 and seems to like what he sees in terms of
> stability.
>
> http://scalability.org/?p=2711

This is nice to see, but we should be clearer about what stability
means. This was just fio testing; it doesn't say anything about
resilience to crashes, power-offs, or the presence of corruption.

- Chris.
--
Chris Ball <cjb@laptop.org>
One Laptop Per Child

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Fri, Oct 29, 2010 at 4:38 PM, Chris Samuel <chris@csamuel.org> wrote:
> A friend of mine who builds storage systems designed for HPC
> use has been keeping an eye on btrfs and has just done some
> testing of it with 2.6.36 and seems to like what he sees in
> terms of stability.

That's a *very* misleading conclusion to come to based solely on a
single file I/O test. It's more realistic to say "stable under fio
load in ideal conditions".

For example:
No device-yanking tests were done.
No power-cord yanking tests were done.
No device cables were yanked, shaken, or plugged/unplugged in rapid
succession.
No "dd the raw device underneath the filesystem while doing file
I/O" tests were done.
No recovery tests were done.

IOW, you can't really say "it's stable" across the board like that.

--
Freddie Cash
fjwcash@gmail.com
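[Editorial note: the "dd the raw device underneath the filesystem" test mentioned above can be sketched as a small script. This is a hypothetical illustration, not a test anyone in the thread actually ran; the device (`/dev/sdX`), mount point, and offsets are placeholders, and the script only *prints* the destructive commands unless explicitly armed with RUN=1.]

```shell
#!/bin/sh
# Sketch of a corruption-under-load test: scribble on the raw device
# behind a mounted btrfs filesystem, then see whether a scrub notices.
# DEV and MNT are placeholders -- never point DEV at a disk you care about.
DEV=${DEV:-/dev/sdX}     # hypothetical block device under the filesystem
MNT=${MNT:-/mnt/test}    # hypothetical btrfs mount point
RUN=${RUN:-0}            # safety latch: leave at 0 to only print the plan

PLAN=""
step() {
    PLAN="$PLAN$*; "
    if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# 1. Generate file I/O on the mounted filesystem (a real test would keep
#    this running in the background while the corruption happens).
step dd if=/dev/zero of="$MNT/load.dat" bs=1M count=1024 conv=fsync

# 2. Overwrite a region of the raw device behind the filesystem's back.
step dd if=/dev/urandom of="$DEV" bs=1M count=4 seek=512 conv=notrunc

# 3. btrfs checksums everything, so a scrub should detect and report
#    the mismatch between data and checksum.
step btrfs scrub start -B "$MNT"
```

The point of such a test is not that the filesystem survives unscathed, but that it *detects* the damage instead of silently returning bad data.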
> For example:
> No device-yanking tests were done.
> No power-cord yanking tests were done.
> No device cables were yanked, shaken, or plugged/unplugged in rapid
> succession.
> No "dd the raw device underneath the filesystem while doing file
> I/O" tests were done.
> No recovery tests were done.

Any real-life tests to show how close we are to becoming really
stable? Ideally I'd like to know that we're, for example, 85%
stable, failing N tests.
On 10/30/2010 05:19 PM, Freddie Cash wrote:
> On Fri, Oct 29, 2010 at 4:38 PM, Chris Samuel <chris@csamuel.org> wrote:
>> A friend of mine who builds storage systems designed for HPC
>> use has been keeping an eye on btrfs and has just done some
>> testing of it with 2.6.36 and seems to like what he sees in
>> terms of stability.
>
> That's a *very* misleading conclusion to come to based solely on a
> single file I/O test. It's more realistic to say "stable under fio
> load in ideal conditions".

Since it's my blog post that is generating these responses, let me
provide some more information.

We want to see if the file system, at a basic level, works under
load. We aren't yanking power, or otherwise purposefully damaging
the underlying platform during operations, as that is not what we
are testing.

What we've found is that zfs on fuse doesn't pass these very basic
tests. nilfs2 does (on recent kernels, anyway). btrfs does (now).

Our focus for these tests was quite simple: will the file system
work when we are trying to shove GB/s down its throat? If the
answer is no, then we don't even consider looking at the "let's see
how stable it is under purposefully harmful conditions" tests. If
the answer is yes, that it works, then we have to ask whether the
performance is near where we need it for it to be useful. Currently
the answer to that is no. Once this changes (and I saw some posts
recently from Chris M suggesting that there have been some changes
in this respect in the 2.6.37 time frame), then we can start
looking at the broader picture of suitability for use.

That latter set of issues (file system and metadata repair,
stability in the face of less-than-ideal conditions) gets tested
after we see the system able to perform where we need it to. We
aren't there yet.

It's stable against the tests we ran on it, which, as noted, some
other file systems (some in widespread use) aren't.
- Joe