I've been unable to find anything definitive about what happens if I use
RAID0 to join an SSD and HDD together with respect to performance
(latency, throughput). The future is obvious (hot data tracking, using
the most appropriate device for the data, data migration).

In my specific case I have a 250GB SSD and a 500GB HDD, and about 250GB
of files (constantly growing). One message I saw said that new blocks
are allocated on the device with the most free space, which implies the
SSD would be virtually unused in my case, except for metadata, which
would only be used half the time.

At the moment I have two independent filesystems (one per device) and
manually move data files between them, using symlinks to keep pathnames
the same. This requires keeping lots of slop free space on the SSD, as
well as administration whenever it runs out of space.

My hope would be overall performance between that of the two devices,
and closer to that of the SSD.

Roger
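(The manual juggling described above looks roughly like this; the mount
points and the direction of the move are placeholders, not taken from the
original message:)

    # move a directory from the SSD to the HDD and leave a symlink behind
    # so every existing pathname keeps working
    mv /ssd/projects /hdd/projects
    ln -s /hdd/projects /ssd/projects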
On Wed, Jan 30, 2013 at 01:27:37AM -0800, Roger Binns wrote:
> I've been unable to find anything definitive about what happens if I
> use RAID0 to join an SSD and HDD together with respect to performance
> (latency, throughput). The future is obvious (hot data tracking, using
> the most appropriate device for the data, data migration).
>
> In my specific case I have a 250GB SSD and a 500GB HDD, and about
> 250GB of files (constantly growing). One message I saw said that new
> blocks are allocated on the device with the most free space, which
> implies the SSD would be virtually unused in my case, except for
> metadata, which would only be used half the time.

   That would be the case with "single" mode, not with RAID-0.

   With RAID-0, you'd get data striped equally across all (in this case,
both) the devices, up to the size of the second-largest one, at which
point it'll stop allocating space.

> At the moment I have two independent filesystems (one per device) and
> manually move data files between them, using symlinks to keep
> pathnames the same. This requires keeping lots of slop free space on
> the SSD, as well as administration whenever it runs out of space.
>
> My hope would be overall performance between that of the two devices,
> and closer to that of the SSD.

   We don't have any kind of hot-data management yet, but it's on the
list of things we'd like to have at some point.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Some days, it's just not worth gnawing through the straps. ---
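For concreteness, the two layouts being compared would be created roughly
like this (an untested sketch; /dev/sdSSD and /dev/sdHDD are placeholder
device names, and -m raid1 just spells out the usual multi-device
metadata default):

    # "single" data: new chunks go to whichever device has the most
    # unallocated space, so most data initially lands on the larger HDD
    mkfs.btrfs -d single -m raid1 /dev/sdSSD /dev/sdHDD

    # "raid0" data: chunks are striped across both devices until the
    # smaller one is full, after which no more data chunks can be allocated
    mkfs.btrfs -d raid0 -m raid1 /dev/sdSSD /dev/sdHDD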
On 30/01/13 02:02, Hugo Mills wrote:
> On Wed, Jan 30, 2013 at 01:27:37AM -0800, Roger Binns wrote:
>> In my specific case I have a 250GB SSD and a 500GB HDD, and about
>> 250GB of files (constantly growing). One message I saw said that new
>> blocks are allocated on the device with the most free space, which
>> implies the SSD would be virtually unused in my case, except for
>> metadata, which would only be used half the time.
>
> That would be the case with "single" mode, not with RAID-0.

Ah, I hadn't realised there was a major difference.

> With RAID-0, you'd get data striped equally across all (in this case,
> both) the devices, up to the size of the second-largest one, at which
> point it'll stop allocating space.

By "stop allocating space" I assume you mean it will return out-of-space
errors, even though there is technically 250GB of unused space. I
presume there is no way to say that RAID-0 should be used where
possible, falling back to "single" for the remaining space.

It looks like my choices are:

* RAID-0, getting 500GB of usable space, with half of the accesses at
  HDD speed and half at SSD speed

* Single, getting 750GB of usable space, with performance and usage
  mostly at HDD levels

> We don't have any kind of hot-data management yet, but it's on the
> list of things we'd like to have at some point.

I'm happy to wait till it is available. btrfs has been beneficial to me
in so many other respects (e.g. checksums, compression, online
everything, not having to deal with LVM and friends). I was just hoping
that joining an SSD and an HDD would be somewhat worthwhile now, even if
it isn't close to what hot data tracking will deliver in the future.

Roger
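Whichever profile is chosen, the effect can be checked rather than
guessed at, since btrfs reports how much of each device it has allocated
(illustrative commands, assuming the filesystem is mounted at /mnt):

    # per-device view: how much of the SSD and the HDD is allocated
    btrfs filesystem show

    # per-profile view: usage broken down into Data, Metadata and System
    btrfs filesystem df /mnt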
Roger Binns wrote (ao):
> I'm happy to wait till it is available. btrfs has been beneficial to
> me in so many other respects (e.g. checksums, compression, online
> everything, not having to deal with LVM and friends). I was just
> hoping that joining an SSD and an HDD would be somewhat worthwhile
> now, even if it isn't close to what hot data tracking will deliver in
> the future.

Do you know about bcache and EnhanceIO?

  http://bcache.evilpiepirate.org/
  https://github.com/stec-inc/EnhanceIO

        Sander
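For anyone unfamiliar with bcache, the rough shape of a setup is: format
the HDD as a backing device and the SSD as a cache device, attach the
two, and put the filesystem on the resulting bcache device. A minimal
sketch (untested; assumes bcache-tools and a bcache-enabled kernel, with
placeholder device names; udev normally registers the devices, otherwise
they can be echoed into /sys/fs/bcache/register):

    # HDD becomes the backing device, SSD becomes the cache device
    make-bcache -B /dev/sdHDD
    make-bcache -C /dev/sdSSD

    # attach the cache set (UUID printed by make-bcache -C) to the
    # backing device, then create the filesystem on the composite device
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    mkfs.btrfs /dev/bcache0

Note that this gives a 500GB filesystem accelerated by the SSD, not a
750GB one.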
On Jan 30, 2013, at 3:02 AM, Hugo Mills <hugo@carfax.org.uk> wrote:
>
> That would be the case with "single" mode, not with RAID-0.
>
> With RAID-0, you'd get data striped equally across all (in this
> case, both) the devices, up to the size of the second-largest one, at
> which point it'll stop allocating space.

This raises a question about the desirability/feasibility of changing
this behavior. It's common to have odd-sized disks. It's unfortunate
that for most of the life of a 'single' pairing of disks there is no
performance improvement possible; and also unfortunate that in 'raid0'
it doesn't fall back to 'single' behavior to fill up the remaining
space, instead of ending allocation.

md raid0 will work on odd-sized block devices, and it will fill up all
the space. Presumably it does this by allocating chunks round-robin,
and at the point where a block device is full, it just starts allocating
more chunks to the device(s) that have space. This means there's a
distinction in behavior between md's level 'raid10' and separately
creating "a stripe of mirrors", i.e. first creating raid1 arrays, then
striping them with raid0.

Chris Murphy
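For comparison, md raid0 over a 250GB and a 500GB device does yield
roughly 750GB: it stripes across both devices until the smaller one is
exhausted and then carries on with the larger one alone (a sketch with
placeholder partition names, not from the original thread):

    # build the array and put any filesystem on top of it
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdSSD1 /dev/sdHDD1
    mkfs.ext4 /dev/md0

    # the reported array size should be close to the sum of both devices
    mdadm --detail /dev/md0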
Hi,

On Wed, Jan 30, 2013 at 2:49 AM, Roger Binns <rogerb@rogerbinns.com> wrote:
> It looks like my choices are:
>
> * RAID-0, getting 500GB of usable space, with half of the accesses at
>   HDD speed and half at SSD speed
>
> * Single, getting 750GB of usable space, with performance and usage
>   mostly at HDD levels

You could try something like "--level=linear" on md-raid, or something
similar on LVM, to build a 750GB volume where the first 250GB are the
SSD and the last 500GB are the HDD. But that would probably work best
(as in, use more blocks from the beginning of the disk before moving to
the end) with a non-COW filesystem like ext4 instead of btrfs (although
I could be wrong about that; I have never really tried anything
similar).

Cheers,
Filipe
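A rough sketch of both variants (untested; device names are
placeholders, and the SSD is listed first so it forms the start of the
volume):

    # md linear concatenation: SSD first, then HDD
    mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdSSD1 /dev/sdHDD1
    mkfs.ext4 /dev/md0

    # LVM equivalent: a plain (linear) LV; with the default allocation
    # policy this typically fills the first PV before touching the second
    pvcreate /dev/sdSSD1 /dev/sdHDD1
    vgcreate vg0 /dev/sdSSD1 /dev/sdHDD1
    lvcreate -l 100%FREE -n data vg0
    mkfs.ext4 /dev/vg0/data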
On 30/01/13 04:01, Sander wrote:
> Do you know about bcache and EnhanceIO?

Yes, but there are two reasons I don't use them. One is that the
capacity of your cache is not included in the filesystem - i.e. with a
250GB SSD and a 500GB HDD the filesystem capacity will be 500GB, not
750GB. The second is that I use btrfs for my root filesystem, so I'd
have to get bcache/EnhanceIO integrated into the distributor's
initramfs build mechanism, as well as worry about livecd/network boots
without it. This is a lot of unnecessary work and worry.

Roger
On 30/01/13 11:10, Filipe Brandenburger wrote:
> You could try something like "--level=linear" on md-raid or something
> similar on LVM to build a 750GB volume

That would also require wiping the filesystems and starting again (*).
One of the joys of btrfs has been not dealing with LVM. On my
workstation I have two 2GB disks, but on one there is a sizeable
Windows partition. Getting LVM to stripe across the common-sized space
and then just use the rest took quite a while to work out, required
running several different commands, and was something I had to write
down. There was nothing intuitive. It was a happy day when I could wipe
it all and replace it with btrfs.

Contrast that with btrfs, where 'btrfs --help' is almost always
sufficient and adding/removing/resizing is trivial (and online).

(*) I realise I could do things like add an external disk, btrfs add
that, and then btrfs delete the internals, redo the internal storage,
btrfs add those back, and then btrfs delete the external. It would take
a long time, and it is a reminder as to why I would prefer to be all
btrfs everywhere rather than also dealing with LVM and similar.

Roger
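For reference, the shuffle described in the footnote is done online with
the device commands (a sketch, assuming the filesystem is mounted at /
and the device names are placeholders):

    # grow the filesystem onto the temporary external disk, then drain
    # the internal device (btrfs migrates its data off before removal)
    btrfs device add /dev/sdEXTERNAL /
    btrfs device delete /dev/sdINTERNAL /

    # ...repartition the internal storage, then reverse the process...
    btrfs device add /dev/sdINTERNAL /
    btrfs device delete /dev/sdEXTERNAL /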
>> With RAID-0, you'd get data striped equally across all (in this
>> case, both) the devices, up to the size of the second-largest one, at
>> which point it'll stop allocating space.
>
> By "stop allocating space" I assume you mean it will return
> out-of-space errors, even though there is technically 250GB of unused
> space. I presume there is no way to say that RAID-0 should be used
> where possible, falling back to "single" for the remaining space.

There was a proposal to change the allocator so it would fall back to
single:

  http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg14517.html

I stumbled on it when faced with a different problem: I could not use
btrfs in RAID-1 degraded mode, because it would refuse to allocate more
space, and I could not convert it to single either, because balance
needs to allocate space too.

Regards
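For anyone who hits the same wall, the failing sequence looks roughly
like this (illustrative only; device names are placeholders, and whether
the balance succeeds depends on there being unallocated space for the
new chunks):

    # mount the surviving half of a two-device raid1 with one device missing
    mount -o degraded /dev/sdSURVIVOR /mnt

    # try to rewrite the data chunks as "single" so the filesystem can
    # carry on with one device; the balance itself has to allocate new
    # chunks, which is exactly what a full degraded raid1 cannot do
    btrfs balance start -dconvert=single /mnt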