This may have been covered somewhere but I couldn't find it. Is it possible to mirror two raidz vdevs? Like a RAID50 basically.
--
This message posted from opensolaris.org
Hi,

> Is it possible to mirror two raidz vdevs? Like a RAID50 basically.

RAID 50 is striped... basically:

  zpool create tank raidz c0t0d0 c0t0d1 c0t0d2 raidz c1t0d0 c1t0d1 c1t0d2

Other than that, I believe it is not possible to create a mirrored pool from raidz vdevs.

Regards,

Serge Fonville
--
http://www.sergefonville.nl
On Mon, 26 Jul 2010, Dav Banks wrote:
> This may have been covered somewhere but I couldn't find it.
>
> Is it possible to mirror two raidz vdevs? Like a RAID50 basically.

This config is not supported by zfs. It should be possible to do though if you are really serious about it. You can create two zfs zvols (volumes), each hopefully in a different raidz-based zfs pool, and then create a new zfs pool mirroring those two devices. The end result would be three zfs pools. It is probably not a wise idea to use this layered approach.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
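[For the archive: a sketch of the layered approach Bob describes. Device names, pool names, and sizes are hypothetical, and this layering is explicitly unsupported.]

```shell
# Two independent raidz-based pools (hypothetical device names)
zpool create poolA raidz c0t0d0 c0t1d0 c0t2d0
zpool create poolB raidz c1t0d0 c1t1d0 c1t2d0

# One zvol in each pool
zfs create -V 500g poolA/vol
zfs create -V 500g poolB/vol

# A third pool that mirrors the two zvols
zpool create tank mirror /dev/zvol/dsk/poolA/vol /dev/zvol/dsk/poolB/vol
```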
Ah. Thanks! I should have said RAID51 - a mirror of RAID5 elements.

Thanks for the info. Bummer that it can't be done.
> -----Original Message-----
> From: zfs-discuss-bounces at opensolaris.org
> [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Dav Banks
> Sent: Monday, July 26, 2010 2:02 PM
> To: zfs-discuss at opensolaris.org
> Subject: [zfs-discuss] Mirrored raidz
>
> This may have been covered somewhere but I couldn't find it.
>
> Is it possible to mirror two raidz vdevs? Like a RAID50 basically.

RAID50 is not a mirror of RAID5s, but a stripe set of RAID5s. RAID50 is analogous to multiple raidz vdevs in a single zpool.

Mirrored RAID5s are not directly possible, as ZFS does not permit nested vdevs (i.e. a mirror vdev composed of raidz vdevs).

I think you can make 2 separate zpools composed of single raidz vdevs, make zvols in those, then create a 3rd zpool with a mirror vdev of the zvols.

-Will
On Mon, July 26, 2010 14:17, Dav Banks wrote:
> Ah. Thanks! I should have said RAID51 - a mirror of RAID5 elements.
>
> Thanks for the info. Bummer that it can't be done.

Out of curiosity, any particular reason why you want to do this?
A small follow-up is that creating pools from components of other pools can cause system deadlocks. This approach is not recommended.

Thanks,

Cindy

On 07/26/10 12:19, Saxon, Will wrote:
>> This may have been covered somewhere but I couldn't find it.
>>
>> Is it possible to mirror two raidz vdevs? Like a RAID50 basically.
>
> RAID50 is not a mirror of RAID5s, but a stripe set of RAID5s. RAID50 is analogous to multiple raidz vdevs in a single zpool.
>
> Mirrored RAID5s are not directly possible, as ZFS does not permit nested vdevs (i.e. a mirror vdev composed of raidz vdevs).
>
> I think you can make 2 separate zpools composed of single raidz vdevs, make zvols in those, then create a 3rd zpool with a mirror vdev of the zvols.
>
> -Will
I wanted to test it as a backup solution. Maybe that's crazy in itself but I want to try it.

Basically, once a week detach the 'backup' pool from the mirror, replace the drives, add the new raidz to the mirror and let it resilver and sit for a week.
On 26 Jul 2010, at 19:51, Dav Banks <davbanks at virginia.edu> wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the drives, add the new raidz to the mirror and let it resilver and sit for a week.

Why not do it the other way around? Create a pool which consists of mirrored pairs (or triples) of drives. You don't need raidz to make the pool appear bigger, and it will use the disks in the pool appropriately. If you want to have more copies of data, set copies=2 and zfs will try to schedule writes across different mirrored pairs.

Alex
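[For the archive: a sketch of Alex's suggestion. Pool and device names are hypothetical.]

```shell
# A pool striped across mirrored pairs (hypothetical device names)
zpool create tank mirror c0t0d0 c1t0d0 \
                  mirror c0t1d0 c1t1d0 \
                  mirror c0t2d0 c1t2d0

# Ask ZFS to store two copies of each block; it will try to place
# the copies on different vdevs (here, different mirrored pairs)
zfs set copies=2 tank
```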
On Jul 26, 2010, at 2:51 PM, Dav Banks <davbanks at virginia.edu> wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the drives, add the new raidz to the mirror and let it resilver and sit for a week.

If that's the case why not create a second pool called 'backup' and 'zfs send' periodically to the backup pool?

-Ross
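[For the archive: what Ross's suggestion might look like. Pool, device, and snapshot names are hypothetical.]

```shell
# A separate backup pool, e.g. raidz over hypothetical devices
zpool create backup raidz c2t0d0 c2t1d0 c2t2d0

# Weekly: snapshot the primary pool recursively, then send the
# whole hierarchy into the backup pool
zfs snapshot -r tank@weekly
zfs send -R tank@weekly | zfs recv -F -d backup
```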
You might look at the zpool split feature, where you can split off the disks from a mirrored pool to create an identical pool, described here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs

ZFS Admin Guide, p. 87

Thanks,

Cindy

On 07/26/10 12:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the drives, add the new raidz to the mirror and let it resilver and sit for a week.
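[For the archive: a sketch of the zpool split workflow Cindy points to. Pool names are hypothetical; see the Admin Guide for the exact behavior and which disk of each mirror is split off.]

```shell
# Split one side of each mirror in 'tank' into a new pool 'backup';
# the original pool keeps running on the remaining disks
zpool split tank backup

# The split-off pool is exported; import it here or on another system
zpool import backup
```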
On Mon, July 26, 2010 14:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but
> I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace
> the drives, add the new raidz to the mirror and let it resilver and sit
> for a week.

While a neat solution, I think you'd be better off using incremental send/recv functionality for backups. Having an online "backup" really isn't a true backup IMHO. It's too easy to fat finger something, and then you're hosed because the change was replicated in real time to both sides of the mirror (though this is mitigated a bit if you automatically take regular snapshots).

Mirroring is (IMHO) for uptime and insurance against hardware failure. Backups are /independent/ copies of data that are insurance against something happening to your primary copy.

You could do the same thing with a separate pool and send/recv, without taking the hit on write IOPS from the second half of the mirror: basically async replication instead of synchronous.
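[For the archive: the incremental variant, assuming a full send of a hypothetical tank@week1 snapshot has already been received into 'backup'.]

```shell
# Take this week's recursive snapshot
zfs snapshot -r tank@week2

# Send only the changes between last week's and this week's snapshots
zfs send -R -i tank@week1 tank@week2 | zfs recv -F -d backup
```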
On Mon, Jul 26 at 11:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in
> itself but I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror,
> replace the drives, add the new raidz to the mirror and let it
> resilver and sit for a week.

Since you're already "spending" the disk drives for this that get detached, it seems safer to me to just 'zfs send' to a minimal backup system, and remove the extra drives from your primary server. Less overhead, and a scrub can validate your backup copy at whatever frequency you choose.

You don't even need the same pool layout on the backup machine. Primary can be a stripe of mirrors, while your backup can be a wide raidz2 setup.

--eric

--
Eric D. Mudama
edmudama at mail.bounceswoosh.org
> It should be possible to do though if you are really serious about it.
> You can create two zfs zvols (volumes) which are hopefully in two
> different raidz-based zfs pools, and then create a new zfs pool using
> those two devices. The end result would be three zfs pools. It is
> probably not a wise idea to use this layered approach.

> A small follow-up is that creating pools from components of other pools
> can cause system deadlocks.

One can make the zvols iSCSI targets and then attach them to the local initiator. This works and, indeed, it's a way to mirror storage across a network.

--
Maurice Volaski, maurice.volaski at einstein.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University
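[For the archive: a rough sketch of Maurice's idea using the legacy shareiscsi property. All names, addresses, and device paths are hypothetical, and COMSTAR-based systems use a different set of commands.]

```shell
# On each storage host: carve a zvol out of a raidz pool and
# export it as an iSCSI target (legacy shareiscsi interface)
zfs create -V 500g poolA/vol
zfs set shareiscsi=on poolA/vol

# On the head node: discover the targets on both storage hosts
iscsiadm add discovery-address 192.168.1.10
iscsiadm add discovery-address 192.168.1.11
iscsiadm modify discovery --sendtargets enable

# Mirror the two iSCSI LUNs as they appear locally
zpool create tank mirror c3t0d0 c4t0d0
```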
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Ross Walker
>
> If that's the case why not create a second pool called 'backup' and
> 'zfs send' periodically to the backup pool?

+1

This is what I do.
The reason for wanting raidz was to have some redundancy in the backup without the big hit on space that duplicating the data would have.

The other issue is the switching process. More likely to have screwups if every week I, or someone else when I'm out, have to break and reset 24 mirrors instead of just one.

I do need to look more at the copies property though. That could be useful in some other situations.
How's that working for you? Seems like it would be as straightforward as I was thinking - only possible.
Thanks Cindy - I've been looking for an admin guide! I'll play with the split command - sounds interesting.
Yeah, that's starting to sound like a fairly simple but equally robust solution. That may be the final solution. Thanks!
True! I don't need the same level of redundancy on the backup as the primary.
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Dav Banks

This message:

> How's that working for you? Seems like it would be as straightforward
> as I was thinking - only possible.

And this message:

> Yeah, that's starting to sound like a fairly simple but equally robust
> solution. That may be the final solution. Thanks!

didn't include any reference to what you were replying about, so I don't know which messages you were replying to when you sent those. If you're using the jive forums and you wish to carry on a dialogue with people who are using email, it's recommended to copy and paste what you're replying to into your reply, so the recipients know what you're responding to.

I am guessing you're replying to people saying "use zfs send". So my answer is: it works very well. Another feature in favor of zfs send instead of mirrors is that you can have your backup media compressed while your main pool probably isn't. And so forth.

The opposite is also true. If you have any special properties set on your main pool, they won't automatically be set on your receiving pool. So I personally recommend saving "zpool get all" and "zfs get all" into a txt file, and storing it along with your backup media, so you have it available if there were ever any confusion about it at all.
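[For the archive: what Ed's record-keeping suggestion might look like. Pool name and output paths are hypothetical.]

```shell
# Capture pool-level and dataset-level properties into text files
# that travel with the backup media
zpool get all tank > /backup-notes/tank-zpool-props.txt
zfs get -r all tank > /backup-notes/tank-zfs-props.txt
```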
On 27/07/2010 13:28, Edward Ned Harvey wrote:
> The opposite is also true. If you have any special properties set on your
> main pool, they won't automatically be set on your receiving pool. So I
> personally recommend saving "zpool get all" and "zfs get all" into a txt
> file, and store it along with your backup media. So you have it available,
> if ever there were any confusion about it at all.

PSARC/2010/193 defines a solution to solve that problem without having to save away a copy of 'zfs get all'.

http://arc.opensolaris.org/caselog/PSARC/2010/193/mail

--
Darren J Moffat
On Jul 27, 2010, at 7:13 AM, Darren J Moffat wrote:
> On 27/07/2010 13:28, Edward Ned Harvey wrote:
>> The opposite is also true. If you have any special properties set on your
>> main pool, they won't automatically be set on your receiving pool. So I
>> personally recommend saving "zpool get all" and "zfs get all" into a txt
>> file, and store it along with your backup media. So you have it available,
>> if ever there were any confusion about it at all.
>
> PSARC/2010/193 defines a solution to solve that problem without having to save away a copy of 'zfs get all'.
>
> http://arc.opensolaris.org/caselog/PSARC/2010/193/mail

Agree. This is a better solution because some configurable parameters are hidden from "zfs get all".

-- richard

--
ZFS and performance consulting
http://www.RichardElling.com
> From: Richard Elling [mailto:richard.elling at gmail.com]
>
>> http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
>
> Agree. This is a better solution because some configurable parameters
> are hidden from "zfs get all"

Forgive me for not seeing it ... That link is extremely dense, and 34 pages long ...

Is there an option that will capture properties better than "get all"? What is the suggested solution?

I don't see anything in "man zfs" ... but maybe it's only available in a later version of zfs?
On 28/07/2010 14:53, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.elling at gmail.com]
>>
>>> http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
>>
>> Agree. This is a better solution because some configurable parameters
>> are hidden from "zfs get all"
>
> Forgive me for not seeing it ... That link is extremely dense, and 34 pages
> long ...

It basically says that 'zfs send' gets a new '-b' option to "send back properties", 'zfs recv' gets '-o' and '-x' options to allow explicit set/ignore of properties in the stream, and 'zfs set' gets a new '-r' option.

     -b   Sends only received property values whether or not they are
          overridden by local settings, but only if the dataset has ever
          been received. Use this option when you want 'zfs receive' to
          restore received properties backed up on the sent dataset and
          to avoid sending local settings that may have nothing to do
          with the source dataset, but only with how the data is backed
          up.

> Is there an option, that will capture properties better than "get all"?
> What is the suggested solution?

If/when the approved changes integrate, it will look like:

  zfs send -Rb foo | <transport> | zfs recv ...

> I don't see anything in "man zfs" ... but maybe it's only available in a
> later version of zfs?

Based on the source code change history for onnv-gate, it doesn't appear to have integrated yet.

--
Darren J Moffat
> From: Darren J Moffat [mailto:darrenm at opensolaris.org]
>
> It basically says that 'zfs send' gets a new '-b' option to "send back
> properties", 'zfs recv' gets '-o' and '-x' options to allow
> explicit set/ignore of properties in the stream, and 'zfs set' gets a
> new '-r' option.
>
> If/when the approved changes integrate, it will look like:
>
> Based on the source code change history for onnv-gate, it doesn't appear
> to have integrated yet.

Ahh. So, for now I'm sticking with "zpool get all" and "zfs get all" stored in a text file, unless somebody has a better idea...