Folks,

From zfs documentation, it appears that a "vdev" can be built from more
vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs,
and a mirror can be built across a few raidz vdevs.

Is my understanding correct? Also, is there a limit on the depth of a vdev?

Thank you in advance for your help.

Regards,
Peter
--
This message posted from opensolaris.org
On Mon, Nov 8, 2010 at 3:27 PM, Peter Taps <ptrtap at yahoo.com> wrote:
> Folks,
>
> From zfs documentation, it appears that a "vdev" can be built from more
> vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs,
> and a mirror can be built across a few raidz vdevs.
>
> Is my understanding correct? Also, is there a limit on the depth of a vdev?
>
> Thank you in advance for your help.
>
> Regards,
> Peter

No, you cannot do multi-level vdevs.

--Tim
+------------------------------------------------------------------------------
| On 2010-11-08 13:27:09, Peter Taps wrote:
|
| From zfs documentation, it appears that a "vdev" can be built from more
| vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs,
| and a mirror can be built across a few raidz vdevs.
|
| Is my understanding correct? Also, is there a limit on the depth of a vdev?

You are incorrect. The man page states:

     Virtual devices cannot be nested, so a mirror or raidz virtual
     device can only contain files or disks. Mirrors of mirrors (or
     other combinations) are not allowed.

     A pool can have any number of virtual devices at the top of the
     configuration (known as "root vdevs"). Data is dynamically
     distributed across all top-level devices to balance data among
     devices. As new virtual devices are added, ZFS automatically
     places data on the newly available devices.

--
bdha
cyberpunk is dead. long live cyberpunk.
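As a concrete illustration of "any number of virtual devices at the top of
the configuration", here is a minimal sketch (device names are hypothetical;
substitute your own) that builds a pool from two top-level mirrors and later
adds a third:

```shell
# Create a pool with two top-level (root) mirror vdevs.
# ZFS dynamically stripes data across both.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Add a third top-level mirror later; ZFS automatically places
# new data on the newly available vdev as well.
zpool add tank mirror c0t4d0 c0t5d0

# Inspect the resulting layout: three mirrors at the top level,
# none nested inside another.
zpool status tank
```

Note the nesting in `zpool status` output never goes deeper than pool ->
root vdev -> disk, which is the point the man page is making.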
Bryan Horstmann-Allen wrote:
> | On 2010-11-08 13:27:09, Peter Taps wrote:
> |
> | From zfs documentation, it appears that a "vdev" can be built from more
> | vdevs. That is, a raidz vdev can be built across a bunch of mirrored
> | vdevs, and a mirror can be built across a few raidz vdevs.
> |
> | Is my understanding correct? Also, is there a limit on the depth of a
> | vdev?

It looks like there is confusion coming from the use of the terms virtual
device and vdev. The documentation can be confusing in this regard.

There are two types of vdevs: root and leaf. A pool's root vdevs are
usually of the 'mirror' or 'raidz' type, but can also directly use the
underlying devices if you don't want any redundancy from ZFS whatsoever.
Pools dynamically stripe data across all the root vdevs present (and not
yet full) in that pool at the time the data was written.

Leaf vdevs directly use the underlying devices. Underlying devices may be
hard drives, solid state drives, iSCSI volumes, or even files on
filesystems. Root vdevs cannot directly be used as underlying devices.

> You are incorrect.
>
> The man page states:
>
>      Virtual devices cannot be nested, so a mirror or raidz virtual
>      device can only contain files or disks. Mirrors of mirrors (or
>      other combinations) are not allowed.
>
>      A pool can have any number of virtual devices at the top of the
>      configuration (known as "root vdevs"). Data is dynamically
>      distributed across all top-level devices to balance data among
>      devices. As new virtual devices are added, ZFS automatically
>      places data on the newly available devices.
This has been touched on and discussed in some previous threads. There is
a way to perform nesting, but it is *not* recommended. The trick is to
insert another abstraction layer that hides ZFS from itself (or, in other
words, converts a root vdev into an underlying device). An example would
be creating iSCSI targets out of a ZFS pool, and then creating a second
ZFS pool out of those iSCSI targets. Another example would be creating a
ZFS pool out of files stored on another ZFS pool.

The main reasons that have been given for not doing this are unknown edge
and corner cases that may lead to deadlocks, and that it creates a complex
structure with potentially undesirable and unintended performance and
reliability implications. Deadlocks may occur in low resource conditions.
If resources (disk space and RAM) never run low, the deadlock scenarios
may not arise.
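For the curious, the file-backed variant of the trick looks roughly like
the following sketch. It is *not* recommended for anything beyond
experimentation, for the reasons given above; pool names, sizes, and paths
are all hypothetical:

```shell
# Carve out a dataset on the existing pool "tank" to hold backing files.
zfs create tank/backing

# Create two 1 GB files on the ZFS filesystem (mkfile is the Solaris way).
mkfile 1g /tank/backing/d0 /tank/backing/d1

# The files now act as underlying devices for a second pool -- ZFS is
# hidden from itself behind the file abstraction, so the "no nesting"
# rule is not tripped, but every write now traverses two pools.
zpool create nested mirror /tank/backing/d0 /tank/backing/d1
```

Each I/O to `nested` turns into I/O against `tank`, which is where the
resource-exhaustion and deadlock concerns come from.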
On 09/11/10 11:46 AM, Maurice Volaski wrote:
> <html>...</html>

Is that horrendous mess Outlook's fault? If so, please consider not
using it.

--Toby
I think my initial response got mangled. Oops.

> creating a ZFS pool out of files stored on another ZFS pool. The main
> reasons that have been given for not doing this are unknown edge and
> corner cases that may lead to deadlocks, and that it creates a complex
> structure with potentially undesirable and unintended performance and
> reliability implications.

Computers are continually encountering unknown edge and corner cases in
the various things they do all the time. That's what we have testing for.

> Deadlocks may occur in low resource
> conditions. If resources (disk space and RAM) never run low, the
> deadlock scenarios may not arise.

It sounds like you mean any low resource condition. Presumably, utilizing
complex pool structures like these will tax resources, but there are many
other ways to do that.

--
Maurice Volaski, maurice.volaski at einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University
> On 09/11/10 11:46 AM, Maurice Volaski wrote:
>> <html>...</html>
>
> Is that horrendous mess Outlook's fault? If so, please consider not
> using it.

Yes, it is. :-( Outlook 2011 on the Mac, which just came out, so perhaps
I'll get lucky and they will fix it... eventually.

--
Maurice Volaski, maurice.volaski at einstein.yu.edu
Computing Support
Dominick P. Purpura Department of Neuroscience
Albert Einstein College of Medicine of Yeshiva University
Maurice Volaski wrote:
> I think my initial response got mangled. Oops.
>
>> creating a ZFS pool out of files stored on another ZFS pool. The main
>> reasons that have been given for not doing this are unknown edge and
>> corner cases that may lead to deadlocks, and that it creates a complex
>> structure with potentially undesirable and unintended performance and
>> reliability implications.
>
> Computers are continually encountering unknown edge and corner cases in
> the various things they do all the time. That's what we have testing for.

I agree. The earlier discussions of this topic raised the issue that this
is not a well tested area and is an unsupported configuration. Some of the
problems that arise in nested pool configurations may also arise in
supported pool configurations; nested pools may significantly aggravate
the problems. The trick is to find test cases in supported configurations
so the problems can't simply be swept under the rug of "unsupported
configuration".

>> Deadlocks may occur in low resource
>> conditions. If resources (disk space and RAM) never run low, the
>> deadlock scenarios may not arise.
>
> It sounds like you mean any low resource condition. Presumably, utilizing
> complex pool structures like these will tax resources, but there are many
> other ways to do that.

We have seen ZFS systems lose stability under low resource conditions.
They don't always gracefully degrade/throttle back performance as
resources run very low.

I see a parallel in the 64-bit vs 32-bit ZFS code: the 32-bit code has
much tighter resource constraints put on it due to memory addressing
limits, and we see notes in many places that the 32-bit code is not
production-ready and not recommended unless you have no other choice. The
machines the 32-bit code is run on also tend to have tighter physical
resource limits, which compounds the problems.