Hello all,

I am a complete newbie to OpenSolaris and need to set up a ZFS NAS. I have Linux experience, but I have never used ZFS. I tried to install OpenSolaris Developer build 134 on an 11TB hardware RAID-5 virtual disk, but after the installation I can only use 2TB of it, and I cannot partition the rest. I realize that the maximum partition size is 2TB, but I assume the rest should still be usable. The hardware is an HP ProLiant DL180 G6 with 12 1TB disks connected to a P212 controller in RAID-5. Could someone point me in the right direction or tell me what I am doing wrong? Any help is greatly appreciated.

Cheers,
Dusan
On Wed, Mar 24, 2010 at 11:01 AM, Dusan Radovanovic <dusan05 at gmail.com> wrote:
> I have tried to install OpenSolaris Developer 134 on a 11TB HW RAID-5
> virtual disk, but after the installation I can only use one 2TB disk, and I
> cannot partition the rest.

You would be much better off installing to a small internal disk, and then creating a separate pool for the 11TB of storage. The 2TB limit is because it's a boot drive. That limit should go away if you're using it as a separate storage pool.

--Tim
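For reference, once the OS lives on its own small device, the big RAID-5 LUN can be handed to ZFS whole; zpool then writes an EFI label on it, which is not subject to the 2TB SMI/VTOC limit that applies to boot disks. A minimal sketch, assuming the 11TB virtual disk shows up as c1t1d0 (the actual device name will differ):

  # create a single-device data pool on the hardware RAID-5 LUN
  zpool create tank c1t1d0

  # confirm the pool sees the full capacity
  zpool list tank

Note that in this layout ZFS has no redundancy of its own (the controller provides it), so it can detect corruption via checksums but cannot generally repair it.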
Hi

On Wednesday 24 March 2010 17:01:31 Dusan Radovanovic wrote:
> connected to P212 controller in RAID-5. Could someone direct me or suggest
> what I am doing wrong. Any help is greatly appreciated.

I don't know the cause, but I would work around it like this: configure the HW RAID controller to act as a "dumb" JBOD controller and thus make the 12 disks visible to the OS. Then you can start playing around with ZFS on these disks, e.g. creating different pools:

  zpool create testpool raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0

(Caveat: this is from the top of my head and might be very wrong.) This would create something like "RAID50". Then I would start reading, reading, and testing, testing :)

HTH
Carsten
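Assuming that command succeeds, a quick way to verify the layout and start carving out filesystems is something like the following (only a sketch; the pool and filesystem names are made up):

  # show the two raidz vdevs and all twelve disks
  zpool status testpool

  # create a filesystem and share it for NAS use
  zfs create testpool/data
  zfs set sharenfs=on testpool/data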
I believe that write caching is turned off on the boot drives (or is it on the controller, or both?), which could be a big problem.

On 03/24/10 11:07, Tim Cook wrote:
> You would be much better off installing to a small internal disk, and
> then creating a separate pool for the 11TB of storage. The 2TB limit
> is because it's a boot drive. That limit should go away if you're
> using it as a separate storage pool.
Thank you all for your valuable experience and fast replies. I see your point and will create one virtual disk for the system and one for the storage pool. My RAID controller is battery backed up, so I'll leave write caching on.

Thanks again,
Dusan
On Mar 24, 2010, at 9:14 AM, Karl Rossing wrote:
> I believe that write caching is turned off on the boot drives or is it the controller or both?

By default, ZFS will not enable the volatile write cache on SMI-labeled disk drives (e.g. boot disks).

> Which could be a big problem.

Actually, it is very rare that the synchronous write performance of a boot drive is a performance problem. Nonvolatile write caches are not a problem.

> I am a complete newbie to OpenSolaris, and must to setup a ZFS NAS. [...] I have tried to install
> OpenSolaris Developer 134 on a 11TB HW RAID-5 virtual disk, but after the installation I can only
> use one 2TB disk, and I cannot partition the rest.

Simple. Make a small LUN, say 20GB or so, and install the OS there.
 -- richard
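For anyone who wants to see what a drive's write cache is actually set to, format in expert mode has a cache submenu on Solaris. This is an interactive sketch only; the menu is not available for every controller or disk type:

  format -e
  (select the disk from the list)
  format> cache
  cache> write_cache
  write_cache> display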
On 24.03.2010 17:42, Richard Elling wrote:
> Nonvolatile write caches are not a problem.

Which is why ZFS isn't a replacement for proper array controllers (defining "proper" as those with sufficient battery to leave you with a seemingly intact filesystem), but a very nice augmentation for them. ;) As "someone" pointed out in another thread: proper storage still takes proper planning. ;)

//Svein
On Mar 24, 2010, at 10:05 AM, Svein Skogen wrote:
> Which is why ZFS isn't a replacement for proper array controllers (defining proper as those
> with sufficient battery to leave you with a seemingly intact filesystem), but a very nice
> augmentation for them. ;)

Nothing prevents a clever chap from building a ZFS-based array controller which includes nonvolatile write cache. However, the economics suggest that the hybrid storage pool model can provide a highly dependable service at a lower price point than the traditional array designs.

> As "someone" pointed out in another thread: Proper storage still takes proper planning. ;)

Good advice :-)
 -- richard
> Thank you all for your valuable experience and fast replies. I see your
> point and will create one virtual disk for the system and one for the
> storage pool. My RAID controller is battery backed up, so I'll leave
> write caching on.

I think the point is to say: ZFS software raid is both faster and more reliable than your hardware raid. Surprising though it may be for a newcomer, I have statistics to back that up, and an explanation of how it's possible, if you want to know.

You will do best if you configure the raid controller to JBOD. Yes, it's OK to enable WriteBack on all those disks, but just use the raid card for write buffering, not raid.

That suggestion might be great ideally. But how do you boot from some disk which isn't attached to the raid controller? Most servers don't have any other option ... So you might just make a 2-disk mirror, use that as a boot volume, and then JBOD all the other disks. That's somewhat a waste of disk space, but it might be your best solution.

This is, in fact, what I do. I have 2x 1TB disks dedicated to nothing but the OS. That's tremendous overkill. And all the other disks are a data pool. All of the disks are 1TB, because it greatly simplifies the use of a hot spare ... and I'm wasting nearly 1TB on the OS disks.
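As a concrete sketch of that layout on a 12-disk box (the device names are invented, and the 2-disk boot mirror itself is normally created by the installer rather than by hand): two disks go to the OS, nine to a raidz2 data pool, and one to a hot spare:

  # data pool across nine JBOD disks plus a hot spare
  zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
    c0t7d0 c0t8d0 c0t9d0 c0t10d0 \
    spare c0t11d0

  # verify that the spare is attached and available
  zpool status tank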
> > Which is why ZFS isn't a replacement for proper array controllers
> > (defining proper as those with sufficient battery to leave you with a
> > seemingly intact filesystem), but a very nice augmentation for them. ;)
>
> Nothing prevents a clever chap from building a ZFS-based array controller
> which includes nonvolatile write cache. However, the economics suggest
> that the hybrid storage pool model can provide a highly dependable service
> at a lower price-point than the traditional array designs.

I don't have finished results that are suitable for sharing yet, but I'm doing a bunch of benchmarks right now that suggest:

  -1- WriteBack enabled is much faster for writing than WriteThrough. (Duh.)
  -2- Ditching the WriteBack and using a dedicated ZIL device instead is even faster than that.

Oddly, the best performance seems to be using the ZIL device with all the disks in WriteThrough. You actually get slightly lower performance if you enable the ZIL device together with WriteBack.

My theory to explain the results I'm seeing: since the ZIL device performs best for zillions of tiny write operations, and the spindle disks perform best for large sequential writes, I suspect the ZIL accumulates tiny writes until they add up to a large sequential write, which is then flushed to the spindle disks. In this configuration the HBA write-back cache cannot add any benefit, because the data streams are already optimized for the devices they're writing to. Yet by enabling WriteBack you introduce a small delay before writes begin to hit the spindles. By switching to WriteThrough, you actually get better performance, as counter-intuitive as that may seem. :-)

So, if you've got access to a pair of decent ZIL devices, you're actually faster and more reliable running all your raid, caching, and buffering via ZFS instead of using a fancy HBA.
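For anyone wanting to try the same configuration, a dedicated (and preferably mirrored) log device is added to an existing pool like this. Only a sketch, with invented device names for the two SSDs:

  # attach a mirrored slog to the pool
  zpool add tank log mirror c2t0d0 c2t1d0

  # the log vdev then shows up in the pool layout
  zpool status tank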
On 03/24/10 12:54, Richard Elling wrote:
> Nothing prevents a clever chap from building a ZFS-based array controller
> which includes nonvolatile write cache.

+1 to that. Something that is inexpensive and small (4GB?) and works in a PCI Express slot.
On 24.03.2010 19:53, Karl Rossing wrote:
> On 03/24/10 12:54, Richard Elling wrote:
>> Nothing prevents a clever chap from building a ZFS-based array controller
>> which includes nonvolatile write cache.
>
> +1 to that. Something that is inexpensive and small (4GB?) and works in
> a PCI express slot.

Maybe someone should look at implementing the ZFS code for the XScale range of I/O processors (such as the IOP333)?

//Svein
On Wed, Mar 24, 2010 at 08:02:06PM +0100, Svein Skogen wrote:
> Maybe someone should look at implementing the zfs code for the XScale
> range of io-processors (such as the IOP333)?

NetBSD runs on (many of) those, and NetBSD has an (in-progress, still-some-issues) ZFS port. Hopefully they will converge in due course to provide exactly this.

The particularly nice thing is that using ZFS in the "RAID controller firmware" like this would result in contents that are interchangeable with standard ZFS, needing just an export/import. This is a big improvement over many other dedicated raid solutions, and provides good comfort when thinking about recovery scenarios for a controller failure.

Unfortunately, it would mostly be useful only with zvols for presentation to the host; there's not a good interface, and usually not much RAM, for the controller to run the whole ZPL layer. That would still be useful for controllers running in non-ZFS servers, as an alternative to external boxes with COMSTAR and various transports.

If you could find a way to get a zfs send/recv stream through from the controller, though, some interesting deployment possibilities open up.

-- Dan.
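For context, the send/recv plumbing Dan is referring to normally looks something like this on an ordinary host (a sketch only; the pool, filesystem, and host names are made up, and a pool called "backup" is assumed to exist on the receiving machine):

  # take a snapshot and replicate it to another machine
  zfs snapshot tank/data@monday
  zfs send tank/data@monday | ssh backuphost zfs receive backup/data

  # later, send only the changes since the previous snapshot
  zfs snapshot tank/data@tuesday
  zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data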
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey <solaris2 at nedharvey.com> wrote:
> I think the point is to say: ZFS software raid is both faster and more
> reliable than your hardware raid. Surprising though it may be for a
> newcomer, I have statistics to back that up,

Can you share it?

> You will do best if you configure the raid controller to JBOD.

Problem: HP's storage controller doesn't support that mode.

-- Fajar
Fajar A. Nugraha wrote:
> On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
> <solaris2 at nedharvey.com> wrote:
>> You will do best if you configure the raid controller to JBOD.
>
> Problem: HP's storage controller doesn't support that mode.

It does, ish. It forces you to create a bunch of single-disk RAID 0 logical drives. It's what we do at work on our HP servers running ZFS.

The bigger problem is that you have to script around a disk failure, as the array won't bring a non-redundant logical drive back online after a disk failure without being kicked (which is a good thing in general, but annoying for ZFS).

-- Carson
Carson Gaspar wrote:
> The bigger problem is that you have to script around a disk failure, as
> the array won't bring a non-redundant logical drive back online after a
> disk failure without being kicked (which is a good thing in general, but
> annoying for ZFS).

*sigh* Too tired. I meant after you replace a failed disk. Obviously it won't come back online while the disk is failed...

-- Carson
On Thu, Mar 25, 2010 at 10:31 AM, Carson Gaspar <carson at taltos.org> wrote:
> Fajar A. Nugraha wrote:
>>> You will do best if you configure the raid controller to JBOD.
>>
>> Problem: HP's storage controller doesn't support that mode.
>
> It does, ish. It forces you to create a bunch of single disk raid 0 logical
> drives. It's what we do at work on our HP servers running ZFS.

That's different. Among other things, it won't allow tools like smartctl to work.

> The bigger problem is that you have to script around a disk failure, as the
> array won't bring a non-redundant logicaldrive back online after a disk
> failure without being kicked (which is a good thing in general, but annoying
> for ZFS).

How do you replace a bad disk, then? Is there some userland tool for OpenSolaris which can tell the HP array to bring that disk back up? Or do you have to restart the server, go into the BIOS, and enable it there?

-- Fajar
Fajar A. Nugraha wrote:
> How do you replace a bad disk, then? Is there some userland tool for
> OpenSolaris which can tell the HP array to bring that disk back up? Or
> do you have to restart the server, go into the BIOS, and enable it there?

hpacucli will do it (usually /opt/HPQacucli/sbin/hpacucli). You need to:

  # Wipe the new disk. Not strictly necessary, but I'm paranoid
  hpacucli ctrl slot=$n physicaldrive $fixeddrive modify erase

  # And online the LD...
  hpacucli ctrl slot=$n logicaldrive $ld modify reenable forced

-- Carson
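Once the logical drive is back online, the ZFS side of the replacement is the usual routine. A sketch, with an invented device name for the single-disk logical drive that failed:

  # tell ZFS to resilver onto the re-enabled device in the same slot
  zpool replace tank c1t4d0

  # watch the resilver progress
  zpool status tank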
Hi,

Actually, the idea of having the ZFS code inside a HW raid controller seems quite interesting. Imagine the possibility of having any OS with raid volumes backed by all the good aspects of ZFS, especially the checksums and raidz versus the "raid5 write hole"... I also think that if the hardware implementation supported import/export, it would be great for migration tasks!

However, since my knowledge in this area is very, very limited, I wonder whether it is technically achievable, or whether the challenges are far bigger than the benefits... This would be quite interesting as a marketing tool, because anyone could have the power of ZFS on their storage without having to deploy Solaris/OpenSolaris; even Windows systems would benefit from this ;)

ZFS for the masses, by the masses ;)

Bruno

On 24-3-2010 20:02, Svein Skogen wrote:
> Maybe someone should look at implementing the zfs code for the XScale
> range of io-processors (such as the IOP333)?
On 25.03.2010 04:13, Fajar A. Nugraha wrote:
> On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
> <solaris2 at nedharvey.com> wrote:
>> You will do best if you configure the raid controller to JBOD.
>
> Problem: HP's storage controller doesn't support that mode.

Is this particular HP controller one of the rebranded LSI MegaRAID MFI or MPT ones that has had an HP lobotomy of the original firmware? (The LSI original firmware is a _LOT_ better.) If so, the hardware is identical to the MPT/MFI one, and you can use the LSI firmware (and flash back the original HP one if you want; I've done it both ways).

//Svein
> > I think the point is to say: ZFS software raid is both faster and more
> > reliable than your hardware raid. Surprising though it may be for a
> > newcomer, I have statistics to back that up,
>
> Can you share it?

Sure. Just go to http://nedharvey.com and you'll see four links on the left side, "RAID Benchmarks." The simplest comparison is to look at Bob's Method Summary. You'll see statistics for, for example:

  raidz-5disks
  raid5-5disks-hardware

You'll see that it's not a 100% victory for either one: the hardware raid is able to do sequential writes faster, and stride reads faster, but in all the other categories the raidz is faster, and by a larger margin. If all categories are equally important in your usage scenario, then the average is 3.53 vs 2.47 in favor of ZFS raidz. But if your usage characteristics don't weight the various operations equally ... most people care about random reads and random writes more than they care about other operations ... then the results are 2.18 vs 1.52 in favor of ZFS raidz.

As for "more reliable", here's my justification for saying that. For starters, one fewer single point of failure. If you're doing RAID with your HBA and it dies, you risk losing your whole data set, regardless of the raid and the fact that the disks are still good. Also, you can't attach your disks to some other system to recover your data; you need to replace your HBA with an identical HBA to even stand a chance. But if you're doing ZFS raid, there is no HBA that can die. If you needed to, you could attach your disks to another system and simply import the ZFS pool.

> > You will do best if you configure the raid controller to JBOD.
>
> Problem: HP's storage controller doesn't support that mode.

Sure it does. You just make a RAID0 or RAID1 with a single disk in it. And make another. And make another. Etc. This is how I do it on a Dell PERC.
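The "move the disks and import" step really is that short. As a sketch, assuming a pool named tank:

  # on the old system, if it is still alive
  zpool export tank

  # on the new system, after attaching the disks
  zpool import          # lists pools found on the attached devices
  zpool import tank     # or: zpool import -f tank, if the pool was never cleanly exported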
> The bigger problem is that you have to script around a disk failure, as
> the array won't bring a non-redundant logicaldrive back online after a
> disk failure without being kicked (which is a good thing in general, but
> annoying for ZFS).

I'd like to follow up on that point, because until recently I always said the same thing about the Dell PERC, and I learned there's a gray area. I bet there is for you too.

If you create a hardware mirror and one disk dies, you can simply toss another disk into the failed slot, and it will auto-resilver. No need to kick it. If you create a raid-5 or raid-6, the same is true. No need to kick it.

If you have a hotspare ... and the hotspare gets consumed ... then you slap a new disk into its place, and it will not automatically become a new hotspare. If you have a single-disk raid-0 or raid-1 and it dies, you replace it, and it will not automatically become a new raid-0 or raid-1. You need to kick it.

I don't know what the HP equivalent is, but on a Dell, using PERC, you use the MegaCLI utility to monitor the health of your HBA. Yes, it is a pain. On a Sun it varies based on which specific system, but they have a similar thing: some GUI utility to monitor and reconfigure the HBA.
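For the record, the kind of MegaCLI health checks being described usually look something like this. Treat it as a sketch: the binary name and install path vary between MegaCli, MegaCli64, and vendor packaging:

  # list physical drives and their firmware state (Online, Failed, Unconfigured, ...)
  MegaCli64 -PDList -aALL

  # list logical drives and their state (Optimal, Degraded, ...)
  MegaCli64 -LDInfo -Lall -aALL

  # check the write-back cache battery
  MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL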