Congratulations on the release of ZFS.

Is it possible to expand an existing raid-z in ZFS?

From what I read in the docs, it appears that I can create a pool with a
raid-z and add another raid-z later to expand the size, but I don't see
anything related to expanding the original raid-z.

For example, say I have a system with six 400GB drives in a raid-z and want
to add two more 400GB drives to the pool at some later date (and keep
redundancy of some kind). I can add the two new drives as a mirrored pair or
a new 2-disk raid-z (both of which waste a whole disk's worth of space). I
don't see any option to grow an existing raid-z. I guess I'm looking for
something like "zpool expand raidz".

Did I miss this somewhere in the docs?

If this is not currently supported, are there plans to support it in the
future?

Thanks for any help,
Jason
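For reference, the mirrored-pair workaround described above would look
roughly like this (the pool and device names are made up for illustration):

  # existing pool: six 400GB drives in one raid-z group
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

  # later: add the two new 400GB drives to the pool as a mirrored pair
  zpool add tank mirror c0t6d0 c0t7d0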
I've not tested this but %man zpool says:

---
     zpool attach [-f] pool device new_device

         Attach new_device to the mirror containing device. If device is
         not currently part of any mirror or raidz, then device
         automatically transforms into a two-way mirror of device and
         new_device. In either case, new_device begins to resilver
         immediately. The "zpool status" command reports the progress of
         the resilver.
---

/Per
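For a plain, non-redundant disk, the attach described there would be used
roughly like this (the pool and device names are just placeholders):

  # turn the single disk c0t0d0 into a two-way mirror with c0t1d0
  zpool attach tank c0t0d0 c0t1d0
  zpool status tank    # reports the resilver progress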
Thanks a bunch. Guess I missed that - I was mainly looking at the "ZFS
Administration Guide" (zfsadmin_1016.pdf) and don't see it covered in there.
I had searched that doc for "raid" and "expand", but not the man pages.
Sorry for not RTFMing fully...

I'm burning CDs now and I'll test this out once I get my system running.

Jason
No, you cannot expand a RAID-Z stripe. It's fundamentally a fixed-width
stripe. For more information on RAID-Z, check out Jeff's blog:

http://blogs.sun.com/roller/page/bonwick?entry=raid_z

- Eric

On Fri, Nov 18, 2005 at 10:29:30AM -0800, Jason Upton wrote:
> Is it possible to expand an existing raid-z in ZFS?
> [...]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Hmm, sounds like a manpage bug. The underlying kernel implementation allows
you to 'mirrorize' individual devices in a RAID-Z vdev (this is how the
'replacing' vdev is implemented). But you cannot attach a device to a
RAID-Z vdev via the CLI.

- Eric

On Fri, Nov 18, 2005 at 11:07:34AM -0800, Per Öberg wrote:
> I've not tested this but %man zpool says..
> [...]
"Attach new_device to the mirror" That quote specifically mentions that attach is used with a mirror. The way I read it it will only work for mirrors, not raid-z devices. I hope I''m wrong, but that''s the way I intarpret it. /Magnus This message posted from opensolaris.org
Should I take this to mean that there will never be a way to make a bigger
raid-z out of an existing raid-z without having somewhere else to store all
the data on the raid-z?

I really hope that's not the case. If I want to add a disk to an array it's
because I need more space. I'm rather unlikely to have enough space
available to store the data while recreating the array with more disks.

Adding another raid-z to the pool seems a quite poor alternative to me, as
the amount of redundant data is doubled in the pool (assuming identically
sized discs) without fault tolerance being improved at all.

Please tell me that there's hope for a future way of expanding a raid-z in
place :) Not necessarily while in use, of course.
On Fri, Nov 18, 2005 at 11:52:59AM -0800, Magnus Lidbom wrote:
> Should I take this to mean that there will never be a way to make a
> bigger raid-z out of an existing raid-z without having somewhere else
> to store all the data on the raid-z?

Yes, that is a relatively safe bet. Among other reasons, all the DVAs
(device virtual addresses) are calculated from the fixed size of the RAID-Z
device. That means if you were to add a new device to the raid-z stripe,
all your existing addresses would be wrong. Also, in order to do a complete
replacement, we would need to support mirrors of RAID-Z vdevs. This is
planned for a future release.

> I really hope that's not the case. If I want to add a disk to an array
> it's because I need more space. I'm rather unlikely to have enough
> space available to store the data while recreating the array with more
> disks.

Why not just use dynamic striping? This may not make sense if you want to
add a single disk, but makes much more sense when adding a group of disks.
Two 4-way RAID-Z devices are a much better configuration than a single
8-way in many ways.

> Adding another raid-z to the pool seems a quite poor alternative to me,
> as the amount of redundant data is doubled in the pool (assuming
> identically sized discs) without fault tolerance being improved at all.

Actually, no data is made redundant. Adding another raid-z to the pool will
cause data to be dynamically striped between the two vdevs, not mirrored.
And you will get better performance as well.

Adding a device to a raid-z stripe will actually _decrease_ fault
tolerance, not increase it. For example, imagine two scenarios:

  a) 8-disk RAID-Z
  b) 2 x 4-disk RAID-Z

In the first case, you can handle one bad device per 8 in your system. In
the latter case, you can handle one bad device per group of 4 in your
system. The latter has better fault tolerance (while taking only a small
space penalty for the cost of parity information).

> Please tell me that there's hope for a future way of expanding a raid-z
> in place :) Not necessarily while in use, of course.

Yes, this is theoretically possible to do while the pool is offline. But
besides the incredible complexity of migrating the data in the first place,
all ZFS operations have been designed to operate while the pool is online.
It would be a shame to start a trend in the opposite direction.

That being said, if you think you can develop a tool to do this, let us
know. The source code is out there for a reason!

- Eric
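To make the two layouts concrete, a quick sketch with made-up device names:

  # a single 8-wide raid-z group
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

  # versus two 4-wide raid-z groups, dynamically striped in one pool
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zpool add    tank raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0

The second layout spends one extra disk on parity, but each 4-disk group can
lose a disk independently, which is the fault-tolerance argument above.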
> Magnus Lidbom wrote:
> > Should I take this to mean that there will never be a way to make a
> > bigger raid-z out of an existing raid-z without having somewhere else
> > to store all the data on the raid-z?
>
> Yes, that is a relatively safe bet. Among other reasons, all the DVAs
> (device virtual addresses) are calculated from the fixed size of the
> RAID-Z device. That means if you were to add a new device to the raid-z
> stripe, all your existing addresses would be wrong. Also, in order to do
> a complete replacement, we would need to support mirrors of RAID-Z
> vdevs. This is planned for a future release.

There's plenty in your comment that went straight over my head. What I got
out of it was basically: "It's quite difficult to implement". I hope that
won't stop you, or someone else, from implementing it in the end.

> Why not just use dynamic striping? This may not make sense if you want
> to add a single disk, but makes much more sense when adding a group of
> disks. Two 4-way RAID-Z devices are a much better configuration than a
> single 8-way in many ways.

Exactly. For major upgrades a new raid-z makes sense. For an upgrade on a
budget where you add just one disk it's not an option. For adding two disks
it's possible, but half the space is "wasted" by parity data.

> Actually, no data is made redundant. Adding another raid-z to the pool
> will cause data to be dynamically striped between the two vdevs, not
> mirrored.

I thought dynamic striping was just splitting data across vdevs, with no
parity information or fault tolerance unless the vdevs provide it. Doesn't
the new raid-z have its own parity information? Making for double the
amount of parity data that you would have with just one raid-z (assuming
all physical discs are the same size)?

> And you will get better performance as well.

I'm probably just flaunting my ignorance here, but I don't get it. Why
would multiple smaller raid-z devices provide better performance than one
bigger raid-z using the same disks? Could you recommend some source of
documentation that explains performance considerations for different
raid-z configurations? I've read all I could find in the Administration
Guide and the zpool man page about raid-z, but saw no mention of this.

> > Please tell me that there's hope for a future way of expanding a
> > raid-z in place :) Not necessarily while in use, of course.
>
> Yes, this is theoretically possible to do while the pool is offline.
> But besides the incredible complexity of migrating the data in the first
> place, all ZFS operations have been designed to operate while the pool
> is online. It would be a shame to start a trend in the opposite
> direction.
>
> That being said, if you think you can develop a tool to do this, let us
> know. The source code is out there for a reason!

I get a minor hubris infection every now and then, but not severe enough to
make me believe I'm capable of implementing such a tool :)

Best regards
/Magnus
Hi,

I will reply to the original email instead of the last email in this
thread ;).

There are some options to expand a RAID-Z group on the fly, i.e. with zero
or minimal downtime. As mentioned earlier, it is not possible to expand a
RAID-Z group from raidz(disk1, disk2, disk3, disk4, disk5, disk6) to
raidz(disk1, disk2, disk3, disk4, disk5, disk6, disk7) for architectural
reasons, but ... Here I will describe a few approaches:

I) Using the dynamic LUN extension capability of ZFS and zpool replace.
Here is a simple example:

[root@sun] mkfile 80m /test/1
[root@sun] mkfile 80m /test/2
[root@sun] mkfile 80m /test/3
[root@sun] mkfile 80m /test/4
[root@sun] mkfile 160m /test/5
[root@sun] mkfile 160m /test/6
[root@sun] mkfile 160m /test/7
[root@sun] mkfile 160m /test/8
[root@sun] zpool create -f pool raidz /test/1 /test/2 /test/3 /test/4
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                    302M   54.0K    302M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            /test/1  ONLINE       0     0     0
            /test/2  ONLINE       0     0     0
            /test/3  ONLINE       0     0     0
            /test/4  ONLINE       0     0     0

[root@sun] zpool replace pool /test/1 /test/5
[root@sun] zpool replace pool /test/2 /test/6
[root@sun] zpool replace pool /test/3 /test/7
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                    302M    211K    302M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Nov 29 13:13:31 2005
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            /test/5  ONLINE       0     0     0
            /test/6  ONLINE       0     0     0
            /test/7  ONLINE       0     0     0  14.5K resilvered
            /test/4  ONLINE       0     0     0

[root@sun] zpool replace pool /test/4 /test/8
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                    622M    262K    622M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Nov 29 13:14:11 2005
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            /test/5  ONLINE       0     0     0
            /test/6  ONLINE       0     0     0
            /test/7  ONLINE       0     0     0
            /test/8  ONLINE       0     0     0  11.0K resilvered

What we get at the end is an expanded RAID-Z group, with zero downtime! You
can use this method to replace the disks in a RAID-Z group one by one with
larger-capacity disks.

II) If your RAID-Z group consists of disks/LUNs from a storage array with a
RAID controller, you can expand the LUN sizes at the RAID controller level.
ZFS will see the LUN size expansion immediately. I cannot test it now, but
this is what I know about ZFS behaviour.

III) To move data from raidz(disk1, disk2, disk3, disk4, disk5, disk6) to
raidz(disk6, disk7, disk8, disk9, disk10, disk11, disk12) you can use zfs
backup/restore for every dataset. To minimize downtime you can use full and
incremental backups (see the sketch just after this message). In this case
a short application downtime will be required. This method can be
considered a form of RAID-Z expansion.

IV) In future one can imagine RAID-Z expansion from raidz(disk1, disk2,
disk3, disk4, disk5, disk6) to raidz(concat(disk1,disk7),
concat(disk2,disk8), concat(disk3,disk9), concat(disk4,disk10),
concat(disk5,disk11), concat(disk6,disk12)), where the size of disk1 is not
necessarily equal to the size of disk7. But the ZFS developers should
answer for themselves whether this enhancement is worth their effort. In my
opinion - yes.
V) In future, building on approach (IV), adding one disk to raidz(disk1,
disk2, disk3, disk4, disk5, disk6) could be handled by adding slices in the
following manner:

raidz(concat(disk1,disk7slice1), concat(disk2,disk7slice2),
      concat(disk3,disk7slice3), concat(disk4,disk7slice4),
      concat(disk5,disk7slice5), concat(disk6,disk7slice6))

But of course this approach has serious performance implications.

Greetings,
Robert Prus

Jason Upton wrote:
> Is it possible to expand an existing raid-z in ZFS?
> [...]
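A minimal sketch of approach (III) with full and incremental streams, for a
single dataset (pool, dataset, and snapshot names are placeholders, and the
command names shown are the later send/receive ones; early builds called
them "zfs backup" and "zfs restore"):

  # full copy while the application is still running
  zfs snapshot oldpool/data@full
  zfs send oldpool/data@full | zfs receive newpool/data

  # stop the application, then send a small incremental to catch up
  # (the target must not have been modified since receiving @full)
  zfs snapshot oldpool/data@final
  zfs send -i oldpool/data@full oldpool/data@final | zfs receive newpool/data

  # point the application at newpool/data, then destroy the old pool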
> Two 4-way RAID-Z devices are a much better configuration than a single
> 8-way in many ways.

I thought I read somewhere that you (well, the ZFS team) were working on a
version of raid-z with increased redundancy (surviving 2 disk failures, for
instance). In that case, it would make sense to put all the disks together.
All I can say is WOW!

I tried this 2 ways:

First, like the post above, to simulate adding extra drives to the system,
then "moving" the pool to the new drives.

Then I tried an in-place replace (make the backing file bigger, then
zpool replace pool /test/{1-4}) to simulate "upgrading" a drive at a time.
This also worked great, but you have to wait for each disk to resilver
before moving on (or you will lose the array).

Note: both of these were tested with an iso bigger than one disk (to ensure
it was on multiple disks) and md5sums were ok after each test.

So if you limit your raidz's to 3 or 4 disks, you not only get added
redundancy, but you have a reasonable upgrade path (a little riskier if you
do it in-place) by replacing 3 or 4 disks at a time. I didn't test
mirroring, but I assume that it would work.

Now I just can't wait for the SATA framework ...
On Tue, Nov 29, 2005 at 11:26:59AM -0800, Jeb Campbell wrote:
> Then I tried an in-place replace (make the backing file bigger, then
> zpool replace pool /test/{1-4}) to simulate "upgrading" a drive at a
> time. This also worked great, but you have to wait for each disk to
> resilver before moving on (or you will lose the array).

Did you actually try multiple replaces at the same time, or are you
assuming it won't work? It actually should work just fine. If not, please
file a bug.


--Bill
Bill Moore wrote:
> On Tue, Nov 29, 2005 at 11:26:59AM -0800, Jeb Campbell wrote:
> > Then I tried an in-place replace (make the backing file bigger, then
> > zpool replace pool /test/{1-4}) to simulate "upgrading" a drive at a
> > time. This also worked great, but you have to wait for each disk to
> > resilver before moving on (or you will lose the array).
>
> Did you actually try multiple replaces at the same time, or are you
> assuming it won't work? It actually should work just fine. If not,
> please file a bug.

I think he's simulating yanking a drive and replacing it with a higher
capacity drive. The simulation should really have included dd'ing zeros
all over the newly enlarged backing file. :-)

So, just like a RAID running in degraded mode, any failure on one of the
other drives during the reconstruction to the new drive will bork your
data. After the resilver completes, repeat the process until all the drives
in your system are replaced. For those of us who can't afford multiple JBOD
enclosures, this seems like a useful if slightly risky technique.

BTW, there doesn't seem to be a way to designate a hot spare for a zpool or
set of zpools. Is this a planned ZFS feature?

-Jason
Sorry, I was being quick.

For the in-place replace, I actually offline'd the file/drive, then moved a
new file in -- and yes, I should have zero'd it, but I was confident in
ZFS ;)

Anyway, I will test again on true disks/arrays whenever the SATA framework
is merged. Getting SATA merged would be a great gift for the holidays ... ;)
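For anyone who wants to reproduce the in-place variant with file-backed
vdevs, one possible sequence looks roughly like this (paths and sizes are
placeholders, and the exact steps are an assumption based on the
description above, not a tested recipe):

  zpool offline pool /test/1      # take the old backing file out of service
  mkfile 160m /test/1             # put a larger "drive" at the same path
  zpool replace pool /test/1      # resilver onto the new, larger file
  zpool status pool               # wait until the resilver completes
  # then repeat for /test/2 .. /test/4, one disk at a time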
Hi,

A small correction to my previous posting:

> III) To move data from raidz(disk1, disk2, disk3, disk4, disk5, disk6) to
> raidz(disk6, disk7, disk8, disk9, disk10, disk11, disk12)

Sorry, I meant:

III) To move data from raidz(disk1, disk2, disk3, disk4, disk5, disk6) to
raidz(disk7, disk8, disk9, disk10, disk11, disk12, disk13)

-----------------

To continue some thoughts about dynamic expansion of ZFS storage pools:

VI) You can use mirrors (i.e. zpool attach/detach) to dynamically expand
ZFS in some cases.

[root@sun] mkfile 80m /test/1
[root@sun] mkfile 160m /test/2
[root@sun] zpool create pool /test/1
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                   75.5M   32.5K   75.5M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          /test/1    ONLINE       0     0     0

[root@sun] zpool attach pool /test/1 /test/2
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                   75.5M   51.0K   75.5M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Nov 30 11:07:27 2005
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            /test/1  ONLINE       0     0     0
            /test/2  ONLINE       0     0     0  32.5K resilvered

[root@sun] zpool detach pool /test/1
[root@sun] zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool                    156M   89.5K    155M     0%  ONLINE     -
[root@sun] zpool status
  pool: pool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Nov 30 11:07:27 2005
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          /test/2    ONLINE       0     0     0  32.5K resilvered

At the end you get an expanded ZFS storage pool! The same applies to LUN
expansion via a storage array RAID controller.

VII) In future one can imagine an extension of this approach working in the
following manner:

a) Create a RAID-Z group using the standard syntax, e.g.:

   zpool create -f pool raidz disk1 disk2 disk3 disk4 disk5 disk6

b) Extend the storage pool to a configuration like:

   mirror( raidz(disk1, disk2, disk3, disk4, disk5, disk6),
           raidz(disk7, disk8, disk9, disk10, disk11, disk12, disk13) )

   This will cause resilvering. According to the zpool(1) man page, this is
   not allowed at the moment:

   > Virtual devices cannot be nested, so a mirror or raidz virtual device
   > can only contain files or disks. Mirrors of mirrors (or other
   > combinations) are not allowed.

c) Detach the first vdev, i.e. raidz(disk1, disk2, disk3, disk4, disk5,
   disk6).

VIII) In future one can imagine extension of a RAID-Z group working as
follows:

a) Create a RAID-Z group using the standard syntax, e.g.:

   zpool create -f pool raidz disk1 disk2 disk3 disk4 disk5 disk6

b) Extend the storage pool with:

   zpool add pool raidz disk7 disk8 disk9 disk10 disk11 disk12 disk13

   to create a configuration like:

   stripe( raidz(disk1, disk2, disk3, disk4, disk5, disk6),
           raidz(disk7, disk8, disk9, disk10, disk11, disk12, disk13) )

c) Detach the first vdev, i.e. raidz(disk1, disk2, disk3, disk4, disk5,
   disk6). This will move the data from raidz(disk1, disk2, disk3, disk4,
   disk5, disk6) to raidz(disk7, disk8, disk9, disk10, disk11, disk12,
   disk13). Of course, it could take many minutes or hours to perform this
   task, depending on many factors: how fast the disks are, how much data
   is written on them, what the current application workload is, etc.

What we need in order to perform approaches (VII) and (VIII) is whole-vdev
removal from ZFS control. In the case of approach (VII) we also need the
possibility of a mirror( raidz(), raidz() ) configuration.

ZFS team comments are welcome ;) !!!
Greetings, Robert Prus
> [root@sun] mkfile 80m /test/1
> [root@sun] mkfile 160m /test/2
>         NAME         STATE     READ WRITE CKSUM
>           mirror     ONLINE       0     0     0
>             /test/1  ONLINE       0     0     0
>             /test/2  ONLINE       0     0     0  32.5K resilvered

It sure would be nice to know the used and raw size of the LUNs (80m
unavailable in /test/2).

> stripe( raidz(disk1, disk2, disk3, disk4), raidz(disk7, disk8, disk9) )
> concat( mirror(disk1, disk2), mirror(disk3) )

Detach / migration, even if it's just the concat(), would be an *extremely*
powerful tool. HP's EVA is the only device I know that can *remove* a disk
from the pool. It's harder to increase the number of disks than it is to
replace existing disks with larger ones...
AIX's LVM also allows you to migrate data off a disk and then detach it
from the volume group. And I also agree that the migration and detach
functions would be very handy. Is there a way to accomplish this through
ZFS's self-healing feature today?

thus spake Rob Logan on Wed, Nov 30, 2005 at 10:57:59AM -0500:
> Detach / migration, even if it's just the concat(), would be an
> *extremely* powerful tool. HP's EVA is the only device I know that can
> *remove* a disk from the pool.

--
Quidquid latine dictum sit, altum videtur
On Wed, Nov 30, 2005 at 02:11:41PM +1100, Jason Ozolins wrote:
> BTW, there doesn't seem to be a way to designate a hot spare for a zpool
> or set of zpools. Is this a planned ZFS feature?

Yes. See the ZFS FAQ on opensolaris.org.

- Eric
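For reference, the hot-spare support that appeared later uses syntax along
these lines (pool and device names are placeholders):

  # add a shared hot spare to an existing pool
  zpool add tank spare c2t0d0

  # or designate one at pool creation time
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 spare c2t0d0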
A feature like that would indeed be great. It's the ONLY reason I'm not
100% sure about RAID-Z yet for my next system... 8-)
On Sun, Jul 13, 2008 at 2:03 AM, Evert Meulie <evert at meulie.net> wrote:
> A feature like that would indeed be great. It's the ONLY reason I'm not
> 100% sure about RAID-Z yet for my next system... 8-)

The explanation that I've heard is that expanding raidz is of less
importance to enterprise users, who are the initial target for ZFS. Most
enterprise users would just attach a new drive tray and add that as another
raid-z to the zpool.

That being said, there is an RFE for expanding the width of a raidz:
http://bugs.opensolaris.org/view_bug.do?bug_id=6718209

-B

--
Brandon High     bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
Is there any change on this? Has any developer tried it?
> Is there any change on this? Has any developer tried it?

"It won't work unless someone implements it." In the case of a RAID-Z, it
means reading and rewriting all data.

The only thing you can do is add a new raidz to the pool. So you start with

        NAME        STATE     READ WRITE CKSUM
        export      ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0

and you add a second raidz:

        zpool add export raidz c3d0 c4d0 c5d0

Casper
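After that add, "zpool status" would show something along these lines (a
sketch following the same naming above, not actual output):

        NAME        STATE     READ WRITE CKSUM
        export      ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c0d0    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0
            c5d0    ONLINE       0     0     0

New writes are then dynamically striped across both raidz vdevs; the data
already in the pool stays where it is.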
Thanks for replying.

I was asking because maybe someone had implemented it as stand-alone
software, outside the repository. And I know that it could mean reading and
rewriting all the data, but I'm installing a personal NAS, so waiting even
a week is understandable.

Thanks again for the reply.