Hi,

Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without
backups and recreating the zpool from scratch?

Thanks

This message posted from opensolaris.org
Hi there,

On Mon, 2007-04-02 at 00:37 -0700, homerun wrote:
> Is it possible to convert live 3 disks zpool from raidz to raidz2

Unfortunately not - you'd need to back up your data, destroy the pool,
create the new pool and restore your data.

> And is it possible to add 1 new disk to raidz configuration
> without backups and recreating zpool from scratch.

You can add a disk to a raidz configuration, but then that makes a pool
containing 1 raidz + 1 additional disk in a dynamic stripe configuration
(which ZFS will warn you about, since you then have different fault
tolerance across the vdevs), e.g.

# mkfile 64m 1 2 3 4
# zpool create mypool raidz `pwd`/1 `pwd`/2 `pwd`/3
# zpool status -v mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /tmp/1  ONLINE       0     0     0
            /tmp/2  ONLINE       0     0     0
            /tmp/3  ONLINE       0     0     0

errors: No known data errors
# zpool add mypool `pwd`/4
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is file
# zpool add -f mypool `pwd`/4
# zpool status -v mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            /tmp/1  ONLINE       0     0     0
            /tmp/2  ONLINE       0     0     0
            /tmp/3  ONLINE       0     0     0
          /tmp/4    ONLINE       0     0     0

errors: No known data errors
#

cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
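The backup/destroy/recreate path above might look like the following sketch. All device names, the dataset name, and the backup location are invented for illustration - adapt them to your own pool (and note you'd need at least 4 devices for a useful raidz2):

```shell
# 1. Snapshot the data and save a stream of it somewhere off the pool.
zfs snapshot mypool/data@migrate
zfs send mypool/data@migrate > /backup/mypool-data.zfs

# 2. Destroy the old single-parity pool and recreate it as raidz2.
zpool destroy mypool
zpool create mypool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0

# 3. Restore the saved stream into the new pool.
zfs receive mypool/data < /backup/mypool-data.zfs
```

Repeat the send/receive per filesystem if the pool holds several, and verify the saved stream is readable before destroying anything.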
On Mon, Apr 02, 2007 at 12:37:24AM -0700, homerun wrote:
> Is it possible to convert live 3 disks zpool from raidz to raidz2
> And is it possible to add 1 new disk to raidz configuration without
> backups and recreating zpool from scratch.

The reason that's not possible is because RAID-Z uses a variable stripe
width. This solves some problems (notably the RAID-5 write hole [1]),
but it means that a given 'stripe' over N disks in a raidz1
configuration may contain as many as floor(N/2) parity blocks --
clearly a single additional disk wouldn't be sufficient to grow the
stripe properly.

It would be possible to have a different type of RAID-Z where stripes
were variable-width to avoid the RAID-5 write hole, but the remainder
of the stripe was left unused. This would allow users to add an
additional parity disk (or several, if we ever implement further
redundancy) to an existing configuration, BUT would potentially make
much less efficient use of storage.

Adam

[1] http://blogs.sun.com/bonwick/entry/raid_z

--
Adam Leventhal, Solaris Kernel Development       http://blogs.sun.com/ahl
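The floor(N/2) worst case above can be sketched with a little shell arithmetic. This is a deliberately simplified model of RAID-Z1 space accounting, not actual ZFS code: assume a block of d data sectors on N disks needs ceil(d/(N-1)) parity sectors, one per stripe row its data touches.

```shell
# Simplified RAID-Z1 model (illustration only): parity sectors for a
# block of $1 data sectors on $2 disks = ceil($1 / ($2 - 1)),
# computed here with integer arithmetic.
parity_sectors() {
    echo $(( ($1 + $2 - 2) / ($2 - 1) ))
}

N=5
parity_sectors 8 $N    # one 8-sector block spans 2 rows -> prints 2
parity_sectors 1 $N    # a 1-sector block still needs 1 parity sector

# Worst case: a stream of single-sector blocks pairs every data sector
# with a parity sector, so each N-wide row holds floor(N/2) parity blocks.
echo $(( N / 2 ))      # prints 2 for N=5
```

So under this model nearly half the sectors in a row can be parity for small-block workloads, which is why a single added disk can't simply absorb a second parity column.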
Tim Foster wrote:
>> And is it possible to add 1 new disk to raidz configuration
>> without backups and recreating zpool from scratch.
>
> You can add a disk to a raidz configuration, but then that makes a pool
> containing 1 raidz + 1 additional disk in a dynamic stripe configuration
> (which ZFS will warn you about, since you have different fault tolerance
> then) eg.

You can do that, but then if /tmp/4 fails (in the example), you lose all
of the data on that disk. If you were running raid5, you presumably care
about both data loss and cost -- and this configuration only addresses
cost.

-Luke
On Tue, 2007-04-03 at 10:54 -0400, Luke Scharf wrote:
> Tim Foster wrote:
> > You can add a disk to a raidz configuration, but then that makes a pool
> > containing 1 raidz + 1 additional disk in a dynamic stripe configuration
> > (which ZFS will warn you about, since you have different fault tolerance
> > then) eg.
>
> You can do that, but then if /tmp/4 fails (in the example), you lose
> all of the data on that disk. If you were running raid5, you probably
> care about data-loss and cost -- and this configuration only cares
> about cost.

Exactly - sorry, I thought the implication was clear :-)

--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf