BJ Quinn
2009-Jan-28 21:17 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
I have two servers set up, with two drives each. The OS is stored on one drive, and the data on the second drive. I have SNDR replication set up between the two servers for the data drive only.

I'm running out of space on my data drive, and I'd like to do a simple "zpool attach" command to add a second data drive. Of course, this will break my replication unless I can also get the second drive replicating.

What can I do? Do I simply add a second data drive to both servers and format them as I did the first drive (space for bitmap partitions, etc.) and then do a command like the following --

sndradm -ne server1 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd data drive s0] server2 /dev/rdsk/[2nd data drive s1] /dev/rdsk/[2nd data drive s1] ip sync g [some name other than my first synced drive's group name]

Is that all there is to it? In other words, zfs will be happy as long as both drives are being synced? And is this the way to sync them, independently, with a "sndradm -ne" command set up and running for each drive to be replicated, or is there a better way to do it?

Thanks!
-- 
This message posted from opensolaris.org
Jim Dunham
2009-Jan-28 21:44 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
BJ Quinn wrote:

> I have two servers set up, with two drives each. The OS is stored
> on one drive, and the data on the second drive. I have SNDR
> replication set up between the two servers for the data drive only.
>
> I'm running out of space on my data drive, and I'd like to do a
> simple "zpool attach" command to add a second data drive. Of
> course, this will break my replication unless I can also get the
> second drive replicating.
>
> What can I do? Do I simply add a second data drive to both servers
> and format them as I did the first drive (space for bitmap
> partitions, etc.) and then do a command like the following --
>
> sndradm -ne server1 /dev/rdsk/[2nd data drive s0] /dev/rdsk/[2nd
> data drive s0] server2 /dev/rdsk/[2nd data drive s1] /dev/rdsk/[2nd
> data drive s1] ip sync g [some name other than my first synced
> drive's group name]

If you were to enable the SNDR replica before giving the new disk to ZFS, then there is no data to be synchronized, as both disks are uninitialized. Then when the disk is given to ZFS, only the ZFS metadata write I/Os need to be replicated. The means to specify this is "sndradm -nE ...", where 'E' means equal enabled.

The "g [some name other than my first synced drive's group name]" needs to be "g [same name as first synced drive's group name]". The concept here is that all vdevs in a single ZFS storage pool must be write-order consistent. The manner in which SNDR can guarantee that two or more volumes are write-order consistent as they are replicated is to place them in the same I/O consistency group.

> Is that all there is to it? In other words, zfs will be happy as
> long as both drives are being synced? And is this the way to sync
> them, independently, with a "sndradm -ne" command set up and running
> for each drive to be replicated, or is there a better way to do it?
>
> Thanks!
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
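Putting Jim's corrections together, the enable-then-attach sequence might be sketched as follows on the primary. This is a hypothetical example: the device names (c1t2d0s0/c1t2d0s1), the group name datagrp, and the pool name datapool are placeholders, not values from the thread. Note also that "zpool attach" as written in the thread would mirror an existing vdev; to grow a pool's capacity with a new top-level vdev the command is "zpool add".

```shell
# Hypothetical device names -- substitute your own slices.
# c1t2d0s0 = data slice of the new drive, c1t2d0s1 = its bitmap slice.

# 1. Equal-enable (-E) the new volume pair first: both sides are
#    uninitialized, so there is nothing to synchronize yet.
#    Use the SAME group name (g datagrp) as the first replicated drive,
#    so both volumes land in one I/O consistency group.
sndradm -nE server1 /dev/rdsk/c1t2d0s0 /dev/rdsk/c1t2d0s1 \
            server2 /dev/rdsk/c1t2d0s0 /dev/rdsk/c1t2d0s1 \
            ip sync g datagrp

# 2. Only after replication is enabled, hand the data slice to ZFS.
#    From this point SNDR replicates the ZFS metadata writes as they occur.
zpool add datapool c1t2d0s0
```

Because the replica is enabled before ZFS ever writes to the slice, no full synchronization pass is needed.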
BJ Quinn
2009-Jan-28 23:07 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
> The means to specify this is "sndradm -nE ...",
> when 'E' is equal enabled.

Got it. Nothing on the disk, nothing to replicate (yet).

> The manner in which SNDR can guarantee that
> two or more volumes are write-order consistent, as they are
> replicated is place them in the same I/O consistency group.

Ok, so my "sndradm -nE" command with "g [same name as first data drive group]" simply ADDs a set of drives to the group, it doesn't stop or replace the replication on the first set of drives, and in fact in keeping the same group name I even keep the two sets of drives in each server in sync. THEN I run my "zpool attach" command on the non-bitmap slice to my existing pool. Do I have that all right?

Thanks!
Jim Dunham
2009-Jan-29 10:12 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
BJ,

>> The means to specify this is "sndradm -nE ...",
>> when 'E' is equal enabled.
>
> Got it. Nothing on the disk, nothing to replicate (yet).

:-)

>> The manner in which SNDR can guarantee that
>> two or more volumes are write-order consistent, as they are
>> replicated is place them in the same I/O consistency group.
>
> Ok, so my "sndradm -nE" command with "g [same name as first data
> drive group]" simply ADDs a set of drives to the group, it doesn't
> stop or replace the replication on the first set of drives, and in
> fact in keeping the same group name I even keep the two sets of
> drives in each server in sync. THEN I run my "zpool attach" command
> on the non-bitmap slice to my existing pool. Do I have that all
> right?

Yes.

Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
BJ Quinn
2009-Feb-02 18:47 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
What if I ever need to export the pool on the primary server and then import it on the replicated server? Will ZFS know which drives should be part of the stripe even though the device names across servers may not be the same?
Jim Dunham
2009-Feb-02 19:02 UTC
[zfs-discuss] need to add space to zfs pool that's part of SNDR replication
BJ Quinn wrote:

> Then what if I ever need to export the pool on the primary server
> and then import it on the replicated server. Will ZFS know which
> drives should be part of the stripe even though the device names
> across servers may not be the same?

Yes, "zpool import ..." will figure it out.

See a demo at:

http://blogs.sun.com/constantin/entry/csi_munich_how_to_save

Jim
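A failover along the lines Jim describes might be sketched as follows. This is an illustrative outline, not from the thread: the pool name datapool and group name datagrp are placeholders, and the exact sndradm invocation for dropping the group into logging mode should be checked against your AVS release. ZFS identifies pool members by the labels written on each vdev, so differing device names across servers do not matter to "zpool import".

```shell
# On the primary, for a planned switchover, cleanly export the pool.
zpool export datapool

# On the secondary, stop replication for the whole consistency group
# (logging mode) so its volumes become writable, then import the pool.
# ZFS locates the member vdevs by their on-disk labels, regardless of
# the local device names.
sndradm -n -g datagrp -l
zpool import datapool
```

If the pool was not exported first (e.g. a primary crash), "zpool import -f datapool" would be needed to override the stale hostid in the pool labels.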