Hi,

How difficult would it be to write some code to change the GUID of a pool?

Thanks

Peter
> How difficult would it be to write some code to change the GUID of a pool?

As a recreational hack, not hard at all.  But I cannot recommend it
in good conscience, because if the pool contains more than one disk,
the GUID change cannot possibly be atomic.  If you were to crash or
lose power in the middle of the operation, your data would be gone.

What problem are you trying to solve?

Jeff
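[For context, the pool GUID Jeff mentions is stored in every label of every
vdev (four label copies per disk), which is why a multi-disk rewrite cannot
be atomic. A quick way to see it, assuming a single-disk pool (the device
path is only an example):

    zdb -l /dev/dsk/c0t0d0s0    # dumps the vdev labels; each one
                                # includes the pool_guid value
]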
On Wed, Jul 2, 2008 at 9:55 AM, Peter Pickford <peter at netremedies.ca> wrote:
> Hi,
>
> How difficult would it be to write some code to change the GUID of a pool?

Not too difficult - I did it some time ago for a customer, who wanted it
badly. I guess you are trying to import pools cloned by the storage
itself. Am I close?

--
Regards,
Cyril
Hi Jeff,

What I'm trying to do is import many copies of a pool that is cloned on a
storage array. ZFS will only import the first disk (there is only one disk
in the pool); any clones have the same pool name and GUID and are ignored.

Is there any chance Sun will support external cloned disks and add an
option to generate a new GUID on import in the near future? Veritas 5.0
supports a similar idea: it allows disks to be tagged and the disk group
to be imported using the tag, with an option to generate new GUIDs.

Cyril has kindly sent me some code, so my immediate problem is probably
resolved, but don't you think this would be better handled as part of
zpool import?

Thanks

Peter

2008/7/2 Jeff Bonwick <Jeff.Bonwick at sun.com>:
>> How difficult would it be to write some code to change the GUID of a pool?
>
> As a recreational hack, not hard at all.  But I cannot recommend it
> in good conscience, because if the pool contains more than one disk,
> the GUID change cannot possibly be atomic.  If you were to crash or
> lose power in the middle of the operation, your data would be gone.
>
> What problem are you trying to solve?
>
> Jeff
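[For reference, zpool import could already rename a pool at import time, and
can pick a pool by its numeric GUID when names collide; what it lacked was a
way to mint a fresh GUID, so clones that duplicate the GUID itself still
conflict. A sketch (pool names are examples):

    zpool import                   # list importable pools with name and GUID
    zpool import master clone1     # rename on import - enough when only the
                                   # name collides
    zpool import <numeric-guid> clone1   # select by GUID - no help when the
                                         # GUID is duplicated across clones
]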
On Wed, Jul 2, 2008 at 2:10 AM, Jeff Bonwick <Jeff.Bonwick at sun.com> wrote:
>> How difficult would it be to write some code to change the GUID of a pool?
>
> As a recreational hack, not hard at all.  But I cannot recommend it
> in good conscience, because if the pool contains more than one disk,
> the GUID change cannot possibly be atomic.  If you were to crash or
> lose power in the middle of the operation, your data would be gone.
>
> What problem are you trying to solve?

I've been trying to figure out how I will do this with iSCSI LUNs cloned
at a storage device.  The basic flow would be as shown below.  Note that
things get really sticky around 4a.  If I don't have a way to change the
GUID, I think that I am stuck cloning via zfs send|receive or cpio.

1. On Vendor X's storage device (X may or may not be Sun)
   a. Create an iSCSI LUN
   b. Grant sun1 access to this LUN
2. On a Solaris box named sun1, create and customize a master zone (or ldom)
   a. Make the LUN available
   b. zpool create master /dev/dsk/$whatever
   c. zfs set mountpoint=/zones/master master
   d. zonecfg -z master create
   e. zonecfg -z master set zonepath=/zones/master
   f. zoneadm -z master install
   g. Customize the master zone as needed
   h. zoneadm -z master detach
   i. zpool export master
3. On the storage device, make clones of the master device
   a. Make many clones of master, making each into a LUN
   b. Provision each LUN to several servers
4. Final customization on one of the servers from 3b
   a. Import each LUN with a new zpool name
   b. Set mountpoint to /zones/$newzonename
   c. Attach zone (fix zonepath, sysidcfg, etc.)
   d. Detach zone
   e. Export zpool
5. Configure HA for each zone
   a. Each zone should be able to fail over independently of the others
   b. Set the start-up host based on load, priorities, etc.
   c. Start all zone workloads

While the various zones are running, steps 3-5 will likely be repeated
from time to time as new zones need to be provisioned.  Notice that in
this arrangement the only thing that holds important data is the shared
storage - each server is a dataless FRU.  If Vendor X supports
deduplication of live data (hint), I only need about 25% of the space
that I would need if I weren't using clones + deduplication.

--
Mike Gerdts
http://mgerdts.blogspot.com/
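[Step 4 above is the sticking point. A sketch of 4a-4e with the commands as
they existed then (pool and zone names are examples) stalls at the first
line for a GUID-duplicate clone:

    zpool import master zone1pool             # 4a: rename on import; refused
                                              # while another device carrying
                                              # the same GUID is active
    zfs set mountpoint=/zones/zone1 zone1pool # 4b
    zoneadm -z zone1 attach                   # 4c: after fixing zonepath,
                                              #     sysidcfg, etc.
    zoneadm -z zone1 detach                   # 4d
    zpool export zone1pool                    # 4e
]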
Does anyone have the code/script to change the GUID of a ZFS pool?
On Thu, Jul 9, 2009 at 8:42 PM, Norbert <no-reply at opensolaris.org> wrote:
> Does anyone have the code/script to change the GUID of a ZFS pool?

I wrote such a tool for a client around a year ago, and that client agreed
to release the code. However, the API I used has since changed and is no
longer available, so you cannot compile it on recent Nevada releases. I may
consider retrofitting it if I have enough time and motivation.

--
Regards,
Cyril
Cyril,

Can you please post the code? I will try to update it and get it to
compile, as I have a customer with this requirement.

Thanks,
JK
Cyril,

I would be very interested in this code as well, as we would like to
accomplish the same thing with ZFS and storage-based replication,
remounting to the same host. Anything you can share would be greatly
appreciated.
Hi Cyril,

I also need to change the GUID of a zpool (again, because cloning at the
LUN level has produced a duplicate). Do you have a solution?

CD
Hi, I am looking along similar lines. My requirement is:

1. Create a zpool on one or many devices (LUNs) from an array (the array
   can be IBM or HP EVA or EMC etc., not SS7000).
2. Create file systems on the zpool.
3. Once the file systems are in use (I/O is happening), take a snapshot at
   the array level:
   a. Freeze the ZFS file system (not required, due to ZFS on-disk
      consistency; source: mailing lists).
   b. Take an array snapshot (say, IBM FlashCopy).
   c. This yields a new snapshot device having the same data and metadata,
      including the same GUID as the source pool.

Now I need a way to change the GUID and pool name of the snapshot device so
that the snapshot device can be made accessible on the same host or an
alternate host (if the LUN is shared). A sketch of the flow appears below.

Could you please post commands for the same?

Regards,
sridhar.
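[A sketch of the intended flow with real ZFS commands; the array step is a
hypothetical placeholder, since every vendor's CLI differs (pool, dataset,
and device names are examples):

    zpool create appool c2t0d0 c2t1d0       # step 1
    zfs create appool/data                  # step 2
    zfs snapshot -r appool@pre-arraycopy    # optional crash-consistent marker
    array_flashcopy lun0 lun9               # step 3b: hypothetical vendor CLI
    # Step 3c: the target LUN now carries an identical pool GUID, which is
    # exactly why it cannot be imported alongside appool on the same host.
]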
sridhar surampudi wrote:
> Hi, I am looking along similar lines. My requirement is:
> [...]
> Now I need a way to change the GUID and pool name of the snapshot device
> so that the snapshot device can be made accessible on the same host or an
> alternate host (if the LUN is shared).
>
> Could you please post commands for the same?

There is no way I know of currently. (There was an unofficial program
floating around to do this on much earlier OpenSolaris versions, but it no
longer works.)

If you have a support contract, raise a call and ask to be added to RFE
6744320.

--
Andrew Gabriel
Are those really your requirements? What is it that you're trying to
accomplish with the data? Make a copy and provide it to another host?

On 11/15/2010 5:11 AM, sridhar surampudi wrote:
> Hi, I am looking along similar lines. My requirement is:
> [...]
> Now I need a way to change the GUID and pool name of the snapshot device
> so that the snapshot device can be made accessible on the same host or an
> alternate host (if the LUN is shared).
On Nov 15, 2010, at 2:11 AM, sridhar surampudi wrote:
> Hi, I am looking along similar lines. My requirement is:
> [...]
> Now I need a way to change the GUID and pool name of the snapshot device
> so that the snapshot device can be made accessible on the same host or an
> alternate host (if the LUN is shared).

Methinks you need to understand a little bit of the architecture. If you
have an exact copy, then it is indistinguishable from the original. If ZFS
(or insert favorite application here) sees two identical views of the data
that are not, in fact, identical, then you break the assumption that the
application makes. By changing the GUID you are forcing them to not be
identical, which is counter to the whole point of "hardware snapshots."
Perhaps what you are trying to do and the method you have chosen are not
compatible.

BTW, I don't understand why you make a distinction between other arrays
and the SS7000 above. If I make a snapshot of a zvol, then it is identical
from the client's perspective, and the same conditions apply.
 -- richard
Actually, I did this very thing a couple of years ago with M9000s and EMC
DMX4s ... with the exception of the "same host" requirement you have (i.e.
the thing that requires the GUID change).

If you want to import the pool back into the host where the cloned pool is
also imported, it's not just the zpool's GUID that needs to be changed, but
those of all the vdevs in the pool too.

When I did some work on OpenSolaris in Amazon S3, I noticed that someone
had built a zpool mirror split utility (before we had the real thing) as a
means to clone boot disk images. IIRC it was just a hack of zdb, but with
the ZFS source out there it's not impossible to take a zpool and change all
its GUIDs; it's just not that trivial (the Amazon case only handled a
single simple mirrored vdev).

Anyway, back to my EMC scenario... The dear data centre staff I had to work
with mandated the use of good old EMC BCVs. I pointed out that ZFS's
"always consistent on disk" promise meant that it would "just work", but
that this required a consistent snapshot of all the LUNs in the pool (a
feature, in addition to basic BCVs, that EMC charged even more for). Hoping
to save money, my customer ignored my advice, and very quickly learned the
error of their ways!

The "always consistent on disk" promise cannot be honoured if the vdevs are
snapshotted at different times. On a quiet system you may get lucky in
simple tests, only to find that a snapshot from a busy production system
causes a system panic on import (although the more recent automatic
uberblock recovery may save you).

The other thing I would add to your procedure is to take a ZFS snapshot
just before taking the storage level snapshot. You could sync this with
quiescing applications, but the real benefit is that you have a known point
in time where all non-sync application level writes are temporally
consistent.

Phil
http://harmanholistix.com

On 15 Nov 2010, at 10:11, sridhar surampudi <toyours_sridhar at yahoo.co.in> wrote:
> Hi, I am looking along similar lines. My requirement is:
> [...]
> Regards,
> sridhar.
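[A minimal sketch of Phil's snapshot suggestion above; the pool and dataset
names are examples, the consistency-group step stands in for whatever the
array vendor provides, and the rollback-on-the-copy step is an inference
from his "known point in time" remark:

    zfs snapshot -r tank@pre-bcv      # marker: all non-sync writes are
                                      # temporally consistent at this point
    # ... trigger the array snapshot here, with all LUNs of the pool in one
    # consistency group so they are captured at the same instant ...
    zfs rollback tank/data@pre-bcv    # on the imported copy, per dataset,
                                      # to return to the marker point
]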