Hi there, I want to use ZFS with EMC BCVs, with an eye toward saving TCO. Until now I used VxVM, which worked great, because VxVM keeps its database in the private region, so it is no problem to move the disks to another host and import the volumes read-write. My question is whether the same is possible with ZFS? Thanks in advance for your support. coco3 This message posted from opensolaris.org
coco3
2005-Dec-09 08:33 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
Does nobody have an idea about that :-( Has no one moved disks with ZFS on them between hosts? coco3
Casper.Dik at Sun.COM
2005-Dec-09 12:21 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
>Does nobody have an idea about that :-( >Has no one moved disks with ZFS on them between hosts? I'm not sure what the meaning of the sentence is; I recently moved two storage arrays from a Sun Blade 1000 to a Tyan 2885 (2-way AMD Opteron) and all I needed to do was "zpool import". Is that what you're looking for? Casper
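The move Casper describes boils down to the standard export/import sequence (a minimal sketch; the pool name `tank` is hypothetical):

```shell
# On the old host: cleanly export the pool before detaching the arrays.
zpool export tank

# On the new host, after attaching the arrays:
zpool import        # scans devices and lists pools available for import
zpool import tank   # imports the named pool and mounts its filesystems
```

Because the pool was exported cleanly, the import on the new host needs no force option.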
Robert Milkowski
2005-Dec-09 16:04 UTC
[zfs-discuss] zfs and BCV (EMC - business continuance volume)
Hello coco3, Thursday, December 8, 2005, 12:05:19 PM, you wrote: c> Hi there, c> I want to use ZFS with EMC BCVs, with an eye toward saving TCO. c> Until now I used VxVM, which worked great, because VxVM keeps its database in the private region, so it is no problem to move the disks to another host and import the volumes read-write. c> My question is whether the same is possible with ZFS? 1. You should make a snapshot before splitting the BCVs. 2. After you split the BCVs you should be able to import them as a pool on another machine; however, you will probably have to use the force option (as the pool was not exported). I don't know, however, how ZFS will react if both the BCVs and the LUNs to which the BCVs were synchronized are exposed to the same server (in theory, the same pool structure on different disks). I'm not sure whether the device IDs are different in this case or the same. I think it will probably work if only the BCVs are presented to a different host (using the force option). And only the snapshot will be consistent. If you can, you should probably just try it. btw: with arrays with BCV-like functionality it would probably be useful if ZFS allowed freezing the entire pool for a moment (in a consistent state). That would make life easier with BCVs, etc. -- Best regards, Robert mailto:rmilkowski at task.gda.pl
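Robert's two steps can be sketched as follows (the pool, dataset, and TimeFinder device-group names are hypothetical, and the symmir invocation is only indicative of where the array-side split happens):

```shell
# 1. On the production host: snapshot before the split, so the BCV copy
#    carries at least one known point-in-time image.
zfs snapshot tank/data@presplit

# 2. Split the BCVs on the array (EMC TimeFinder, outside of ZFS), e.g.:
#    symmir -g mygroup split

# 3. On the second host: the pool was never exported there, so the
#    import must be forced.
zpool import -f tank
```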
Cyril Plisko
2005-Dec-09 19:20 UTC
[zfs-discuss] zfs and BCV (EMC - business continuance volume)
Robert, On 12/9/05, Robert Milkowski <rmilkowski at task.gda.pl> wrote: > Hello coco3, > btw: with arrays with BCV-like functionality it would probably be > useful if ZFS allowed freezing the entire pool for a moment (in a > consistent state). That would make life easier with BCVs, etc. Isn't it a feature of ZFS that its disk image is always in a consistent state? -- Regards, Cyril
Jason King
2005-Dec-09 21:37 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
> Isn't it a feature of ZFS that its disk image is > always in a consistent > state? > > -- > Regards, > Cyril I actually asked about this a while ago (though in a somewhat more generic fashion, since other arrays, like Hitachi's, have functionality similar to BCVs). You do not need to freeze the pool -- each filesystem is always consistent on disk. Keep in mind what that means. Operations that alter data -- write, unlink -- require multiple backend updates to implement: update the mtime, update the data blocks, etc. With ZFS, "always consistent" means that each such operation will either happen successfully in its entirety, or effectively not happen at all. This is because an interruption while doing the multiple backend updates does not leave the on-disk data inconsistent with ZFS -- since it's copy-on-write, the updates are written to free space; if every backend update succeeds, the result becomes visible, and if one of them fails, they are simply discarded. Since they use new blocks, the old data is still there, untouched. Other filesystems update existing data in place, so an interruption in the middle of one of those backend operations requires more sophisticated methods to hopefully move things back to a consistent state (i.e. logging or fsck).
However, ZFS does not (at present, at least) give you the ability to perform multiple POSIX file operations in a transactional manner (though I suspect it wouldn't be difficult to implement for operations occurring within the same pool). So even if the on-disk state is consistent, with BCVs you could still have files that are out of sync with each other -- there is really no way to communicate that intent to ZFS. What I suspect is meant is the ability to snapshot an entire pool at once instead of each filesystem individually. At least that's my understanding...
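The per-filesystem snapshots Jason mentions might look like this (dataset names are hypothetical; the recursive form was not available in every build of that era):

```shell
# Snapshot each filesystem individually -- these are separate,
# sequential operations, not one pool-wide transaction:
zfs snapshot tank/home@split
zfs snapshot tank/db@split

# Later ZFS releases added a recursive snapshot, taken atomically
# across the whole dataset hierarchy:
zfs snapshot -r tank@split
```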
Torrey McMahon
2005-Dec-09 21:40 UTC
[zfs-discuss] zfs and BCV (EMC - business continuance volume)
Cyril Plisko wrote: > Isn't it a feature of ZFS that its disk image is always in a consistent > state? With itself, sure. However, something like a BCV is not going to take note of the ZFS updates going on during the BCV replication.
Robert Milkowski
2005-Dec-10 17:48 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
Well, it depends on the configuration of your pool. If your pool is built from only one LUN then perhaps it will work. However, if you have multiple LUNs in your pool then you need multiple BCVs -- and in that case you can't actually guarantee that all the BCVs are split at the same time, which could be a problem. Even with one LUN I would do a snapshot anyway. And then there's the problem of already running applications -- they could have open files, etc. -- but these are standard issues with any backup system, not strictly ZFS-related. Additionally, I believe that making a snapshot forces ZFS to flush its caches for the given dataset -- so then you can be sure of what you actually have on the BCVs. The other option is to export the entire pool (IIRC, just unmounting all datasets doesn't flush the caches).
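Robert's export alternative, sketched (pool name hypothetical): exporting flushes pending writes and marks the pool cleanly exported, so a BCV copy taken afterward can be imported elsewhere without the force option:

```shell
# Quiesce or stop applications using the pool first, then:
zpool export tank        # flushes caches and marks the pool exported
# ... split the BCVs on the array ...
zpool import tank        # resume service on the production host
```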
coco3
2005-Dec-12 07:32 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
Hi, thanks for the comments. I have now read the ZFS administration documentation, and I think ZFS will work. I will try testing ZFS next. Bye. coco3
Richard Elling
2005-Dec-13 01:18 UTC
[zfs-discuss] Re: zfs and BCV (EMC - business continuance volume)
> Well, depenmds on configuration of your pool. If your > pool is built only from one LUN then perhaps it will > work. However if you have multiple LUNs in your pool > then you need multiple BCVs - and in that case you > actually can''t guarantee all these BCVs are split at > the same time - this could be a problem.This will be an interesting failure analysis. It is not yet clear to me how ZFS will react to mixed ordering of I/Os to LUNs. But it is something I will be studying. From what I know so far, ZFS will be more tolerant than any other file system I''ve studied. That said, KISS is always the best policy.> Even with one LUN I would do a snapshot anyway.>From a storage management perspective, I agree thatwell-timed snapshots will be a big win. But I do not agree that they are needed to maintain ZFS consistency. More likely, we will have to break out the analysis based on whether you are doing synchronous (ordered) or asynchronous (unordered) remote replication. The answer may be different for each, based upon the various failure modes present. NB, this is a generic replication problem which applies to a wide variety of remote replication products. Since they all tend to have the same features, I''d rather analyze the situation generically. -- richard This message posted from opensolaris.org