sridhar surampudi
2010-Nov-12 13:01 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi,

How can I quiesce / freeze all writes to zfs and zpool if I want to take hardware level snapshots or an array snapshot of all the devices under a pool? Are there any commands, ioctls or APIs available?

Thanks & Regards,
sridhar.

-- This message posted from opensolaris.org
Darren J Moffat
2010-Nov-12 13:53 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
On 12/11/2010 13:01, sridhar surampudi wrote:

> How can I quiesce / freeze all writes to zfs and zpool if I want to take hardware level snapshots or an array snapshot of all the devices under a pool?
> Are there any commands, ioctls or APIs available?

zpool export <pool>
zpool import <pool>

That is the only documented and supported way to do it that I'm aware of, and yes, that does take the pool offline, but that way you can be sure it isn't changing.

The only other way I know of to freeze a pool is for testing purposes only, and if you want to learn about that you need to read the code, because I'm not going to disclose it here in case it is misused.

-- Darren J Moffat
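The export / array-snapshot / import cycle described above could be scripted along these lines. This is a dry-run sketch that only prints each command rather than running it; the pool name "tank" and the `array_snapshot` CLI are placeholders, since the real array snapshot invocation is vendor-specific.

```shell
#!/bin/sh
# Dry-run sketch: prints the command sequence instead of executing it.
# "tank" and "array_snapshot" are placeholders; substitute your pool
# name and your storage array's actual snapshot CLI.
POOL=tank

run() {
    # Swap this echo for direct execution on a real Solaris/ZFS host.
    echo "$@"
}

run zpool export "$POOL"               # pool goes offline; no further writes
run array_snapshot --luns-of "$POOL"   # hypothetical array snapshot command
run zpool import "$POOL"               # bring the pool back online
```

The obvious cost, as noted above, is that the pool is unavailable for the duration of the array snapshot.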
sridhar surampudi
2010-Nov-15 07:45 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi Darren,

Thank you for the details. I am aware of export/import of a zpool, but with zpool export the pool is not available for writes.

Is there a way I can freeze a zfs file system at the file system level? As an example, a JFS file system has the "chfs -a freeze ..." option. So if I am taking a hardware snapshot, I would run chfs at the file system (JFS) level, then fire the commands to take the snapshot at the hardware level (or for the array LUNs) to get a consistent backup. In this case, no down time is required for the file system. Once the snapshot is done, I would unquiesce / thaw the file system.

I am looking for how to do a similar freeze for a zfs file system.

Thanks & Regards,
sridhar.

-- This message posted from opensolaris.org
sridhar surampudi
2010-Nov-15 07:52 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi Darren,

In short, I am looking for a way to freeze and thaw a zfs file system so that for a hardware snapshot I can do:

1. run zfs freeze
2. run a hardware snapshot on the devices belonging to the zpool where the given file system resides
3. run zfs thaw

Thanks & Regards,
sridhar.

-- This message posted from opensolaris.org
Kees Nuyt
2010-Nov-15 08:24 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
On Sun, 14 Nov 2010 23:52:52 PST, sridhar surampudi <toyours_sridhar at yahoo.co.in> wrote:

> Hi Darren,
>
> In short, I am looking for a way to freeze and thaw a zfs file system so that for a hardware snapshot I can do:
> 1. run zfs freeze
> 2. run a hardware snapshot on the devices belonging to the zpool where the given file system resides
> 3. run zfs thaw

The only thing I can think of that comes close is to make a recursive snapshot of the filesystems in the zpool, then run the hardware snapshot. The zfs snapshot will be zfs-transaction consistent, and your hardware snapshot will contain the zfs snapshots in that same state.

Even if you could quiesce zfs, there is no way to make sure the files are logically consistent, because applications can do whatever they like; application transactions don't have to synchronize with zfs transactions. There is no generic mechanism to force applications to flush their buffers/caches. For databases, a snapshot of the zfs on which database transactions are logged is important, but it will also contain unfinished database transactions.

Your plan only more or less works in situations where all relevant applications can be quiesced / forced to write a consistent state, something like this:

- quiesce the apps / databases
- take the zfs snapshot(s)
- thaw the apps
- take the hardware snapshot

-- ( Kees Nuyt ) c[_]
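The quiesce / snapshot / thaw ordering above could be sketched as follows. Again a dry-run that only echoes the commands; "tank", the date-stamped snapshot name, and the `db_quiesce` / `db_thaw` / `array_snapshot` commands are all placeholders standing in for your application's and array's real tooling.

```shell
#!/bin/sh
# Dry-run sketch of the ordering above; echoes commands rather than
# running them. Replace the placeholders with real commands on a host
# that actually has the pool and the applications.
POOL=tank
SNAP="$POOL@backup-$(date +%Y%m%d-%H%M%S)"

run() { echo "$@"; }               # swap for direct execution for real use

run db_quiesce                     # hypothetical: flush/suspend the application
run zfs snapshot -r "$SNAP"        # recursive, transaction-consistent ZFS snapshot
run db_thaw                        # hypothetical: resume the application
run array_snapshot --luns-of "$POOL"   # hypothetical array snapshot CLI
```

The point of the ordering is that the application is only held quiesced for the (fast) ZFS snapshot; the slower array snapshot happens afterwards and simply captures the already-consistent ZFS snapshot on disk.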
Andrew Gabriel
2010-Nov-15 08:40 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
sridhar surampudi wrote:

> Hi Darren,
>
> In short, I am looking for a way to freeze and thaw a zfs file system so that for a hardware snapshot I can do:
> 1. run zfs freeze
> 2. run a hardware snapshot on the devices belonging to the zpool where the given file system resides
> 3. run zfs thaw

Unlike other filesystems, ZFS is always consistent on disk, so there's no need to freeze a zpool to take a hardware snapshot. The hardware snapshot will effectively contain all transactions up to the last transaction group commit, plus all synchronous transactions up to the hardware snapshot. If you want to be sure that all transactions up to a certain point in time are included (for the sake of an application's data), take a ZFS snapshot (which will force a TXG commit), and then take the hardware snapshot.

You will not be able to access the hardware snapshot from the system which has the original zpool mounted, because the two zpools will have the same pool GUID (there's an RFE outstanding on fixing this).

The one thing you do need to be careful of is that with a multi-disk zpool, the hardware snapshot is taken at an identical point in time across all the disks in the zpool. This functionality is usually an extra-charge option in Enterprise storage systems. If the hardware snapshots are staggered across multiple disks, all bets are off, although if you take a zfs snapshot immediately beforehand and you test import/scrub the hardware snapshot (on a different system) immediately (so you can repeat the hardware snapshot again if it fails), maybe you will be lucky.

The right way to do this with zfs is to send/recv the datasets to a fresh zpool, or (S10 Update 9) to create an extra zpool mirror and then split it off with zpool split.

-- Andrew Gabriel
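The two ZFS-native alternatives mentioned in the last paragraph could look roughly like this. Another dry-run sketch that echoes the commands; the pool names "tank" and "backup", the snapshot name, and the device names are placeholders, and the attach/split sequence assumes the S10u9 zpool split feature.

```shell
#!/bin/sh
# Dry-run sketch of the two ZFS-native alternatives; echoes commands.
# Pool names, snapshot name and device names are placeholders.
run() { echo "$@"; }               # swap for direct execution for real use

# Option 1: replicate the datasets to a second, independent pool.
run "zfs send -R tank@backup1 | zfs recv -d backup"

# Option 2 (Solaris 10 Update 9 and later): attach an extra mirror side,
# wait for resilver, then split it off as a separately importable pool.
run zpool attach tank c0t0d0 c0t1d0
run zpool split tank backuppool
```

Either way the copy ends up with its own pool identity, so it avoids the duplicate-GUID problem that a raw hardware snapshot of the same LUNs would have.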
sridhar surampudi
2010-Nov-15 09:50 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi Andrew,

Regarding your point:

> You will not be able to access the hardware snapshot from the system which has the original zpool mounted, because the two zpools will have the same pool GUID (there's an RFE outstanding on fixing this).

Could you please provide more references for the above? I am looking for options for accessing the snapshot device by reconfiguring it with a new pool name (and in turn a new GUID).

Thanks & Regards,
sridhar.

-- This message posted from opensolaris.org
Freddie Cash
2010-Nov-15 18:34 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
On Sun, Nov 14, 2010 at 11:45 PM, sridhar surampudi <toyours_sridhar at yahoo.co.in> wrote:

> Thank you for the details. I am aware of export/import of a zpool, but with zpool export the pool is not available for writes.
>
> Is there a way I can freeze a zfs file system at the file system level? As an example, a JFS file system has the "chfs -a freeze ..." option. So if I am taking a hardware snapshot, I would run chfs at the file system (JFS) level, then fire the commands to take the snapshot at the hardware level (or for the array LUNs) to get a consistent backup. In this case, no down time is required for the file system.
>
> I am looking for how to do a similar freeze for a zfs file system.

You would need to do it at the *pool* level, not the filesystem level. And the only way to guarantee that no writes will be done to a pool is to take the pool offline via zpool export.

One more reason to stop using hardware storage systems and just let ZFS handle the drives directly. :)

-- Freddie Cash
fjwcash at gmail.com
Ian Collins
2010-Nov-15 20:18 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
On 11/15/10 10:50 PM, sridhar surampudi wrote:

> Hi Andrew,
>
> Regarding your point:
>
>> You will not be able to access the hardware snapshot from the system which has the original zpool mounted, because the two zpools will have the same pool GUID (there's an RFE outstanding on fixing this).
>
> Could you please provide more references for the above? I am looking for options for accessing the snapshot device by reconfiguring it with a new pool name (and in turn a new GUID).

Why can't you do things the preferred ZFS way? To quote Andrew:

"The right way to do this with zfs is to send/recv the datasets to a fresh zpool, or (S10 Update 9) to create an extra zpool mirror and then split it off with zpool split."

-- Ian.
sridhar surampudi
2010-Nov-16 06:19 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi,

How would that help for instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level? Currently it is easy, as I can unwind the primary device stack, restore data at the device/LUN level, and recreate the stack.

Thanks & Regards,
sridhar.

-- This message posted from opensolaris.org
Ian Collins
2010-Nov-16 06:59 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
On 11/16/10 07:19 PM, sridhar surampudi wrote:

> Hi,
>
> How would that help for instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level?

Why would you want to? If you are sending snapshots to another pool, you can do instant recovery at the pool level.

> Currently it is easy, as I can unwind the primary device stack, restore data at the device/LUN level, and recreate the stack.

It's probably easier with ZFS to restore data at the pool or filesystem level from snapshots. Trying to work at the device level is just adding an extra level of complexity to a problem already solved.

-- Ian.
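Snapshot-level recovery of the kind described above could be sketched like this. A dry-run that echoes the commands; the dataset "tank/data", the snapshot name "before-upgrade", and the file name are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of filesystem-level recovery from ZFS snapshots;
# echoes commands. Dataset, snapshot and file names are placeholders.
run() { echo "$@"; }               # swap for direct execution for real use

# Roll a whole filesystem back to a snapshot (discards later changes,
# and -r also destroys any snapshots taken after it):
run zfs rollback -r tank/data@before-upgrade

# Or recover a single file by copying it out of the hidden
# .zfs/snapshot directory, with no rollback at all:
run cp /tank/data/.zfs/snapshot/before-upgrade/report.txt /tank/data/
```

This is the sense in which the device/LUN-level restore step is unnecessary with ZFS: the snapshots are already browsable and restorable while the pool stays online.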
Andrew Gabriel
2010-Nov-16 07:54 UTC
[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Sridhar,

You have switched to a new, disruptive filesystem technology, and it has to be disruptive in order to break out of all the issues older filesystems have and give you all the new and wonderful features. However, you are still trying to use old filesystem techniques with it, which is why things don't fit for you, and you are missing out on the more powerful way ZFS presents these features to you.

On 11/16/10 06:59 AM, Ian Collins wrote:

> On 11/16/10 07:19 PM, sridhar surampudi wrote:
>> Hi,
>>
>> How would that help for instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level?
>
> Why would you want to? If you are sending snapshots to another pool, you can do instant recovery at the pool level.

Point-in-time recovery is a feature of ZFS snapshots. What's more, with ZFS you can see all your snapshots online all the time, read and/or recover just individual files or whole datasets, and the storage overhead is very efficient. If you want to recover a whole LUN, that's presumably because you lost the original, and in that case the system won't have the original filesystem mounted.

>> Currently it is easy, as I can unwind the primary device stack, restore data at the device/LUN level, and recreate the stack.
>
> It's probably easier with ZFS to restore data at the pool or filesystem level from snapshots.
>
> Trying to work at the device level is just adding an extra level of complexity to a problem already solved.

I won't claim ZFS couldn't better support use of back-end Enterprise storage, but in this case, you haven't given any use cases where that's relevant.

-- Andrew