Hello Friends,

Can someone please let me know how I can back up the ZFS configuration
which is stored on the operating system?

Thanks
Sachin Palav

This message posted from opensolaris.org
On Thu, Mar 20, 2008 at 7:52 PM, Sachin Palav
<palavsachin27 at indiatimes.com> wrote:
> Hello Friends
>
> Can someone please let me know how I can backup the ZFS configuration
> which is stored on the operating system.

The configuration of the ZFS pool is stored in the pool itself. That
means that the pool is self-contained and can be moved between hosts
while keeping all the configuration (filesystems, volumes and their
properties) intact. Moreover, the configuration is copied onto all the
disks of the pool, so that it can be recreated even after partial
loss, provided, of course, that there was enough redundancy in the
pool to recreate the data.

The zpool configuration is also kept in /etc/zfs/zpool.cache, but that
file is merely a cache, used to give the system a hint during boot
about which pools exist and where to find them. The authoritative
configuration always comes from the disks of the pool in question.

Does that answer your question?

--
Regards,
    Cyril
I think the answer is that the configuration is hidden and cannot be
backed up in a way that lets it be easily restored to a brand spanking
new machine with new disks.

-- mark

Cyril Plisko wrote:
> On Thu, Mar 20, 2008 at 7:52 PM, Sachin Palav
> <palavsachin27 at indiatimes.com> wrote:
>> Can someone please let me know how I can backup the ZFS configuration
>> which is stored on the operating system.
>
> The configuration of the ZFS pool is stored in the pool itself. That
> means that the pool is self-contained and can be moved between hosts,
> while keeping all the configuration (filesystems, volumes and their
> properties) intact. [...]
> zpool configuration is also kept in /etc/zfs/zpool.cache, but that
> file is a cache and is used merely to provide a hint for the system
> during boot on what pools are there and where to find them.
On Thu, Mar 20, 2008 at 11:26 PM, Mark A. Carlson <Mark.Carlson at sun.com> wrote:
>
> I think the answer is that the configuration is hidden
> and cannot be backed up so that it can be easily restored
> to a brand spanking new machine with new disks.

Hm, to which I can add that "zpool history" will keep forever the
zpool command that created the pool initially. And it can be easily
copied and pasted somewhere else, to be used as-is or as a template
for a similar configuration.

--
Regards,
    Cyril
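[Editor's note: the copy/paste step described above can be scripted. A minimal sketch, run against a hypothetical captured "zpool history" transcript; the pool name, devices, and dates are invented for illustration, but the format (timestamp, then command) mirrors the real output.]

```shell
#!/bin/sh
# Hypothetical 'zpool history' transcript saved earlier.
history_output="History for 'mail':
2008-01-16.17:22:36 zpool create mail mirror c0t11d0 c0t12d0
2008-01-22.14:30:43 zpool export mail
2008-03-13.18:00:22 zpool import mail"

# Keep only the 'zpool create' line and strip the leading timestamp,
# leaving a command that can be replayed (or edited) on another host.
create_cmd=$(printf '%s\n' "$history_output" |
    grep ' zpool create ' | sed 's/^[^ ]* //')
printf '%s\n' "$create_cmd"
# prints: zpool create mail mirror c0t11d0 c0t12d0
```

On a live system one would feed `zpool history $POOL` straight into the pipeline instead of the canned variable.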
Hello Cyril,

Thursday, March 20, 2008, 9:51:35 PM, you wrote:

CP> Hm, to which I can add that "zpool history" will keep forever the
CP> zpool command that created the pool initially. And it can be easily
CP> copy/pasted somewhere else in order to be used as is or as a template
CP> for a similar configuration.

Will it? I thought zpool history was a cyclic buffer...

And pool configuration is not only the RAID configuration: it's also
all the datasets and their properties, like share parameters, etc.

--
Best regards,
Robert Milkowski
mailto:milek at task.gda.pl
http://milek.blogspot.com
On Mar 20, 2008, at 3:59 PM, Robert Milkowski wrote:
> CP> Hm, to which I can add that "zpool history" will keep forever the
> CP> zpool command that created the pool initially. And it can be easily
> CP> copy/pasted somewhere else in order to be used as is or as a template
> CP> for a similar configuration.
>
> Will it? I thought zpool history is a cyclic buffer...

It is, except for the initial creation of the pool:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/spa_history.c#52
http://blogs.sun.com/erickustarz/entry/zpool_history

So even with the above, if you add a vdev, slog, or l2arc later on,
that part of the history can be lost, because the history is a ring
buffer. There's an RFE for essentially taking your current 'zpool
status' output and producing a config from it (one that could be used
to create a brand new pool):

    6276640 zpool config

> Then pool configuration is not only raid configuration - it's also all
> datasets and their properties like share parameters, etc.

Very true.

eric
eric kustarz wrote:
> So even with the above, if you add a vdev, slog, or l2arc later on,
> that can be lost via the history being a ring buffer. There's a RFE
> for essentially taking your current 'zpool status' output and
> outputting a config (one that could be used to create a brand new pool):
>     6276640 zpool config

I'm surprised there haven't been more hands raised for this one. It
would be very handy for a change management process, setting up DR
sites, testing, etc.
On Fri, Mar 21, 2008 at 6:53 PM, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> I'm surprised there haven't been more hands raised for this one. It
> would be very handy for a change management process, setting up DR
> sites, testing, etc.

I think that is because of two reasons:

1. It is very simple to [re-]create a zpool even today, without any
additional instrumentation. It is not rocket science.

2. Such a "zpool config" tool may not be very usable for a DR
environment. Why? Chances are that you have FC storage with
multipathing (since you care at all about redundancy). If so, the
device names at the DR site will be different (based on GUID), so
you'll have to craft your "zpool create" command manually anyway,
perhaps using the "zpool config" output as a template; but you have
that already with today's zpool history and zpool status. (Also see #1.)

It can be really useful for testing purposes, where you are recreating
the same configuration numerous times. However, in that case you've
probably scripted it already, long ago.

Like you said, it's a handy feature, but, IMHO, not more than that.

--
Regards,
    Cyril
Cyril Plisko wrote:
> 1. It is very simple to [re-]create zpool even today, without any
> additional instrumentation. It is not a rocket science.

Sure, but it might be tedious depending on the complexity and what you
want to create.

> 2. Such "zpool config" tool may not be very usable for DR environment.
> [...]
> It can be really useful for testing purposes, where you are recreating
> same configuration numerous time. However, in this case you've
> probably scripted it already long time ago.

I'm with you on the multipathing bit, but that can easily be
sed/grep/awked to something different. However, I still think the
ability to dump the current config is beneficial. zpool history shows
you what was done; I wouldn't want to go through every command to see
what the current status is. Also, history only tells me what someone
typed. It doesn't tell me what other changes may have occurred.
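[Editor's note: the "sed/grep/awked to something different" remark can be made concrete. A minimal sketch of remapping device names in a saved create command for a DR host; all device names here are invented, and in real life the right-hand names would come from inspecting the DR host's own devices.]

```shell
#!/bin/sh
# Create command saved from the primary site (hypothetical devices).
create_cmd='zpool create mail mirror c0t11d0 c0t12d0'

# Map each primary-site device to its DR-site counterpart.
dr_cmd=$(printf '%s\n' "$create_cmd" | sed \
    -e 's/c0t11d0/c2t3d0/' \
    -e 's/c0t12d0/c2t2d0/')
printf '%s\n' "$dr_cmd"
# prints: zpool create mail mirror c2t3d0 c2t2d0
```

With long FC device names the sed expressions get uglier, but the mechanics are the same.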
It's more than a handy feature. You either have to write down all the
ZFS configuration you do, or keep a separate log of it, in order to
restore a backed-up ZFS system to a bare metal replacement today.

With this RFE, you could replay the configuration, restore from tape,
and be back up pretty quickly with the same setup you had.

-- mark

Cyril Plisko wrote:
> I think that is because of two reasons:
>
> 1. It is very simple to [re-]create zpool even today, without any
> additional instrumentation. It is not a rocket science.
> 2. Such "zpool config" tool may not be very usable for DR environment.
> [...]
>
> Like you said - that is handy feature, but, IMHO, not more than that.
On Fri, Mar 21, 2008 at 8:53 PM, Mark A. Carlson <Mark.Carlson at sun.com> wrote:
> It's more than a handy feature. You either have to write down all
> the ZFS configuration you do, or keep a separate log of it in order
> to restore a backed up ZFS system to a bare metal replacement today.
>
> With this RFE, you could replay the configuration, restore from tape
> and be back up pretty quickly with the same setup you had.

Mark,

you don't need to convince me; I am absolutely in favor of this
feature. However, I have a friendly comment on the "quick" part: don't
you think it takes significantly more time to load your valuable data
back from tape to disk than it does to create the zpool, whether
manually or automated?

--
Regards,
    Cyril
On Fri, Mar 21, 2008 at 8:04 PM, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> I'm with you on the multipathing bit but that can easily be
> sed/grep/awked to something different. However, I still think the
> ability to dump the current config is beneficial. zpool history shows
> you what was done. I wouldn't want to go through every command to see
> what the current status is. Also history only tells me what someone
> typed. It doesn't tell me what other changes may have occurred.

Agree. It is a nice feature to have; it is just not a ground-breaking
one. FWIW, I feel like we are going in circles :)

--
Regards,
    Cyril
Absolutely. The issue is: will you even *remember* all the ZFS
configuration commands that you have run on your setup when you need
to restore it? That is why I need a file I can back up along with (but
separate from) the ZFS files.

-- mark

Cyril Plisko wrote:
> you don't need to convince me - I am absolutely in favor of this feature.
> However, I have a friendly comment on that "quick" part: don't you
> think that it takes significantly more time to load your valuable data
> from tapes back to disks, than zpool creation, whether manual or
> automated ?
> Also history only tells me what someone typed. It doesn't tell me
> what other changes may have occurred.

What other changes were you thinking about?

eric
If you import a zpool, you only get the history from that point
forward, I believe, so you might not have all the past history, such
as how the pool was originally created. Having a way to dump the
config as an easy way to recreate it is a good feature (as others have
mentioned).

David

On Fri, 2008-03-21 at 13:10 -0700, eric kustarz wrote:
> > Also history only tells me what someone typed. It doesn't tell me
> > what other changes may have occurred.
>
> What other changes were you thinking about?
>
> eric

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Fri, Mar 21, 2008 at 8:10 PM, eric kustarz <eric.kustarz at sun.com> wrote:
> > Also history only tells me what someone typed. It doesn't tell me
> > what other changes may have occurred.
>
> What other changes were you thinking about?

I don't know what Torrey was thinking of, but here's an example pool:

# zpool history
History for 'mail':
2008-01-16.17:22:36 zpool create mail mirror c0t11d0 c0t12d0
2008-01-22.14:30:43 zpool export mail
2008-03-13.18:00:22 zpool import mail

# zpool status
  pool: mail
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mail        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0

As you can see, it was created with one pair of device names, and it
now shows up with another pair. There's no way to tell what the new
device names are just from the zpool history. In the two-disk case
this isn't really a problem, but if I had some disk shelves and moved
them to a different disk controller, for example, it might be a pain
to change all the names to recreate the pool in a failure situation.
Especially with FC-like device names that are a mile long.

So, I'll add an endorsement to the idea of 'zpool config' and request
that it display current device names, not the ones the pool was made
with. Perhaps it could display all child filesystems and their
non-default properties as well? I don't know what to do about
snapshots, but the logical thing to do would probably be to ignore
them. In short, any information about the pool should be reported, but
information in the pool omitted.

Will
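[Editor's note: the current device names Will asks for are already present in the "zpool status" config section, so a rough approximation of the RFE can be scripted with awk. A sketch, fed a canned copy of a config section like the one above; it only understands this simple one-vdev layout, and spares, logs, and cache devices would need extra cases.]

```shell
#!/bin/sh
# Trimmed 'config:' section of a hypothetical 'zpool status' run.
status_config='        NAME        STATE     READ WRITE CKSUM
        mail        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0'

create_cmd=$(printf '%s\n' "$status_config" | awk '
    $1 == "NAME" { next }                           # column header
    NR == 2      { cmd = "zpool create " $1; next } # pool name row
    { cmd = cmd " " $1 }                            # vdevs and devices
    END { print cmd }')
printf '%s\n' "$create_cmd"
# prints: zpool create mail mirror c2t3d0 c2t2d0
```

Note that this emits the *current* names (c2t3d0/c2t2d0), not the ones recorded in the history, which is exactly the distinction Will is drawing.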
On Mar 21, 2008, at 2:03 PM, David W. Smith wrote:
> If you import a zpool you only get the history from that point
> forward I believe, so you might not have all the past history, such
> as how the pool was originally created.

Actually, that isn't correct. It's the history of the pool, not just
since the last import. It even records the fact that you destroyed the
pool :) :

fsh-sole# zpool history
History for 'd':
2008-03-21.15:28:29 zpool create d c0d0
2008-03-21.15:28:32 zpool export d
2008-03-21.15:28:35 zpool import d
2008-03-21.15:28:38 zpool destroy d
2008-03-21.15:28:44 zpool import -D d
fsh-sole#

I was curious what non-admin-induced changes to the pool Torrey was
thinking about. If it's important to remember them, we can add
internal events to track them.

eric
Thanks everybody for the replies; I appreciate all your help.

Here is my understanding from all of the above:
1. The configuration of ZFS is on all ZFS disks, so in case of a disk
   failure there is less chance of losing the ZFS configuration.
2. There is no configuration file for ZFS on the operating system.
3. Currently there is no command that prints the entire configuration
   of ZFS.

Please correct me if I am incorrect.

Thanks
Sachin Palav
On Sat, Mar 22, 2008 at 4:36 PM, Sachin Palav
<palavsachin27 at indiatimes.com> wrote:
> Here is my understanding from all of the above:
> 1. The Configuration of ZFS is on all ZFS disk , so incase of the disk
>    failure there is less chances to loose the configuration for ZFS
> 2. The is no configuration file for ZFS on the Operating System
> 3. Currently there no command that prints the entire configuration of ZFS.

#3 is not entirely correct. There is no command at this time that
generates output directly usable by another command to re-create the
zpool in hand. There *is* a command (zpool status) that prints the
entire configuration of the zpool.

--
Regards,
    Cyril
Sachin Palav <palavsachin27 <at> indiatimes.com> writes:
>
> 3. Currently there no command that prints the entire configuration of ZFS.

Well, there _is_ a command to show all (and only) the dataset
properties that have been manually "zfs set":

$ zfs get -s local all

For the pool properties, zpool has no "-s local" option, but you can
emulate the same behavior with grep:

$ zpool get all $POOLNAME | egrep -v ' default$| -$'

These two commands, plus the zpool status output, give you everything
you need to restore a particular ZFS config from scratch.

-marc
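[Editor's note: to show what marc's egrep is doing, here it is run against a canned, invented "zpool get all" listing; the pattern keys on the SOURCE column, so only the header and the locally-set property survive the filter.]

```shell
#!/bin/sh
# Hypothetical 'zpool get all' output (values invented).
pool_props='NAME  PROPERTY     VALUE  SOURCE
mail  size         68G    -
mail  autoreplace  on     local
mail  delegation   on     default
mail  failmode     wait   default'

# Drop read-only ('-') and default-valued properties, keeping only
# what an administrator explicitly set.
filtered=$(printf '%s\n' "$pool_props" | egrep -v ' default$| -$')
printf '%s\n' "$filtered"
# prints the header line plus the 'autoreplace ... local' line
```

Replaying the surviving lines as "zpool set property=value pool" commands would restore the locally-set pool properties.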
eric kustarz wrote:
> I was curious what non-admin induced changes to the pool that Torrey
> was thinking about. If its important to remember then we can add
> internal events to track them.

There are two facets to this. (At least in my head...)

One is the history of what has happened. A lot of that would be in the
zpool history, but non-admin-related events aren't in there. (Are
they?) I'm thinking of things like checksum errors, devices
leaving/coming back, etc. You can argue that those events would be in
the FMA logs or even /var/adm/messages, but if you want a one-stop
shop it might be a good idea to keep them in the history.

Second, and this is what I was thinking of when I weighed in, is the
current state of the pool. There is obviously some overlap with
tracking all of the events or activities that have occurred in/to the
pool. However, a command to dump the current state that is easily
parsed, could be fed to other scripts, used for diagnosis by service
folks, etc. would come in handy. Think of something the explorer folks
would want to run.
Hi,

I don't have a ZFS box handy right now, but perhaps Sun Explorer would
generate something about ZFS/zpools which details the overall configs.
Just a thought.

On 3/21/08, Sachin Palav <palavsachin27 at indiatimes.com> wrote:
> Can someone please let me know how I can backup the ZFS configuration
> which is stored on the operating system.

--
sengork.blogspot.com
Hello Cyril,

Friday, March 21, 2008, 7:41:37 PM, you wrote:

CP> you don't need to convince me - I am absolutely in favor of this feature.
CP> However, I have a friendly comment on that "quick" part: don't you
CP> think that it takes significantly more time to load your valuable data
CP> from tapes back to disks, than zpool creation, whether manual or
CP> automated ?

As I wrote before, it's not only about the RAID config. What if you
have hundreds of file systems, some with share{nfs|iscsi|cifs} enabled
with specific parameters, then specific file system options, etc.?

With legacy file systems, all you needed was a copy of /etc/vfstab and
your RAID config.

--
Best regards,
Robert Milkowski
mailto:milek at task.gda.pl
http://milek.blogspot.com
On Tue, Mar 25, 2008 at 10:11 AM, Robert Milkowski <milek at task.gda.pl> wrote:
> As I wrote before - it's not only about RAID config - what if you have
> hundreds of file systems, with some share{nfs|iscsi|cifs) enabled with
> specific parameters, then specific file system options, etc.
>
> With legacy file systems all you needed was to have a copy of
> /etc/vfstab and your raid config.

Gee, we are all thinking the same thing, but it looks like we are
arguing... Weird :)

--
Regards,
    Cyril
On Tue, 25 Mar 2008, Robert Milkowski wrote:
> As I wrote before - it's not only about RAID config - what if you have
> hundreds of file systems, with some share{nfs|iscsi|cifs) enabled with
> specific parameters, then specific file system options, etc.

Some ZFS-related "configuration" is done using non-ZFS commands. For
example, a filesystem devoted to a user is typically chowned to that
user and the user's group. I assume that the owner, group, and any
ACLs associated with a filesystem would be preserved so that they are
part of the pool re-creation commands?

When creating ZFS filesystems, the step of creating the pool is
separate from the steps of creating the filesystems. Obviously these
steps need to be either separate, or separable, so that a similar
filesystem layout can be created with different hardware.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us
http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Sengor wrote:
> I don't have a ZFS box handy right now, but perhaps Sun Explorer would
> generate something about ZFS/zpools which details the overall configs.

Explorer 5.11 collects:

    zpool list
    zpool status -v
    zpool iostat -v
    zfs get -rHp all ${pool}

If you think it should collect something else, then please file a CR.

-- richard
Bob Friesenhahn wrote:
> Some zfs-related "configuration" is done using non-ZFS commands. For
> example, a filesystem devoted to a user is typically chowned to that
> user & user's group. I assume that owner, group, and any ACLs
> associated with a filesystem would be preserved so that they are part
> of the pool re-creation commands?
>
> When creating ZFS filesystems, the step of creating the pool is
> separate from the steps of creating the filesystems. Obviously these
> steps need to either be separate, or separable, so that a similar
> filesystem layout can be created with different hardware.

Correct me if I'm not interpreting this discussion properly, but
aren't we discussing reconstruction of the container (the zpool, the
ZFS file systems, and their settings), not the data therein? Modes,
ACLs, extended attributes, and ownership of the data should all come
over with a zfs receive, or with the backup recovery of your choice.

I believe I could write a trivial shell script to take the listings of:

# zpool list pool

and

# zfs list -r -t filesystem,volume -o all pool

and recreate the whole pool, and all the necessary properties.

Jon
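[Editor's note: a sketch of the property-replay half of such a script. It is driven by a canned, invented listing in the shape of "zfs get -r -s local all pool" output (dataset, property, value, source) rather than a live pool; each locally-set property becomes a "zfs set" command that could be replayed on the rebuilt pool.]

```shell
#!/bin/sh
# Hypothetical 'zfs get -r -s local all pool' output.
local_props='pool/home         sharenfs     rw   local
pool/home         compression  on   local
pool/home/robert  quota        10G  local'

# Turn each "dataset property value source" line into a replayable
# 'zfs set' command.
replay=$(printf '%s\n' "$local_props" |
    awk '{ printf "zfs set %s=%s %s\n", $2, $3, $1 }')
printf '%s\n' "$replay"
# prints three commands, e.g.: zfs set sharenfs=rw pool/home
```

The dataset-creation half would be similar: walk the `zfs list` output top-down and emit a `zfs create` per dataset before replaying the properties.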