Dennis Clarke
2006-Jun-27 22:30 UTC
[zfs-discuss] This may be a somewhat silly question ...
... but I have to ask.

How do I back this up?

Here is my definition of a "backup":

 (1) I can copy all data and metadata onto some media in a manner that
     verifies the integrity of the data and metadata written.
 (1.1) By "verify" I mean that the data written onto the media is read
     back and compared to the source, and accuracy is assured.
 (2) I can walk away with the media and be able to restore the data onto
     bare metal with nothing other than Solaris 10 Update 2 (or Nevada)
     CDROM sets and reasonable hardware.

I have a copy of the "Solaris ZFS Administration Guide", which is some
document numbered 817-2271. 158 pages and well worth printing out, I think.

Let's suppose that I have a pile of disks arranged in mirrors and
everything seems to be going along swimmingly, thus:

# zpool status zfs0
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zfs0         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c1t13d0  ONLINE       0     0     0

errors: No known data errors
#

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
zfs0                  95.3G  70.8G  27.5K  /export/zfs
zfs0/backup           91.2G  70.8G  88.4G  /export/zfs/backup
zfs0/backup/pasiphae  2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus             786M  70.8G   786M  /opt/lotus
zfs0/zone             3.40G  70.8G  24.5K  /export/zfs/zone
zfs0/zone/common      24.5K  8.00G  24.5K  legacy
zfs0/zone/domino      24.5K  70.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar       3.40G  12.6G  3.40G  /opt/zone/sugar

At this point I attach a tape drive to the machine:

# devfsadm -v -C -c tape
devfsadm[24247]: verbose: symlink /dev/rmt/0 -> ../../devices/sbus@1f,0/SUNW,fas@3,8800000/st@4,0:
  .
  .
  .
devfsadm[24247]: verbose: symlink /dev/rmt/0ubn -> ../../devices/sbus@1f,0/SUNW,fas@3,8800000/st@4,0:ubn

# mt -f /dev/rmt/0lbn status
DLT4000 tape drive:
   sense key(0x6)= Unit Attention   residual= 0   retries= 0
   file no= 0   block no= 0
#

I then create a snapshot as per the documentation:

# zfs list zfs0
NAME   USED  AVAIL  REFER  MOUNTPOINT
zfs0  95.3G  70.8G  27.5K  /export/zfs
# date
Tue Jun 27 18:10:36 EDT 2006
# zfs snapshot zfs0@27_Jun_2006_1810Hrs
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zfs0                      95.3G  70.8G  27.5K  /export/zfs
zfs0@27_Jun_2006_1810Hrs      0      -  27.5K  -
zfs0/backup               91.2G  70.8G  88.4G  /export/zfs/backup
zfs0/backup/pasiphae      2.77G  24.2G  2.77G  /export/zfs/backup/pasiphae
zfs0/lotus                 786M  70.8G   786M  /opt/lotus
zfs0/zone                 3.40G  70.8G  24.5K  /export/zfs/zone
zfs0/zone/common          24.5K  8.00G  24.5K  legacy
zfs0/zone/domino          24.5K  70.8G  24.5K  /opt/zone/domino
zfs0/zone/sugar           3.40G  12.6G  3.40G  /opt/zone/sugar
#

And then I send that snapshot to tape:

# zfs send zfs0@27_Jun_2006_1810Hrs > /dev/rmt/0mbn
#

That command ran for maybe 15 seconds. I seriously doubt that 95GB of data
was written to tape and verified in that time, although I'd like to see the
device and bus that can do it! :-)

I'll destroy that snapshot and try something else here:

# zfs destroy zfs0@27_Jun_2006_1810Hrs

Now perhaps the mystery is to try a different ZFS filesystem:

# date
Tue Jun 27 18:17:33 EDT 2006
# zfs snapshot zfs0/lotus@27_Jun_2006_18:17Hrs

I'll check the tape drive that did "something" above, although I have no
idea what.

# mt -f /dev/rmt/0mbn status
DLT4000 tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
   file no= 1   block no= 0
#

Now I will "send" that stream to the tape:

# zfs send zfs0/lotus@27_Jun_2006_18:17Hrs > /dev/rmt/0mbn

The tape is now doing "something" again and I don't know what.
I would like to think that when it is done I can walk to a totally new
machine and restore the ZFS filesystem zfs0/lotus with no issue, but I
don't see a verify step anywhere here, and I really have no idea what will
happen when I hit the end of that tape.

I am very bothered that my 95GB zfs0 did not go to tape, and I don't know
why not. I think that my itty bitty 786MB zfs0/lotus is actually going to
tape right now (lights are flashing) but I have no feedback and no real
way to tell. Pages 90 and 91 of the manual say that I am doing everything
correctly, but I have a less than satisfied feeling.

Am I missing something here? [1]

Dennis

[1] I am fully prepared for RTFM and outright snickering if deserved :-)
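To make the "verify" requirement concrete, here is a sketch of the kind of step I mean. The compare_stream helper is my own invention, not anything from the manual; the zfs/mt/dd lines are commented out because they need a real pool and tape drive, and they assume the stream is the first file on the tape.

```shell
# Sketch only: one possible verify step for a "zfs send" stream.
# compare_stream is a hypothetical helper; it succeeds only if the two
# byte streams have identical CRC and length per cksum(1).
compare_stream() {
    a=`cksum < "$1" | awk '{print $1, $2}'`
    b=`cksum < "$2" | awk '{print $1, $2}'`
    [ "$a" = "$b" ]
}

# Hypothetical usage (needs a real pool and drive; assumes a fresh tape):
# zfs send zfs0/lotus@27_Jun_2006_18:17Hrs > /var/tmp/lotus.zfs
# dd if=/var/tmp/lotus.zfs of=/dev/rmt/0mbn bs=1024k
# mt -f /dev/rmt/0mbn rewind
# dd if=/dev/rmt/0mn bs=1024k > /var/tmp/readback.zfs
# compare_stream /var/tmp/lotus.zfs /var/tmp/readback.zfs \
#     && echo "verified" || echo "MISMATCH"
```

The scratch copy on disk is the price of the read-back comparison; without it there is nothing to compare the tape against.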
eric kustarz
2006-Jun-28 00:53 UTC
[zfs-discuss] This may be a somewhat silly question ...
> I am very bothered that my 95GB zfs0 did not go to tape and I don't know
> why not. I think that my itty bitty 786MB zfs0/lotus is actually going to
> tape right now ( lights are flashing ) but I have no feedback and no way
> to tell really. Pages 90 and 91 of the manual say that I am doing
> everything correctly but I have a less than satisfied feeling.
>
> Am I missing something here? [1]

It's actually a good question...

Remember that a snapshot is of a filesystem, not a pool - even if the
filesystem you're taking a snapshot of is the root filesystem of the pool.
The snapshot is just the contents of one particular filesystem - not the
contents of the filesystem and all its descendants. So I imagine in your
case that the root filesystem doesn't have much data actually in it, but
its descendants do.

Matt recently introduced 'snapshot -r' to solve part of your problem with:

6373978 want to take lots of snapshots quickly ('zfs snapshot -r')

This lets you take a snapshot of your filesystem and all its descendants -
it's still one snapshot per filesystem, but it's all done in one
transaction group. Here's an example:

fsh-hake# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
swim              167K  7.81G  27.5K  /swim
swim/ball          49K  7.81G  24.5K  /swim/ball
swim/ball/beach  24.5K  7.81G  24.5K  /swim/ball/beach
swim/ming        24.5K  7.81G  24.5K  /swim/ming
fsh-hake# zfs snapshot -r swim@today
fsh-hake# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
swim                    232K  7.81G  27.5K  /swim
swim@today                 0      -  27.5K  -
swim/ball                50K  7.81G  25.5K  /swim/ball
swim/ball@today            0      -  25.5K  -
swim/ball/beach        24.5K  7.81G  24.5K  /swim/ball/beach
swim/ball/beach@today      0      -  24.5K  -
swim/ming              24.5K  7.81G  24.5K  /swim/ming
swim/ming@today            0      -  24.5K  -
fsh-hake#

What's needed after that is a way (such as a script) to 'zfs send' all the
snapshots to the appropriate place.
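A rough sketch of such a script, using the "swim" pool from the example above. snap_names is a made-up helper, and the driver loop is commented out since it needs a real pool and a no-rewind tape device:

```shell
# Sketch: send every snapshot created by "zfs snapshot -r" to tape,
# one "zfs send" per filesystem.  snap_names is a hypothetical helper
# that appends the snapshot tag to each dataset name read from stdin.
snap_names() {
    tag=$1
    while read fs; do
        echo "$fs@$tag"
    done
}

# Hypothetical driver (needs a real pool and tape drive):
# zfs snapshot -r swim@today
# zfs list -H -o name -r swim | snap_names today |
# while read snap; do
#     zfs send "$snap" > /dev/rmt/0mbn   # each send lands in the next tape file
# done
```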
eric

> Dennis
>
> [1] I am fully prepared for RTFM and outright snickering if deserved :-)
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Darren J Moffat
2006-Jun-28 09:09 UTC
[zfs-discuss] This may be a somewhat silly question ...
eric kustarz wrote:
> What's needed after that is a way (such as a script) to 'zfs send' all
> the snapshots to the appropriate place.

And very importantly you need a way to preserve all of the options set on
the ZFS dataset; otherwise, IMO, zfs send is no better than using an
archiver that uses POSIX interfaces (other than possible performance).

-- 
Darren J Moffat
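One possible stopgap for the properties problem, sketched under the assumption that zfs get's -H, -o, and -s options are available on the build in question: save the locally-set properties next to the stream, and turn them back into zfs set commands at restore time. props_to_set is a made-up helper.

```shell
# Sketch: preserve locally-set dataset properties alongside a send stream.
# props_to_set is a hypothetical helper that rewrites tab-separated
# "name<TAB>property<TAB>value" lines into the "zfs set" commands that
# would reapply those properties after a receive.
props_to_set() {
    awk -F'\t' '{ printf "zfs set %s=%s %s\n", $2, $3, $1 }'
}

# Hypothetical save/restore steps (need a real dataset):
# zfs get -H -o name,property,value -s local all zfs0/lotus > lotus.props
# props_to_set < lotus.props > reapply_props.sh   # run after "zfs receive"
```

Filtering on "-s local" keeps only properties explicitly set on the dataset, so inherited and default values are left alone on the receiving side.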
Cindy Swearingen
2006-Jun-28 17:50 UTC
[zfs-discuss] This may be a somewhat silly question ...
Dennis,

You are absolutely correct that the doc needs a step to verify that the
backup occurred.

I'll work on getting this step added to the admin guide ASAP.

Thanks for the feedback...

Cindy

Dennis Clarke wrote:
>
> Am I missing something here? [1]
>
> Dennis
>
> [1] I am fully prepared for RTFM and outright snickering if deserved :-)
Dennis Clarke
2006-Jun-28 19:00 UTC
[zfs-discuss] This may be a somewhat silly question ...
> Dennis,
>
> You are absolutely correct that the doc needs a step to verify
> that the backup occurred.
>
> I'll work on getting this step added to the admin guide ASAP.

Hey, I'm sorry that I triggered more work for you. Never meant to do that.
I was just a little lost as to how to get a good, high-quality backup.

Let me know if I can help in any way.

Dennis
Matthew Ahrens
2006-Jul-27 19:59 UTC
[zfs-discuss] This may be a somewhat silly question ...
On Tue, Jun 27, 2006 at 06:30:46PM -0400, Dennis Clarke wrote:
> ... but I have to ask.
>
> How do I back this up?

The following two RFEs would help you out enormously:

6421958 want recursive zfs send ('zfs send -r')
6421959 want zfs send to preserve properties ('zfs send -p')

As far as RFEs go, these are pretty high priority...

--matt
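For the restore half of Dennis's original requirement (2), the receive side can at least be sketched today: rebuild a pool from the install media, then pipe each tape file into zfs receive. The fs_of/tag_of helpers, the pool layout, and the device names below are all hypothetical.

```shell
# Sketch of a bare-metal restore.  fs_of and tag_of are hypothetical
# helpers for splitting a snapshot name apart when scripting
# per-filesystem receives.
fs_of()  { echo "${1%@*}"; }   # "zfs0/lotus@today" -> "zfs0/lotus"
tag_of() { echo "${1#*@}"; }   # "zfs0/lotus@today" -> "today"

# Hypothetical restore steps (need real disks and a tape drive):
# zpool create zfs0 mirror c0t10d0 c1t10d0
# mt -f /dev/rmt/0mbn rewind
# dd if=/dev/rmt/0mn bs=1024k | zfs receive "`fs_of zfs0/lotus@today`"
```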