This may be a bit poorly thought through, but in this case I don't
really know enough to think it through. My background is Linux...
there I used a tool called rsnapshot, which used rsync and some
hardlink magic to create versioned backups that take very little
space. By versioned I don't mean as in version control, just copies of
files as they change.

It worked by running what they called an hourly backup and then
rotating that out to daily, weekly, etc. Even the "hourly" term only
really meant a specific kind of run... not necessarily one actually
done hourly. But the punch line was that you always had a full backup
under the directory created by the hourly run. All the backups used
hard links to just create a name, with no duplicate data, in the
rotated directory unless the file had changed. That's where the space
saving came in.

I'm trying to see now how to do something similar with snapshots.
However, I don't really understand how Copy On Write works, even
though I have read some web pages about it. So I wanted to hear some
examples of how zfs users would handle something like that.

The end goal being that a user can go into these snapshots, or
whatever else it may require, and retrieve the same file from a day
ago, a week ago, a month ago, etc. if desired. So there is a running
copy of any changes going back in time, and all done in as little disk
space as possible.
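To make the hard-link trick concrete, this is roughly what an
rsnapshot-style run does under the hood (a sketch using rsync's
--link-dest option; the /home source and /backup target paths are only
illustrative):

  rm -rf /backup/hourly.3                  # oldest copy ages out
  mv /backup/hourly.2 /backup/hourly.3     # shift the older copies down
  mv /backup/hourly.1 /backup/hourly.2
  mv /backup/hourly.0 /backup/hourly.1
  # unchanged files become hard links to the previous copy; only files
  # that actually changed are stored again
  rsync -a --delete --link-dest=/backup/hourly.1 /home/ /backup/hourly.0/

Every hourly.N directory looks like a complete copy, but disk space is
only consumed for files that changed between runs.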
Cake. See below...

Harry Putnam wrote:
> I'm trying to see now how to do something similar with snapshots.
> [...]
> So I wanted to hear some examples of how zfs users would handle
> something like that.

Administration -> Time Slider

> The end goal being that a user can go into these snapshots, or
> whatever else it may require, and retrieve the same file from a day
> ago, a week ago, a month ago, etc. if desired.
>
> So there is a running copy of any changes going back in time, and all
> done in as little disk space as possible.

In Nautilus, the file browser, there is a button called "Restore"
which will show you the views in past time.
 -- richard
Harry Putnam wrote:
> I'm trying to see now how to do something similar with snapshots.
>
> However, I don't really understand how Copy On Write works, even
> though I have read some web pages about it.
>
> So I wanted to hear some examples of how zfs users would handle
> something like that.
> [...]

Recent Nevada and OpenSolaris do this out of the box, and the Gnome
nautilus file manager is integrated with it, providing a time-slider
facility to slide back in time and see what was in a directory in the
past.

http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service

Copy on Write means that only blocks written since a snapshot occupy
extra space, which can achieve better savings than hard links. For
example, if you add something onto the end of a large logfile, only
the changed blocks at the end will be in separate physical disk
blocks. All the rest of the blocks in the file which weren't written
to since the snapshot will be shared. (This requires that the app
appends to the file, and doesn't rewrite the whole file just to add
something on the end.)

 -- Andrew
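One way to see that copy-on-write behaviour for yourself is to take a
snapshot by hand and watch how much space it is charged as the live
filesystem changes (a minimal sketch; the dataset name
rpool/export/home is only an example):

  zfs snapshot rpool/export/home@before
  # ...append a few lines to a large log file under /export/home...
  zfs list -t snapshot -o name,used,referenced rpool/export/home@before
  # USED stays tiny: the snapshot is only charged for blocks that have
  # been rewritten in the live filesystem since it was taken

The appended data lands in new blocks; all the untouched blocks remain
shared between the snapshot and the live file.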
Richard Elling <richard.elling at gmail.com> writes:
> Cake. See below...

At high risk of sounding very stupid... I guess Cake went right over
my head, unless it's short for `piece of cake'. Ok, now you know how
deep-seated that vein of dimness really is.
[...]
> Administration -> Time Slider
>
>> The end goal being that a user can go into these snapshots, or
>> whatever else it may require, and retrieve the same file from a day
>> ago, a week ago, a month ago, etc. if desired.
>>
>> So there is a running copy of any changes going back in time, and all
>> done in as little disk space as possible.
>
> In Nautilus, the file browser, there is a button called "Restore"
> which will show you the views in past time.

I see now... and now I remember reading a bit about it too. But it's
not really like what I was talking about, or at least is not as
fine-grained, unless the Administration -> Time Slider app doesn't
really show how fine it can go.

If I wanted to be able to go back in time with just /etc, for example,
is that also possible?

Andrew Gabriel <agabriel at opensolaris.org> writes:
[...]
> http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service

Thanks for the brief summary of Copy On Write. And that URL above is
quite a good page. Very clear what is happening.
Harry Putnam wrote:
> Richard Elling <richard.elling at gmail.com> writes:
>> Cake. See below...
>
> At high risk of sounding very stupid... I guess Cake went right over
> my head, unless it's short for `piece of cake'.
> [...]

and eat it, too... :-)

>> In Nautilus, the file browser, there is a button called "Restore"
>> which will show you the views in past time.
>
> I see now... and now I remember reading a bit about it too. But it's
> not really like what I was talking about, or at least is not as
> fine-grained, unless the Administration -> Time Slider app doesn't
> really show how fine it can go.

It can go very fine, though you'll need to set the parameters yourself
if you want to use different settings. A few weeks ago, I posted a way
to see the settings which the time slider admin tool won't show. There
is a diminishing return for exposing such complexity, but you might try
an RFE if you feel strongly about it.

http://opensolaris.org/jive/thread.jspa?messageID=353761

> If I wanted to be able to go back in time with just /etc, for example,
> is that also possible?

Possible? Yes, to some degree. But that is probably not worth the
complexity involved. The contents of /etc just don't change very
often. Snapshots are done on a per-filesystem basis, and /etc doesn't
really warrant a separate filesystem -- and I'm not sure you can
separate /etc from /, since it is required early in the boot sequence.

 -- richard
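For per-filesystem control, the automatic snapshot service is (as far
as I recall) driven by a ZFS user property, so individual datasets can
be included or excluded without touching the schedules; the dataset
names below are just examples:

  zfs set com.sun:auto-snapshot=false rpool/swap             # never snapshot this dataset
  zfs set com.sun:auto-snapshot:frequent=false rpool/export  # skip only the frequent schedule
  zfs get -r com.sun:auto-snapshot rpool                     # review what is currently set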
Richard Elling <richard.elling at gmail.com> writes:
> It can go very fine, though you'll need to set the parameters yourself
> if you want to use different settings. A few weeks ago, I posted a way
> to see the settings which the time slider admin tool won't show. There
> is a diminishing return for exposing such complexity, but you might try
> an RFE if you feel strongly about it.
> http://opensolaris.org/jive/thread.jspa?messageID=353761

I meant in terms of per directory, not frequency.

>> If I wanted to be able to go back in time with just /etc, for example,
>> is that also possible?
>
> Possible? Yes, to some degree. But that is probably not worth the
> complexity involved. The contents of /etc just don't change very
> often. Snapshots are done on a per-filesystem basis, and /etc doesn't
> really warrant a separate filesystem -- and I'm not sure you can
> separate /etc from /, since it is required early in the boot sequence.

The part about /etc not changing may be true after you've established a
good setup. But while getting there, I've always found it a good
candidate for frequent backup. That has been on Linux, not Solaris;
maybe OpenSolaris doesn't use /etc as much as Linux systems do.

But then other directories may need more frequent backup than the
filesystem they are on. I guess one could create a filesystem for such
a directory.

And I think I may be getting confused between filesystem snapshots and
BE snapshots. The /etc directory must be included in a BE. So would BE
snapshots cover all of rpool, or just `/', or are they even different?

The whole scheme I see with gnu/bin/df is kind of confusing too. It's
not really even clear whether /etc is part of rpool.

zfs list -r rpool doesn't show /etc, just `/', so I guess not.
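One quick way to settle the "is /etc part of rpool" question is to ask
df which dataset backs a given path (plain commands, nothing beyond
zfs itself assumed):

  df -h /etc                            # the filesystem column names the dataset /etc lives on
  zfs list -o name,mountpoint -r rpool  # map datasets to their mountpoints

On a stock install /etc is part of the root dataset mounted at /, which
is why it never shows up as a separate entry in zfs list.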
You need: zfs list -t snapshot

By default, snapshots aren't shown in zfs list anymore, hence the -t
option.

On Mon, Mar 30, 2009 at 11:41 AM, Harry Putnam <reader at newsguy.com> wrote:
> [...]
> And I think I may be getting confused between filesystem snapshots and
> BE snapshots. The /etc directory must be included in a BE. So would BE
> snapshots cover all of rpool, or just `/', or are they even different?
>
> The whole scheme I see with gnu/bin/df is kind of confusing too. It's
> not really even clear whether /etc is part of rpool.
>
> zfs list -r rpool doesn't show /etc, just `/', so I guess not.
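A couple of related commands that may help when digging through them
(the pool property name is from memory, so treat it as an assumption):

  zfs list -t snapshot -r rpool        # every snapshot under rpool, recursively
  zpool set listsnapshots=on rpool     # make plain 'zfs list' include snapshots again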
> You need: zfs list -t snapshot
>
> By default, snapshots aren't shown in zfs list anymore, hence the -t
> option.

Yikes, I've got dozens of the things... I monkeyed around a bit with
timeslider but thought I had canceled out whatever settings I'd messed
with. Frequent and hourly are both way too often for most of my data.

I think I've kind of painted myself into a corner. I apparently turned
on timeslider... but one pool had some kind of corruption problem that
I fixed by destroying the entire pool with zpool destroy.

But I keep getting errors from timeslider that would put frequent and
hourly into maintenance mode, which meant I couldn't do anything with
the timeslider applet. It seems a little light on robustness... not
able to be used if there is any problem.

Finally I disabled both frequent and hourly... and of course then the
timeslider is unusable because the services are off.

I tried restarting them again after getting the offending pool rebuilt,
involving at least 2 reboots. But now, on restarting them, they go
straight to maintenance mode. And of course the timeslider applet is
useless.

Looking at the log output, it's the same as what I posted earlier in a
different thread:

  www.jtan.com/~reader/slider/disp.cgi

It appears to be related to not being able to open a crontab file. It
doesn't say which, but I see several in /var/spool/cron/crontabs:

  ls -l /var/spool/cron/crontabs
  total 9
  -rw------- 1 root sys    1004 2008-11-19 18:13 adm
  -r-------- 1 root root   1365 2008-11-19 18:30 lp
  -rw------- 1 root root   1241 2009-03-30 17:15 root
  -rw------- 1 root sys    1122 2008-11-19 18:33 sys
  -rw------- 1 root daemon  394 2009-03-30 18:06 zfssnap

So I'm not sure what the problem is.
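For services stuck in maintenance, the usual SMF routine is to read the
explanation and the service log, then clear the state once the
underlying cause is fixed; the FMRI below assumes the standard
auto-snapshot service name:

  svcs -x                                                   # explains why each instance is in maintenance
  svcs -l svc:/system/filesystem/zfs/auto-snapshot:hourly   # full status of one instance
  tail /var/svc/log/system-filesystem-zfs-auto-snapshot:hourly.log
  svcadm clear svc:/system/filesystem/zfs/auto-snapshot:hourly   # only after fixing the cause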
There is a bug where the automatic snapshot service dies if there are
multiple boot environments. Do you have these? I think you can check
with Update Manager.

On Mon, Mar 30, 2009 at 7:20 PM, Harry Putnam <reader at newsguy.com> wrote:
> [...]
> I tried restarting them again after getting the offending pool rebuilt,
> involving at least 2 reboots. But now, on restarting them, they go
> straight to maintenance mode. And of course the timeslider applet is
> useless.
>
> It appears to be related to not being able to open a crontab file.
> [...]
> So I'm not sure what the problem is.
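Checking for multiple boot environments doesn't need the GUI:

  beadm list     # lists every boot environment with its active flags and space used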
Blake <blake.irvin at gmail.com> writes:
> There is a bug where the automatic snapshot service dies if there are
> multiple boot environments. Do you have these? I think you can check
> with Update Manager.

Yeah, I have them, but due to another bug beadm can't destroy/remove
any. Update Manager/BE manager can't delete them either.

I've managed to really screw up this installation, apparently. I can't
even use the Update Manager now. It fails, complaining of a network
problem that doesn't exist. When I press "Update All", the dialog opens
and trails back and forth a bit, then shows:

  Preparing...
  Ensuring Package Manager is up to date...
  Error: Please check the network connection.

I can access the repo just fine with Firefox. Feeding it
http://pkg.opensolaris.org/dev/ it comes right up -- well, the search
page comes up actually:

  http://pkg.opensolaris.org/dev/en/index.shtml
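When the GUI blames the network, the pkg command line usually gives a
more useful error; these are standard pkg(5) commands, though the exact
flags available on a given build are an assumption:

  pkg publisher           # confirm which origin URL the image actually points at
  pkg refresh --full      # re-fetch the catalogs; failures here usually show the real cause
  pkg image-update -nv    # dry-run an image update from the CLI, bypassing the GUI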