I just bought a new set of disks, and want to move my primary data store over to the new disks. I created a new pool fine, and now I'm trying to use zfs send -R | zfs receive to transfer the data. Here's the error I got:

$ pfexec zfs send -Rpv huge@next | pfexec zfs receive -duvF temp
sending from @ to huge@sync
receiving full stream of huge@sync into temp@sync
sending from @sync to huge@zfs-auto-snap:frequent-2010-04-05-22:00
warning: cannot send 'huge@zfs-auto-snap:frequent-2010-04-05-22:00': no such pool or dataset
sending from @zfs-auto-snap:frequent-2010-04-05-22:00 to huge@zfs-auto-snap:frequent-2010-04-06-00:00
warning: cannot send 'huge@zfs-auto-snap:frequent-2010-04-06-00:00': no such pool or dataset
sending from @zfs-auto-snap:frequent-2010-04-06-00:00 to huge@zfs-auto-snap:frequent-2010-04-06-11:45
warning: cannot send 'huge@zfs-auto-snap:frequent-2010-04-06-11:45': no such pool or dataset
sending from @zfs-auto-snap:frequent-2010-04-06-11:45 to huge@next
warning: cannot send 'huge@next': incremental source (@zfs-auto-snap:frequent-2010-04-06-11:45) does not exist
cannot receive new filesystem stream: invalid backup stream

This process took about 12 hours to do, so it's frustrating that (apparently) snapshots disappearing causes the replication to fail. Perhaps some sort of locking should be implemented to prevent snapshots that will be needed from being destroyed. In the meantime, I disabled all the zfs/auto-snapshot* services. Should this be enough to prevent the send process from failing again, or are there other steps I should take?

Thanks!
Will
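Disabling the auto-snapshot schedules is normally done through SMF. As a rough sketch (the service instance names below are the usual ones on OpenSolaris builds of this era, not taken from the thread, so check svcs -a on your own system first):

$ svcs -a | grep auto-snapshot
$ pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
$ pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
$ pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
$ pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:weekly
$ pfexec svcadm disable svc:/system/filesystem/zfs/auto-snapshot:monthly

Re-enabling them afterwards with svcadm enable restores the normal snapshot schedule.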
On Wed, Apr 7, 2010 at 1:32 PM, Will Murnane <will.murnane at gmail.com> wrote:
> This process took about 12 hours to do, so it's frustrating that
> (apparently) snapshots disappearing causes the replication to fail.
> Perhaps some sort of locking should be implemented to prevent
> snapshots that will be needed from being destroyed.

What release of OpenSolaris are you using? Recent versions have the ability to place holds on snapshots, and doing a send will automatically place holds on the snapshots.

zfs hold tank/foo/bar@now
zfs release tank/foo/bar@now

-B

--
Brandon High : bhigh at freaks.com
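For reference, zfs hold and zfs release take a user-defined tag followed by the snapshot name, and zfs holds lists the tags currently placed on a snapshot; the tag "keep" below is only an illustrative name:

$ zfs hold keep tank/foo/bar@now      # place a user hold named "keep"
$ zfs holds tank/foo/bar@now          # list holds on the snapshot
$ zfs release keep tank/foo/bar@now   # remove the hold again

A snapshot cannot be destroyed while any hold tag remains on it, which is what protects the incremental sources during a long zfs send.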
On Wed, Apr 7, 2010 at 17:51, Brandon High <bhigh at freaks.com> wrote:
> On Wed, Apr 7, 2010 at 1:32 PM, Will Murnane <will.murnane at gmail.com> wrote:
>> This process took about 12 hours to do, so it's frustrating that
>> (apparently) snapshots disappearing causes the replication to fail.
>> Perhaps some sort of locking should be implemented to prevent
>> snapshots that will be needed from being destroyed.
>
> What release of OpenSolaris are you using? Recent versions have the
> ability to place holds on snapshots, and doing a send will
> automatically place holds on the snapshots.

This is on b134:

$ pfexec pkg image-update
No updates available for this image.

There is a "zfs hold" command available, but checking for holds on the snapshot I'm trying to send (I started it again, to see if disabling automatic snapshots helped) doesn't show anything:

$ zfs holds -r huge@next
$ echo $?
0

and applying a recursive hold to that snapshot doesn't seem to hold all its children:

$ pfexec zfs hold -r keep huge@next
$ zfs holds -r huge@next
NAME                 TAG   TIMESTAMP
huge/homes/dan@next  keep  Wed Apr  7 18:02:09 2010
huge@next            keep  Wed Apr  7 18:02:09 2010
$ zfs list -r -t all huge | grep next
huge@next                  204K      -  2.80T  -
huge/backups@next             0      -  42.0K  -
huge/homes@next               0      -  42.9M  -
huge/homes/cnlohr@next    59.9K      -   165G  -
huge/homes/dan@next           0      -  42.0K  -
huge/homes/svnback@next       0      -  46.4M  -
huge/homes/will@next      23.9M      -  95.7G  -

Suggestions? Comments?

Will
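If the recursive hold really is skipping some children, one possible workaround (untested, and the tag name "keep" is arbitrary) is to place a hold on each @next snapshot explicitly rather than relying on -r:

$ zfs list -H -o name -t snapshot -r huge | grep '@next$' | \
      xargs -n1 pfexec zfs hold keep

The same pipeline with zfs release in place of zfs hold removes the tags again once the transfer has finished.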
On Apr 7, 2010, at 5:06 PM, Will Murnane wrote:
> This is on b134:
> $ pfexec pkg image-update
> No updates available for this image.
>
> There is a "zfs hold" command available, but checking for holds on the
> snapshot I'm trying to send (I started it again, to see if disabling
> automatic snapshots helped) doesn't show anything:
> $ zfs holds -r huge@next
> $ echo $?
> 0
> and applying a recursive hold to that snapshot doesn't seem to hold
> all its children:
> $ pfexec zfs hold -r keep huge@next

Hmm, I made a number of fixes in build 132 related to destroying snapshots while sending replication streams. I'm unable to reproduce the 'zfs holds -r' issue on build 133. I'll try build 134, but I'm not aware of any changes in that area.

-Chris