I replayed a bunch of filesystems in order to get dedupe benefits. Only
thing is a couple of them are rolled back to November or so (and I didn't
notice before destroy'ing the old copy).

I used something like:

zfs snapshot pool/fs@dd
zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2
(after done...)
zfs destroy pool/fs
zfs rename pool/fs2/fs pool/fs

What are the failure modes for "partial" send/recv? I've experienced full
rollbacks when the process is canceled. But my case feels like the stream
became truncated and the filesystem ended up partially built? Is this an
expected result?

It does seem like ZFS needs a way to do this kind of operation atomically
in the future, but I'm more interested in understanding if there's
something I did wrong using the current tools, or if there are bugs.

I was running b130 to do these operations, and it seems like previous
attempts in b128 and b129 completed successfully.

mike
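[Editorial note: the sequence above destroys the original before anything
has confirmed the receive actually completed. A minimal sketch of the same
replay with that check added — it assumes bash (for pipefail), uses the
dataset names from the post, and is guarded so it is a no-op on a machine
without ZFS:]

```shell
# Sketch only: same replay as above, but the destructive steps run only
# after the send/recv pipeline exits cleanly. `set -o pipefail` makes the
# pipeline report a failed `zfs send` too, not just a failed `zfs recv`.
set -o pipefail

if command -v zfs >/dev/null 2>&1; then
    zfs snapshot pool/fs@dd &&
    zfs send -Rp pool/fs@dd | zfs recv -d -v pool/fs2 &&
    zfs destroy -r pool/fs &&        # destructive steps gated on a clean recv
    zfs rename pool/fs2/fs pool/fs ||
    echo "replay failed; pool/fs left in place" >&2
fi
```

Without pipefail, the pipeline's exit status is only `zfs recv`'s, so a
sender that dies mid-stream can go unnoticed.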
Michael Herf wrote:
> I replayed a bunch of filesystems in order to get dedupe benefits. Only
> thing is a couple of them are rolled back to November or so (and I
> didn't notice before destroy'ing the old copy).
>
> I used something like:
>
> zfs snapshot pool/fs@dd
> zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2
> (after done...)
> zfs destroy pool/fs
> zfs rename pool/fs2/fs pool/fs
>
> What are the failure modes for "partial" send/recv? I've experienced
> full rollbacks when the process is canceled. But my case feels like the
> stream became truncated and the filesystem ended up partially built? Is
> this an expected result?

Individual receives should be atomic.

> It does seem like ZFS needs a way to do this kind of operation
> atomically in the future, but I'm more interested in understanding if
> there's something I did wrong using the current tools, or if there are
> bugs.

Was there any error output? I always use -v on recursive receives to track
progress.

--
Ian.
I didn't use -v, so I don't know. I just waited until the process exited,
assuming it would succeed or fail. The sizes looked equivalent, so I went
ahead with the "destroy, rename."

For the jobs a couple weeks ago, I turned off the snapshot service. For
this one, I probably left it on. Anything possible there?

The only other thing is that I did "zfs rollback" for a totally unrelated
filesystem in the pool, but I have no idea if this could have affected it.
(I've verified that I got the right one with "zpool history".)

mike

On Tue, Jan 5, 2010 at 2:24 AM, Ian Collins <ian@ianshome.com> wrote:
> Michael Herf wrote:
>>
>> I replayed a bunch of filesystems in order to get dedupe benefits.
>> Only thing is a couple of them are rolled back to November or so (and
>> I didn't notice before destroy'ing the old copy).
>>
>> I used something like:
>>
>> zfs snapshot pool/fs@dd
>> zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2
>> (after done...)
>> zfs destroy pool/fs
>> zfs rename pool/fs2/fs pool/fs
>>
>> What are the failure modes for "partial" send/recv? I've experienced
>> full rollbacks when the process is canceled.
>> But my case feels like the stream became truncated and the filesystem
>> ended up partially built? Is this an expected result?
>
> Individual receives should be atomic.
>
>> It does seem like ZFS needs a way to do this kind of operation
>> atomically in the future, but I'm more interested in understanding if
>> there's something I did wrong using the current tools, or if there are
>> bugs.
>
> Was there any error output? I always use -v on recursive receives to
> track progress.
>
> --
> Ian.
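[Editorial note: on the "snapshot service" question above — a hypothetical
sketch of quiescing the OpenSolaris time-slider auto-snapshot instances
while a long send/recv runs, so no new snapshot appears mid-replay. The
SMF instance names below are the usual ones but should be confirmed with
`svcs | grep auto-snapshot` on the build in question:]

```shell
# Sketch only: temporarily (-t) disable each auto-snapshot instance,
# run the replay, then re-enable. No-op on a system without SMF.
intervals="frequent hourly daily weekly monthly"
if command -v svcadm >/dev/null 2>&1; then
    for i in $intervals; do
        svcadm disable -t "svc:/system/filesystem/zfs/auto-snapshot:$i"
    done
    # ... run the snapshot + send/recv replay here ...
    for i in $intervals; do
        svcadm enable "svc:/system/filesystem/zfs/auto-snapshot:$i"
    done
fi
```

`zpool history`, as used above, is also a good way to confirm afterwards
whether any snapshot was created while the stream was in flight.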