amber ~ # zpool list data
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data   930G   295G  635G  31%  1.00x  ONLINE  -

amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it

amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
cannot receive: specified fs (ezdata/data) does not exist

--
This message posted from opensolaris.org
> amber ~ # zpool list data
> NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
> data   930G   295G  635G  31%  1.00x  ONLINE  -
>
> amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
> cannot receive new filesystem stream: destination 'ezdata' exists
> must specify -F to overwrite it
>
> amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
> cannot receive: specified fs (ezdata/data) does not exist

You're confused because one situation says "cannot receive ... exists" and the other situation says "cannot receive ... does not exist", right? Why do you show us the zpool list? Because of the zpool list, I am not sure I understand what you're asking.
Sorry if my question was confusing. Yes, I'm wondering about the catch-22 resulting from the two errors: it means we are not able to send/receive a pool's root filesystem without using -F. The zpool list was just meant to show that it was a whole pool...

Bruno
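To restate the catch-22 with the names from the thread (these commands only make sense on a live system with the two pools, so this is a sketch, not a transcript):

```shell
# Attempt 1: receive the replicated pool stream at the target pool root.
# Fails because the target pool's root dataset always exists:
zfs send -RD data@prededup | zfs recv -d ezdata
# -> cannot receive new filesystem stream: destination 'ezdata' exists
#    must specify -F to overwrite it

# Attempt 2: receive under a child name instead.
# Fails because with -d the named filesystem must already exist:
zfs send -RD data@prededup | zfs recv -d ezdata/data
# -> cannot receive: specified fs (ezdata/data) does not exist
```

So without -F there is no destination name that works for a pool's root filesystem: the root itself "exists", and any child "does not exist".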
Actually, I succeeded using:

# zfs create ezdata/data
# zfs send -RD data@prededup | zfs recv -duF ezdata/data

I still have to check the result, though.
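For reference, the working sequence with the flags spelled out (per zfs(1M); assumes the ezdata pool already exists):

```shell
# Pre-create the target filesystem so the receive has somewhere to land:
zfs create ezdata/data

# send -R : replicate the whole dataset tree below the snapshot
# send -D : produce a deduplicated stream (later removed in OpenZFS)
# recv -d : name received datasets from the source path, pool name stripped
# recv -u : do not mount the received filesystems
# recv -F : force a rollback/overwrite of the target before receiving
zfs send -RD data@prededup | zfs recv -duF ezdata/data
```

Note that -F here rolls the pre-created, still-empty ezdata/data back as needed, which is why the combination succeeds where the earlier attempts failed.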
I have an additional problem, which worries me. I tried different ways of sending/receiving my data pool. I took some snapshots, sent them, then destroyed them using destroy -r. AFAIK this should not have affected the filesystem's _current_ state, or am I misled?

Now I succeeded in sending a snapshot and receiving it like this:

# zfs create ezdata/data
# zfs send -RD data@prededup | zfs recv -duF ezdata/data

I'm seeing the newer versions on the source dataset, but older versions in the snapshot and the copied filesystems. Any idea how this can have happened?

amber ~ # ll -d /data/.zfs/snapshot/prededup/postgres84_64
drwxr-xr-x 2 root root 2 Nov 29 18:20 /data/.zfs/snapshot/prededup/postgres84_64
amber ~ # ll -d /data/postgres84_64
drwx------ 12 postgres postgres 21 Feb  6 22:03 /data/postgres84_64
amber ~ # ll -d /ezdata/data/postgres84_64
drwxr-xr-x 2 root root 2 Nov 29 18:20 /ezdata/data/postgres84_64
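One way to check whether anything was actually lost is to compare the received copy against the snapshot it was generated from, rather than against the live dataset (paths from the thread; needs the live system):

```shell
# The received filesystem was built from data@prededup, so it should
# match the snapshot's view of the directory, not the live one:
diff -r /data/.zfs/snapshot/prededup/postgres84_64 \
        /ezdata/data/postgres84_64

# List the snapshots present on each side to confirm what was replicated:
zfs list -t snapshot -r data
zfs list -t snapshot -r ezdata/data
```

If the diff is empty, the copy faithfully reflects the prededup snapshot, and the "older versions" are simply the state of the filesystem when that snapshot was taken (Nov 29), while the live /data/postgres84_64 has moved on since (Feb 6).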