Jason Pfingstmann
2009-Sep-02 07:47 UTC
[zfs-discuss] zfs send <pool>/<volume>@<snapshot> incomplete
I have been doing some tests with ZFS volumes shared through iSCSI and attached to a Windows machine. I created several snapshots and cloned a few of them to attach and test. My zfs list and zfs list -t snapshot output currently looks like this (only the relevant parts):

NAME                         USED  AVAIL  REFER  MOUNTPOINT
datapool                     837G  76.7G    21K  /datapool
datapool/data                833G  76.7G   833G  /datapool/data
datapool/iold1               510M  76.7G   550M  -
datapool/iscsi1124clone     2.52M  76.7G   501M  -
datapool/iscsitest          3.44G  79.6G   510M  -

NAME                             USED  AVAIL  REFER  MOUNTPOINT
datapool/iold1@1136              345K      -   498M  -
datapool/iscsi1124clone@now         0      -   501M  -
datapool/iscsitest@1124          102K      -   297M  -
datapool/iscsitest@1125          140K      -   331M  -
datapool/iscsitest@1126          167K      -   451M  -

Some background on what I've been doing: the current "live" iSCSI target is iscsi1124clone. It was 3 GB; I set its size to 6 GB and reattached it to the server so it would immediately see the additional space, then converted it to a dynamic volume in Windows and expanded it to use the full 6 GB - worked wonderfully!

Now I'd like to clean up the list, so I thought I should "promote" the clone and remove all the old snapshots and test volumes - apparently not... I was reading that you can't promote volumes.

I have about 500 MB of used space on my 6 GB volume, so I figured I could do a zfs send backup, wipe all the test volumes and snapshots, then do a zfs recv. But the zfs send only creates a 3.3 MB file - not the entire thing. I used this for the zfs send:

zfs send -Rv datapool/iscsi1124clone@now > /datapool/data/Temp/test.zfs

What am I doing wrong? Why won't the whole thing copy? I've also tried an incremental send from the origin to @now, but that still doesn't work right...

Thanks for all your help.

-Jason
--
This message posted from opensolaris.org
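[For reference, the backup-and-recreate round trip described above could be sketched like this. This is only a sketch using the dataset names from the listing; the full.zfs filename is illustrative, and none of it has been run against this pool. Note that a zfs send with no -i/-I (and no -R) produces a self-contained full stream of the snapshot, whereas a -R replication stream represents a clone as an incremental from its origin snapshot, which would explain a small output file.]

```shell
# Write a full (non-incremental) stream of the clone's snapshot.
# Without -i/-I or -R, zfs send emits the complete contents of the
# snapshot, not just the delta from the clone's origin.
zfs send datapool/iscsi1124clone@now > /datapool/data/Temp/full.zfs

# ...after destroying the old test volumes and snapshots...

# Recreate the volume from the saved stream.
zfs recv datapool/iscsi1124clone < /datapool/data/Temp/full.zfs
```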