Brandon High
2011-Jan-04 06:22 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
On an snv_151a system I'm trying to do a send of rpool, and it works when
using -n, but when I actually try to receive it's failing.

Scrubs pass without issue; it's just the recv that fails.

# zfs send -R rpool@copy | zfs recv -n -vduF radar/foo
would receive full stream of rpool@copy into radar/foo@copy
would receive full stream of rpool/ROOT@copy into radar/foo/ROOT@copy
cannot create 'radar/foo/ROOT/snv_151a@copy': parent does not exist

# zfs send -R rpool@copy | zfs recv -vduF radar/foo
receiving full stream of rpool@copy into radar/foo@copy
cannot receive new filesystem stream: invalid backup stream

zstreamdump shows this:

BEGIN record
        hdrtype = 2
        features = 4
        magic = 2f5bacbac
        creation_time = 0
        type = 0
        flags = 0x0
        toguid = 0
        fromguid = 0
        toname = rpool@copy
nvlist version: 0
        tosnap = copy
        fss = (embedded nvlist)
nvlist version: 0
        0x4f65ffc5611d9a16 = (embedded nvlist)
nvlist version: 0
        name = rpool
        parentfromsnap = 0x0
        props = (embedded nvlist)
nvlist version: 0
        copies = 0x1
        compression = 0x2
        dedup = 0x2
        sync = 0x2
        com.sun:auto-snapshot = false
        org.opensolaris.caiman:install = ready
        atime = 0x0
        (end props)

        snaps = (embedded nvlist)
nvlist version: 0
        copy = 0xf7b725c8fb4909cc
        (end snaps)

        snapprops = (embedded nvlist)
nvlist version: 0
        copy = (embedded nvlist)
nvlist version: 0
        (end copy)

        (end snapprops)

        (end 0x4f65ffc5611d9a16)

        0x26c5ffeede47502 = (embedded nvlist)
nvlist version: 0
        name = rpool/ROOT
        parentfromsnap = 0xf7b725c8fb4909cc
        props = (embedded nvlist)
nvlist version: 0
        canmount = 0x0
        mountpoint = legacy
        (end props)

        snaps = (embedded nvlist)
nvlist version: 0
        copy = 0x5ac3999f0be01307
        (end snaps)

        snapprops = (embedded nvlist)
nvlist version: 0
        copy = (embedded nvlist)
nvlist version: 0
        (end copy)

        (end snapprops)

        (end 0x26c5ffeede47502)

        0x90cb12b83fc2546a = (embedded nvlist)
nvlist version: 0
        name = rpool/ROOT/snv_151a
        parentfromsnap = 0x5ac3999f0be01307
        props = (embedded nvlist)
nvlist version: 0
        org.opensolaris.libbe:uuid = ac29b2b5-fe1f-6c55-ab3b-ed3e9e9d53db
        mountpoint = /
        canmount = 0x2
        org.opensolaris.libbe:policy = static
        (end props)

        snaps = (embedded nvlist)
nvlist version: 0
        copy = 0xec9bfc4eddeadb9c
        (end snaps)

        snapprops = (embedded nvlist)
nvlist version: 0
        copy = (embedded nvlist)
nvlist version: 0
        (end copy)

        (end snapprops)

        (end 0x90cb12b83fc2546a)

        (end fss)

END checksum = 428d2b3e38/3a24e0eff5bb/21ff89e9b44e75/f3ec1d43d884647
BEGIN record
        hdrtype = 1
        features = 4
        magic = 2f5bacbac
        creation_time = 4d218647
        type = 2
        flags = 0x0
        toguid = f7b725c8fb4909cc
        fromguid = 0
        toname = rpool@copy
END checksum = a4a5178c744c/a8332b7147dc247c/6d134a0269a1a1dd/f88d9b05376123dd
BEGIN record
        hdrtype = 1
        features = 4
        magic = 2f5bacbac
        creation_time = 4d218647
        type = 2
        flags = 0x0
        toguid = 5ac3999f0be01307
        fromguid = 0
        toname = rpool/ROOT@copy
END checksum = 30116b3946/10d44de627105/3aa95a2a944e4ff/6f4e3100a0b41f08
BEGIN record
        hdrtype = 1
        features = 4
        magic = 2f5bacbac
        creation_time = 4d218647
        type = 2
        flags = 0x0
        toguid = ec9bfc4eddeadb9c
        fromguid = 0
        toname = rpool/ROOT/snv_151a@copy
END checksum = 166011b53462ace6/33caba98af971c80/effff489aebfb24c/e227a3e8e2169c57
END checksum = 0/0/0/0
SUMMARY:
        Total DRR_BEGIN records = 4
        Total DRR_END records = 5
        Total DRR_OBJECT records = 195329
        Total DRR_FREEOBJECTS records = 12190
        Total DRR_WRITE records = 203628
        Total DRR_FREE records = 219160
        Total DRR_SPILL records = 0
        Total records = 630316
        Total write size = 5670396416 (0x151fb6200)
        Total stream length = 5919048188 (0x160cd81fc)

--
Brandon High : bhigh at freaks.com
Brandon High
2011-Jan-04 20:52 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
On Mon, Jan 3, 2011 at 10:22 PM, Brandon High <bhigh at freaks.com> wrote:
> On an snv_151a system I'm trying to do a send of rpool, and it works when
> using -n, but when I actually try to receive it's failing.

I'm able to receive all the datasets other than the root rpool, eg:
rpool/ROOT and rpool/ROOT/snv_151a work fine.

I'm attaching the output of 'zfs send rpool@copy | zstreamdump -Cv >
zstreamdump' since it gives more detailed information.

-B

--
Brandon High : bhigh at freaks.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: zstreamdump.zip
Type: application/zip
Size: 1187 bytes
Desc: not available
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20110104/a31fb538/attachment.zip>
Cindy Swearingen
2011-Jan-05 17:44 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
Hi Brandon,

I'm not the right person to evaluate your zstreamdump output, but I
can't reproduce this error on my b152 system, which is as close as I
could get to b151a. See below.

Are the rpool and radar pool versions reasonably equivalent?

In your follow-up, I think you are saying that rpool@copy is a recursive
snapshot and you are able to receive the individual rpool snapshots. You
just can't receive the recursive snapshot. Is this correct?

Thanks,

Cindy

# zfs snapshot -r rpool@0105
# zpool set listsnapshots=on rpool
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         8.51G  58.4G    93K  /rpool
rpool@0105                        0      -    93K  -
rpool/ROOT                    4.50G  58.4G    31K  legacy
rpool/ROOT@0105                   0      -    31K  -
rpool/ROOT/solaris            4.50G  58.4G  4.45G  /
rpool/ROOT/solaris@install    45.4M      -  4.32G  -
rpool/ROOT/solaris@0105           0      -  4.45G  -
rpool/dump                    1.94G  58.4G  1.94G  -
rpool/dump@0105                   0      -  1.94G  -
rpool/export                  96.5K  58.4G    32K  /export
rpool/export@0105                 0      -    32K  -
rpool/export/home             64.5K  58.4G    32K  /export/home
rpool/export/home@0105            0      -    32K  -
rpool/export/home/admin       32.5K  58.4G  32.5K  /export/home/admin
rpool/export/home/admin@0105      0      -  32.5K  -
rpool/swap                    2.07G  60.5G  13.8M  -
rpool/swap@0105                   0      -  13.8M  -

# zfs send -Rv rpool@0105 | zfs recv -vduF bkpool/snaps
sending from @ to rpool@0105
receiving full stream of rpool@0105 into bkpool/snaps@0105
sending from @ to rpool/ROOT@0105
received 114KB stream in 2 seconds (57.0KB/sec)
receiving full stream of rpool/ROOT@0105 into bkpool/snaps/ROOT@0105
sending from @ to rpool/ROOT/solaris@install
received 46.3KB stream in 2 seconds (23.1KB/sec)
receiving full stream of rpool/ROOT/solaris@install into bkpool/snaps/ROOT/solaris@install
sending from @install to rpool/ROOT/solaris@0105
received 4.49GB stream in 170 seconds (27.0MB/sec)
receiving incremental stream of rpool/ROOT/solaris@0105 into bkpool/snaps/ROOT/solaris@0105
sending from @ to rpool/dump@0105
received 254MB stream in 13 seconds (19.5MB/sec)
receiving full stream of rpool/dump@0105 into bkpool/snaps/dump@0105
sending from @ to rpool/export@0105
received 1.94GB stream in 45 seconds (44.2MB/sec)
receiving full stream of rpool/export@0105 into bkpool/snaps/export@0105
sending from @ to rpool/export/home@0105
received 47.9KB stream in 2 seconds (23.9KB/sec)
receiving full stream of rpool/export/home@0105 into bkpool/snaps/export/home@0105
sending from @ to rpool/export/home/admin@0105
received 47.9KB stream in 2 seconds (23.9KB/sec)
receiving full stream of rpool/export/home/admin@0105 into bkpool/snaps/export/home/admin@0105
sending from @ to rpool/swap@0105
received 49.9KB stream in 1 seconds (49.9KB/sec)
receiving full stream of rpool/swap@0105 into bkpool/snaps/swap@0105
received 14.5MB stream in 4 seconds (3.63MB/sec)

# zpool set listsnapshots=on bkpool
# zfs list -r bkpool
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
bkpool                               8.50G  58.4G    32K  /bkpool
bkpool/snaps                         8.50G  58.4G    93K  /bkpool/snaps
bkpool/snaps@0105                        0      -    93K  -
bkpool/snaps/ROOT                    4.49G  58.4G    31K  legacy
bkpool/snaps/ROOT@0105                   0      -    31K  -
bkpool/snaps/ROOT/solaris            4.49G  58.4G  4.45G  /
bkpool/snaps/ROOT/solaris@install    44.7M      -  4.32G  -
bkpool/snaps/ROOT/solaris@0105           0      -  4.45G  -
bkpool/snaps/dump                    1.94G  58.4G  1.94G  -
bkpool/snaps/dump@0105                   0      -  1.94G  -
bkpool/snaps/export                  96.5K  58.4G    32K  /export
bkpool/snaps/export@0105                 0      -    32K  -
bkpool/snaps/export/home             64.5K  58.4G    32K  /export/home
bkpool/snaps/export/home@0105            0      -    32K  -
bkpool/snaps/export/home/admin       32.5K  58.4G  32.5K  /export/home/admin
bkpool/snaps/export/home/admin@0105      0      -  32.5K  -
bkpool/snaps/swap                    2.07G  60.5G  13.8M  -
bkpool/snaps/swap@0105                   0      -  13.8M  -

On 01/03/11 23:22, Brandon High wrote:
> On an snv_151a system I'm trying to do a send of rpool, and it works when
> using -n, but when I actually try to receive it's failing.
>
> Scrubs pass without issue; it's just the recv that fails.
>
> [quoted recv transcript and zstreamdump output trimmed; see Brandon's original message]
Brandon High
2011-Jan-05 19:21 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
On Wed, Jan 5, 2011 at 9:44 AM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> In your follow-up, I think you are saying that rpool@copy is a recursive
> snapshot and you are able to receive the individual rpool snapshots. You
> just can't receive the recursive snapshot. Is this correct?

Sorry, I didn't really explain that very well. Both pools are version
31, and the zfs version is 5.

The snapshot has been created recursively via:
# zfs snapshot -r rpool@copy

# zfs list -t snapshot -r rpool
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool@copy                    0      -  3.21M  -
rpool/ROOT@copy               0      -  24.5K  -
rpool/ROOT/snv_151a@copy   1.76M      -  5.61G  -

Trying to send it recursively fails:
# zfs send -R rpool@copy | zfs recv -n -vduF radar/foo

Sending each of the recursively created snapshots, one at a time, works:
# for snap in $( zfs list -t snapshot -r -H -o name rpool ) ; do zfs
send $snap | zfs recv -vduF radar/foo ; done
receiving full stream of rpool@copy into radar/foo@copy
cannot receive new filesystem stream: invalid backup stream
receiving full stream of rpool/ROOT@copy into radar/foo/ROOT@copy
received 10.2KB stream in 1 seconds (10.2KB/sec)
receiving full stream of rpool/ROOT/snv_151a@copy into radar/foo/ROOT/snv_151a@copy
received 5.51GB stream in 183 seconds (30.8MB/sec)

It looks like only the rpool@copy snapshot or the rpool dataset are
bad. All the other datasets seem to work fine.

-B

--
Brandon High : bhigh at freaks.com
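[Editorial note: the one-at-a-time loop above keeps going after a failed stream. A small wrapper that stops at the first rejected receive makes it easier to see exactly which dataset's stream is bad. This is only a sketch using the zfs commands shown in the thread; `send_each_snapshot` and its arguments are illustrative, not an existing tool.]

```shell
#!/bin/sh
# send_each_snapshot SRC DEST
# Send every snapshot under SRC individually into DEST, stopping at the
# first stream that `zfs recv` rejects. A sketch, not a tested tool; it
# assumes the snapshots were created with `zfs snapshot -r`.
send_each_snapshot() {
    src=$1
    dest=$2
    for name in $(zfs list -t snapshot -r -H -o name "$src"); do
        # Pipe each individual full stream into the target, as in the
        # thread's workaround loop.
        if ! zfs send "$name" | zfs recv -vduF "$dest"; then
            echo "recv failed for $name" >&2
            return 1
        fi
    done
}
```

With the pools above this would be invoked as `send_each_snapshot rpool radar/foo`.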
Cindy Swearingen
2011-Jan-05 19:57 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
Okay. We are trying again to reproduce this on b151a.

In the meantime, you could rule out a problem with zfs send/recv on your
system if you could create another non-BE dataset with descendent
datasets, create a recursive snapshot, and retry the recursive send/recv
operation.

Thanks,

Cindy

On 01/05/11 12:21, Brandon High wrote:
> On Wed, Jan 5, 2011 at 9:44 AM, Cindy Swearingen
> <cindy.swearingen at oracle.com> wrote:
>> In your follow-up, I think you are saying that rpool@copy is a recursive
>> snapshot and you are able to receive the individual rpool snapshots. You
>> just can't receive the recursive snapshot. Is this correct?
>
> Sorry, I didn't really explain that very well. Both pools are version
> 31, and the zfs version is 5.
>
> The snapshot has been created recursively via:
> # zfs snapshot -r rpool@copy
>
> # zfs list -t snapshot -r rpool
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> rpool@copy                    0      -  3.21M  -
> rpool/ROOT@copy               0      -  24.5K  -
> rpool/ROOT/snv_151a@copy   1.76M      -  5.61G  -
>
> Trying to send it recursively fails:
> # zfs send -R rpool@copy | zfs recv -n -vduF radar/foo
>
> Sending each of the recursively created snapshots, one at a time, works:
> # for snap in $( zfs list -t snapshot -r -H -o name rpool ) ; do zfs
> send $snap | zfs recv -vduF radar/foo ; done
> receiving full stream of rpool@copy into radar/foo@copy
> cannot receive new filesystem stream: invalid backup stream
> receiving full stream of rpool/ROOT@copy into radar/foo/ROOT@copy
> received 10.2KB stream in 1 seconds (10.2KB/sec)
> receiving full stream of rpool/ROOT/snv_151a@copy into
> radar/foo/ROOT/snv_151a@copy
> received 5.51GB stream in 183 seconds (30.8MB/sec)
>
> It looks like only the rpool@copy snapshot or the rpool dataset are
> bad. All the other datasets seem to work fine.
>
> -B
Brandon High
2011-Jan-05 21:01 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
On Wed, Jan 5, 2011 at 11:57 AM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> In the meantime, you could rule out a problem with zfs send/recv on your
> system if you could create another non-BE dataset with descendent
> datasets, create a recursive snapshot, and retry the recursive send/recv
> operation.

That appears to work fine on this system and on another running 151a.

# zfs snapshot -r radar/export/home@copy
# zfs send -R radar/export/home@copy | zfs recv -duF radar/bar
# echo $?
0

Trying to receive the rpool in a liveusb environment fails too.

-B

--
Brandon High : bhigh at freaks.com
Cindy Swearingen
2011-Jan-05 21:16 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
We installed b151a and couldn't reproduce a failed receive of a
recursive root pool snapshot, and we also tested on b152 and b155.

The original error message isn't very helpful, but your test below
points to a problem in your root pool environment.

You might review your zpool history -il rpool output for clues.

Thanks,

Cindy

On 01/05/11 14:01, Brandon High wrote:
> On Wed, Jan 5, 2011 at 11:57 AM, Cindy Swearingen
> <cindy.swearingen at oracle.com> wrote:
>> In the meantime, you could rule out a problem with zfs send/recv on your
>> system if you could create another non-BE dataset with descendent
>> datasets, create a recursive snapshot, and retry the recursive send/recv
>> operation.
>
> That appears to work fine on this system and on another running 151a.
>
> # zfs snapshot -r radar/export/home@copy
> # zfs send -R radar/export/home@copy | zfs recv -duF radar/bar
> # echo $?
> 0
>
> Trying to receive the rpool in a liveusb environment fails too.
>
> -B
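[Editorial note: `zpool history -il` output for a year-old pool can run to thousands of lines. A quick filter for the kinds of events mentioned in this thread (upgrades, device swaps, imports) narrows the search. A sketch only; `history_suspects` is a hypothetical helper and the keyword list is illustrative, not exhaustive.]

```shell
#!/bin/sh
# history_suspects POOL
# Print zpool history lines mentioning events that commonly precede
# oddities like a rejected send stream: pool upgrades, device
# replacement/attachment, imports, and prior receives.
history_suspects() {
    pool=$1
    zpool history -il "$pool" |
        grep -Ei 'upgrade|replace|attach|detach|import|recv'
}
```

Invoked here as `history_suspects rpool`, then inspect the surviving lines by date.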
Brandon High
2011-Jan-05 23:06 UTC
[zfs-discuss] zfs recv failing - "invalid backup stream"
On Wed, Jan 5, 2011 at 1:16 PM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> You might review your zpool history -il rpool output for clues.

This isn't a critical problem, it's just a point of annoyance since it
seems like something that shouldn't happen. It's also just a test host
that's led a hard life full of abuse and hardware swaps.

I'm not sure where to start looking in the history output. The pool was
created with snv_125 and upgraded to snv_133 when that became available.
It's been around for a bit over a year.

There were some failed upgrades from snv_134b to snv_151a. The devices
shown under 'zpool status' were different than the actual devices in
use, because they'd been moved to a new controller at some point. After
fixing that, the upgrade succeeded. It's possible that something broke
down during the many hoops that had to be jumped through, however.

I can mail you the send stream from rpool@copy if you think it would
help to find the problem. There's no priority to fixing it, other than
it being something weird that shouldn't have happened.

-B

--
Brandon High : bhigh at freaks.com
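[Editorial note: mailing a raw replication stream is easiest as a compressed file. A sketch of capturing one for offline analysis; `save_stream` is a hypothetical helper and the output path is illustrative.]

```shell
#!/bin/sh
# save_stream SNAP OUTFILE
# Capture a recursive replication stream to a gzip-compressed file so
# it can be shared for debugging (e.g. fed back through zstreamdump
# with `gzip -dc OUTFILE | zstreamdump -v` on the receiving end).
save_stream() {
    snap=$1
    out=$2
    zfs send -R "$snap" | gzip -c > "$out"
}
```

Invoked here as `save_stream rpool@copy /tmp/rpool-copy.zstream.gz`.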