On 7/21/2016 03:07, Andriy Gapon wrote:
> On 21/07/2016 00:54, Karl Denninger wrote:
>> io_type = ZIO_TYPE_FREE,
>> io_child_type = ZIO_CHILD_VDEV,
>> io_cmd = 0,
>> io_priority = ZIO_PRIORITY_TRIM,
>> io_flags = 789633,
>> io_stage = ZIO_STAGE_VDEV_IO_DONE,
>> io_pipeline = 3080192,
>> io_orig_flags = 525441,
>> io_orig_stage = ZIO_STAGE_OPEN,
>> io_orig_pipeline = 3080192,
>> io_error = 45,
>> vdev_notrim = 1,
>> vdev_queue = {
>> vq_vdev = 0xfffff804d8683000,
>> vq_class = 0xfffff804d86833e8,
>> vq_active_tree = {
>> avl_root = 0xfffff80290a71240,
>> avl_compar = 0xffffffff8220b8a0 <vdev_queue_offset_compare>,
>> avl_offset = 576,
>> avl_numnodes = 64,
>> avl_size = 952
>> },
>> vq_read_offset_tree = {
>> avl_root = 0x0,
>> avl_compar = 0xffffffff8220b8a0 <vdev_queue_offset_compare>,
>> avl_offset = 600,
>> avl_numnodes = 0,
>> avl_size = 952
>> },
>> vq_write_offset_tree = {
>> avl_root = 0x0,
>> avl_compar = 0xffffffff8220b8a0 <vdev_queue_offset_compare>,
>> avl_offset = 600,
>> avl_numnodes = 0,
>> avl_size = 952
>> },
>> },
> Karl,
>
> thank you for the data.
> Was this a freshly imported pool? Or a pool that was not written to
> since the import until shortly before the crash?
>
The crash occurred while a backup script was running, which does
(roughly) the following (a sketch of the loop as a script follows the
list):
zpool import -N backup (import the pool to copy to, without mounting it)
iterate over a list of zfs filesystems and...
zfs rename fs@zfs-base fs@zfs-old
zfs snapshot fs@zfs-base
zfs send -RI fs@zfs-old fs@zfs-base | zfs receive -Fudv backup
zfs destroy -vr fs@zfs-old
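For reference, a minimal sketch of that loop as an sh script; the
filesystem list and dataset names below are placeholders, not the real
ones from the script:

    #!/bin/sh
    # Sketch of the backup loop described above; FSLIST and the
    # dataset names are hypothetical, the target pool is "backup".
    FSLIST="zroot/ROOT/default zroot/usr/home"

    # import the destination pool without mounting its datasets
    zpool import -N backup || exit 1

    for fs in $FSLIST; do
        # roll the current base snapshot aside
        zfs rename "${fs}@zfs-base" "${fs}@zfs-old"
        # take a fresh base (use -r if the fs has children, since the
        # -R send below expects the snapshot to exist on descendants)
        zfs snapshot "${fs}@zfs-base"
        # send the increment old->base into the backup pool
        zfs send -RI "${fs}@zfs-old" "${fs}@zfs-base" | zfs receive -Fudv backup
        # drop the old snapshot
        zfs destroy -vr "${fs}@zfs-old"
    done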
The first filesystem to be processed is the rootfs; that is when it
panicked, and from the traceback it appears that the zios in there are
from the backup volume, so the answer to your question is "yes".
This is a different panic from the one I used to get on 10.2 (the other
one was always in dounmount), and the earlier symptom was also not
immediately reproducible; whatever was blowing it up before was
in-core, and a reboot would clear it. This one is not: I (foolishly)
believed that the operation would succeed after the reboot and
re-attempted it, only to get an immediate repeat of the same panic
(with an essentially identical traceback).
What allowed the operation to succeed was removing *all* of the
snapshots (other than the base filesystem, of course) from both the
source *and* the backup destination zpools, then re-running the
operation. That causes a "base" copy to be taken (zfs snapshot
fs@zfs-base followed by a straight send of that snapshot instead of an
incremental), which was successful.
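As a rough sketch of that fallback (same placeholder names as above),
it amounts to a full, non-incremental replication:

    # Fallback after destroying all snapshots on both sides:
    # take a new base and send the whole stream rather than an increment.
    fs="zroot/ROOT/default"    # hypothetical
    zfs snapshot "${fs}@zfs-base"
    zfs send -R "${fs}@zfs-base" | zfs receive -Fudv backup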
The only thing that was odd about the zfs filesystem in question was
that, as the boot environment I used to roll forward to 11.0, its
"origin" was a clone of 10.2 taken before the install was done, so that
snapshot was present in the zfs snapshot list. However, it had been
present for several days without incident, so I doubt its presence was
involved in creating the circumstances that led to the panic.
--
Karl Denninger
karl at denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/