Greg Harris
2009-Jan-30 14:50 UTC
[Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
Hi,
I've run into several panics around the Xen block frontend driver in
2.6.28. It appears that when the kernel issues block queue requests in
blkif_queue_request, the number of segments exceeds
BLKIF_MAX_SEGMENTS_PER_REQUEST, triggering the panic despite the call to set
the maximum number of segments during queue initialization
(xlvbd_init_blk_queue calls blk_queue_max_phys_segments and
blk_queue_max_hw_segments with BLKIF_MAX_SEGMENTS_PER_REQUEST as parameters).
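The initialization referred to above amounts to roughly the following (a
sketch only; the spinlock name is recalled from the driver and may differ,
and the rest of xlvbd_init_blk_queue's setup is omitted):

    /* Create the request queue and advertise how many segments one
     * ring request can carry -- sketch of the calls named above. */
    struct request_queue *rq;

    rq = blk_init_queue(do_blkif_request, &blkif_io_lock /* name assumed */);
    if (rq == NULL)
            return -1;

    blk_queue_max_phys_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
    blk_queue_max_hw_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);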
Attached are two panics:
kernel BUG at drivers/block/xen-blkfront.c:243!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/block/xvda/dev
CPU 0
Modules linked in:
Pid: 0, comm: swapper Not tainted 2.6.28-metacarta-appliance-1 #2
RIP: e030:[<ffffffff804077c0>] [<ffffffff804077c0>]
do_blkif_request+0x2f0/0x380
RSP: e02b:ffffffff80865dd8 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880366ee33c0 RCX: ffff880366ee33c0
RDX: ffff880366f15d90 RSI: 000000000000000a RDI: 0000000000000303
RBP: ffff88039d78b190 R08: 0000000000001818 R09: ffff88038fb7a9e0
R10: 0000000000000004 R11: 000000000000001a R12: 0000000000000303
R13: 0000000000000001 R14: ffff880366f15da0 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffffffff807a1980(0000) knlGS:0000000000000000
CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000f7f54444 CR3: 00000003977e5000 CR4: 0000000000002620
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffffffff807a6000, task ffffffff806f0360)
Stack:
000000000000004c ffff88038fb7a9e0 ffff88039a5f4000 ffff880366f123b8
0000000680298eec 000000000000000f ffff88039a5f4000 0000000066edc808
ffff880366ee33c0 ffff88038fb7aa00 ffffffff00000001 ffff88038fb7a9e0
Call Trace:
<IRQ> <0> [<ffffffff8036fa45>] ? blk_invoke_request_fn+0xa5/0x110
[<ffffffff80407868>] ? kick_pending_request_queues+0x18/0x30
[<ffffffff80407a17>] ? blkif_interrupt+0x197/0x1e0
[<ffffffff8026cc59>] ? handle_IRQ_event+0x39/0x80
[<ffffffff8026f016>] ? handle_level_irq+0x96/0x120
[<ffffffff802140d5>] ? do_IRQ+0x85/0x110
[<ffffffff803c8315>] ? xen_evtchn_do_upcall+0xe5/0x130
[<ffffffff802461f7>] ? __do_softirq+0xe7/0x180
[<ffffffff8059f3ee>] ? xen_do_hypervisor_callback+0x1e/0x30
<EOI> <0> [<ffffffff802093aa>] ? _stext+0x3aa/0x1000
[<ffffffff802093aa>] ? _stext+0x3aa/0x1000
[<ffffffff8020de8c>] ? xen_safe_halt+0xc/0x20
[<ffffffff8020c1fa>] ? xen_idle+0x2a/0x50
[<ffffffff80210041>] ? cpu_idle+0x41/0x70
Code: fa d0 00 00 00 48 8d bc 07 88 00 00 00 e8 b9 dd f7 ff 8b 7c 24 54 e8 90 fb
fb ff ff 44 24 24 e9 3b fd ff ff 0f 0b eb fe 66 66 90 <0f> 0b eb fe 48 8b
7c 24 30 48 8b 54 24 30 b9 0b 00 00 00 48 c7
RIP [<ffffffff804077c0>] do_blkif_request+0x2f0/0x380
RSP <ffffffff80865dd8>
Kernel panic - not syncing: Fatal exception in interrupt
kernel BUG at drivers/block/xen-blkfront.c:243!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/block/xvda/dev
CPU 0
Modules linked in:
Pid: 0, comm: swapper Not tainted 2.6.28-metacarta-appliance-1 #2
RIP: e030:[<ffffffff804077c0>] [<ffffffff804077c0>]
do_blkif_request+0x2f0/0x380
RSP: e02b:ffffffff80865dd8 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880366f2a9c0 RCX: ffff880366f2a9c0
RDX: ffff880366f233b0 RSI: 000000000000000a RDI: 0000000000000168
RBP: ffff88039d895cf0 R08: 0000000000000b40 R09: ffff88038fb029e0
R10: 000000000000000f R11: 000000000000001a R12: 0000000000000168
R13: 0000000000000001 R14: ffff880366f233c0 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffffffff807a1980(0000) knlGS:0000000000000000
CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000f7f9c444 CR3: 000000039e7ea000 CR4: 0000000000002620
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffffffff807a6000, task ffffffff806f0360)
Stack:
000000000000004c ffff88038fb029e0 ffff88039a5d8000 ffff880366f11938
0000000980298eec 000000000000001d ffff88039a5d8000 000004008036f3da
ffff880366f2a9c0 ffff88038fb02a00 ffffffff00000001 ffff88038fb029e0
Call Trace:
<IRQ> <0> [<ffffffff8036fa45>] ? blk_invoke_request_fn+0xa5/0x110
[<ffffffff80407868>] ? kick_pending_request_queues+0x18/0x30
[<ffffffff80407a17>] ? blkif_interrupt+0x197/0x1e0
[<ffffffff8026cc59>] ? handle_IRQ_event+0x39/0x80
[<ffffffff8026f016>] ? handle_level_irq+0x96/0x120
[<ffffffff802140d5>] ? do_IRQ+0x85/0x110
[<ffffffff803c8315>] ? xen_evtchn_do_upcall+0xe5/0x130
[<ffffffff802461f7>] ? __do_softirq+0xe7/0x180
[<ffffffff8059f3ee>] ? xen_do_hypervisor_callback+0x1e/0x30
<EOI> <0> [<ffffffff802093aa>] ? _stext+0x3aa/0x1000
[<ffffffff802093aa>] ? _stext+0x3aa/0x1000
[<ffffffff8020de8c>] ? xen_safe_halt+0xc/0x20
[<ffffffff8020c1fa>] ? xen_idle+0x2a/0x50
[<ffffffff80210041>] ? cpu_idle+0x41/0x70
Code: fa d0 00 00 00 48 8d bc 07 88 00 00 00 e8 b9 dd f7 ff 8b 7c 24 54 e8 90 fb
fb ff ff 44 24 24 e9 3b fd ff ff 0f 0b eb fe 66 66 90 <0f> 0b eb
fe 48 8b 7c 24 30 48 8b 54 24 30 b9 0b 00 00 00 48 c7
RIP [<ffffffff804077c0>] do_blkif_request+0x2f0/0x380
RSP <ffffffff80865dd8>
Kernel panic - not syncing: Fatal exception in interrupt
We've encountered a similar panic using Xen 3.2.1
(debian-backports, 2.6.18-6-xen-amd64 kernel) and Xen 3.2.0 (Ubuntu Hardy,
2.6.24-23-xen kernel) running in para-virtual mode. The source around the line
referenced in the panic is:
rq_for_each_segment(bvec, req, iter) {
        BUG_ON(ring_req->nr_segments ==
               BLKIF_MAX_SEGMENTS_PER_REQUEST);
        ...
        /* handle the segment */
        ...
        ring_req->nr_segments++;
}
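For context, the seg[] array that nr_segments indexes is sized by the same
constant; the definition below is the one quoted later in this thread from
include/xen/interface/io/blkif.h:

    struct blkif_request {
            /* ... other request fields ... */
            struct blkif_request_segment {
                    grant_ref_t gref;  /* reference to I/O buffer frame */
                    /* @first_sect: first sector in frame to transfer (inclusive). */
                    /* @last_sect: last sector in frame to transfer (inclusive). */
                    uint8_t first_sect, last_sect;
            } seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    };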
I'm able to reliably reproduce this panic with a certain workload
(usually while creating file systems), if anyone would like me to do further
debugging.
Thanks,
---
Greg Harris
System Administrator
MetaCarta, Inc.
(O) +1 (617) 301-5530
(M) +1 (781) 258-4474
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jeremy Fitzhardinge
2009-Feb-02 06:19 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
Greg Harris wrote:
> Hi,
>
> I've run into several panics around the Xen block frontend driver in
> 2.6.28. It appears that when the kernel issues block queue requests in
> blkif_queue_request, the number of segments exceeds
> BLKIF_MAX_SEGMENTS_PER_REQUEST, triggering the panic despite the call to
> set the maximum number of segments during queue initialization
> (xlvbd_init_blk_queue calls blk_queue_max_phys_segments and
> blk_queue_max_hw_segments with BLKIF_MAX_SEGMENTS_PER_REQUEST as
> parameters).

I've got a few reports of this, but I haven't managed to reproduce it
myself. But from this description it sounds like the problem is in the
upper layers presenting too many segments, rather than a bug in the block
driver itself. Jens, does that sound likely?

Thanks,
    J

> [the two panic dumps quoted in full are identical to those above and are
> snipped here]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Greg Harris
2009-Feb-02 14:11 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
----- "Jens Axboe" <jens.axboe@oracle.com> wrote:

> Hmm, xen-blkfront.c does:
>
>     BUG_ON(ring_req->nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST);
>
> with a limit setting of
>
>     blk_queue_max_phys_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
>     blk_queue_max_hw_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
>
> So the BUG_ON(), as it stands, can indeed very well trigger, since you
> asked for that limit.
>
> Either that should be
>
>     BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);

nr_segments is being used as an offset into an array of size
BLKIF_MAX_SEGMENTS_PER_REQUEST, so by the time it is equal to
BLKIF_MAX_SEGMENTS_PER_REQUEST (when the BUG_ON fires) it is already poised
to write outside of the array as allocated.

From include/xen/interface/io/blkif.h:

struct blkif_request {
        ...
        struct blkif_request_segment {
                grant_ref_t gref;  /* reference to I/O buffer frame */
                /* @first_sect: first sector in frame to transfer (inclusive). */
                /* @last_sect: last sector in frame to transfer (inclusive). */
                uint8_t first_sect, last_sect;
        } seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};

> or the limit should be BLKIF_MAX_SEGMENTS_PER_REQUEST - 1.

According to Documentation/block/biodoc.txt, the calls to
blk_queue_max_*_segments set the maximum number of segments the driver can
hold, which according to my reading of the data structure above is
BLKIF_MAX_SEGMENTS_PER_REQUEST. I will attempt compiling another kernel with
the maximum segments set to BLKIF_MAX_SEGMENTS_PER_REQUEST - 1 to see
whether it has any effect.

It sounds to me like the kernel itself may not be obeying the requested
segment limits here?

Thanks,
-- Greg

> [the remainder of the quoted exchange, including both panic dumps from the
> original report, is snipped]
>
> --
> Jens Axboe

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Greg Harris
2009-Feb-02 14:53 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
----- "Jens Axboe" <jens.axboe@oracle.com> wrote:
Here is what I think is happening, rewritten for clarity:
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

int array[BLKIF_MAX_SEGMENTS_PER_REQUEST];

void write_segments(int number_of_segments)
{
        int nr_segments = 0;

        for (int x = 0; x < number_of_segments; x++)
        {
                BUG_ON(nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST);
                array[nr_segments] = get_segment_value(nr_segments);
                nr_segments++;
        }
}
The BUG_ON fires because the index into the segment array has become equal to
BLKIF_MAX_SEGMENTS_PER_REQUEST, which would require an array of size
BLKIF_MAX_SEGMENTS_PER_REQUEST + 1 (more than has actually been allocated).
The kernel is being told that it may happily map up to
BLKIF_MAX_SEGMENTS_PER_REQUEST segments, which is exactly what fits in our
array as allocated. The BUG_ON is correctly firing because, while iterating
over the segments, our index has been incremented to a value that now points
outside the boundary of our array.
-- Greg
>
> > It sounds to me like the kernel itself may not be obeying the
> > requested segment limits here?
>
> It's quite simple - if you tell the kernel that your segment limit is 8,
> then it will happily map up to 8 segments for you. So the mixture of
> setting a limit to foo and check calling BUG() if that limit is reached
> is crap, of obvious reasons. If you ask for 8 segments but can only hold
> 7, well...
> --
> Jens Axboe
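To make the point above concrete: nr_segments doubles as the index into the
request's seg[] array, so the relaxed check suggested in the quote would
still permit one out-of-bounds store before it could ever fire. A sketch,
using the names from the driver code quoted in this thread (ref, the grant
reference, does not appear in the quotes and is assumed here):

    /* seg[] holds BLKIF_MAX_SEGMENTS_PER_REQUEST entries, valid indices
     * 0 .. BLKIF_MAX_SEGMENTS_PER_REQUEST - 1. */
    BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);

    /* On the iteration where nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST
     * the relaxed check above still passes, and this store writes one
     * element past the end of seg[] before the counter ever exceeds the
     * limit. */
    ring_req->seg[ring_req->nr_segments].gref       = ref;
    ring_req->seg[ring_req->nr_segments].first_sect = fsect;
    ring_req->seg[ring_req->nr_segments].last_sect  = lsect;

    ring_req->nr_segments++;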
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Jeremy Fitzhardinge
2009-Feb-02 23:30 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
Jens Axboe wrote:
> To shed some more light on this, I'd suggest changing that BUG_ON() to
> some code that simply dumps each segment (each bvec in the iterator
> list) from start to finish along with values of
> request->nr_phys_segments and size info.

OK, something like this?

    J

Subject: xen/blkfront: try to track down over-segment BUG_ON in blkfront

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 drivers/block/xen-blkfront.c |   24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

==================================================================
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -240,7 +240,10 @@
 
         ring_req->nr_segments = 0;
         rq_for_each_segment(bvec, req, iter) {
-                BUG_ON(ring_req->nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST);
+                if (WARN_ON(ring_req->nr_segments >=
+                            BLKIF_MAX_SEGMENTS_PER_REQUEST))
+                        goto dump_req;
+
                 buffer_mfn = pfn_to_mfn(page_to_pfn(bvec->bv_page));
                 fsect = bvec->bv_offset >> 9;
                 lsect = fsect + (bvec->bv_len >> 9) - 1;
@@ -274,6 +277,25 @@
         gnttab_free_grant_references(gref_head);
 
         return 0;
+
+dump_req:
+        {
+                int i;
+
+                printk(KERN_DEBUG "too many segments for ring (%d): "
+                       "req->nr_phys_segments = %d\n",
+                       BLKIF_MAX_SEGMENTS_PER_REQUEST, req->nr_phys_segments);
+
+                i = 0;
+                rq_for_each_segment(bvec, req, iter) {
+                        printk(KERN_DEBUG
+                               " %d: bio page %p pfn %lx off %u len %u\n",
+                               i++, bvec->bv_page, page_to_pfn(bvec->bv_page),
+                               bvec->bv_offset, bvec->bv_len);
+                }
+        }
+
+        return 1;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
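One note on the idiom in the hunk above: the kernel's WARN_ON() macro
evaluates to its condition, which is what lets a single expression both print
a backtrace and gate the jump to the dump path. A minimal sketch of the same
pattern (names reused from the patch):

    if (WARN_ON(ring_req->nr_segments >= BLKIF_MAX_SEGMENTS_PER_REQUEST))
            goto dump_req;  /* print a warning and backtrace, then dump
                             * the offending request instead of BUG()ing */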
Greg Harris
2009-Feb-03 20:37 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
After applying the patch we were able to reproduce the panic, and the
additional debugging output is attached. The driver appears to retry the
request several times before dying:

Writing inode tables: ------------[ cut here ]------------
WARNING: at drivers/block/xen-blkfront.c:244 do_blkif_request+0x301/0x440()
Modules linked in:
Pid: 0, comm: swapper Not tainted 2.6.28.2-metacarta-appliance-1 #2
Call Trace:
 <IRQ> [<ffffffff80240b34>] warn_on_slowpath+0x64/0xa0
 [<ffffffff80232ae3>] enqueue_task+0x13/0x30
 [<ffffffff8059be54>] _spin_unlock_irqrestore+0x14/0x20
 [<ffffffff803c70fc>] get_free_entries+0xbc/0x2a0
 [<ffffffff804078b1>] do_blkif_request+0x301/0x440
 [<ffffffff8036fb35>] blk_invoke_request_fn+0xa5/0x110
 [<ffffffff80407a08>] kick_pending_request_queues+0x18/0x30
 [<ffffffff80407bb7>] blkif_interrupt+0x197/0x1e0
 [<ffffffff8026ccd9>] handle_IRQ_event+0x39/0x80
 [<ffffffff8026f096>] handle_level_irq+0x96/0x120
 [<ffffffff802140d5>] do_IRQ+0x85/0x110
 [<ffffffff803c83f5>] xen_evtchn_do_upcall+0xe5/0x130
 [<ffffffff80246217>] __do_softirq+0xe7/0x180
 [<ffffffff8059c65e>] xen_do_hypervisor_callback+0x1e/0x30
 <EOI> [<ffffffff802093aa>] _stext+0x3aa/0x1000
 [<ffffffff802093aa>] _stext+0x3aa/0x1000
 [<ffffffff8020de8c>] xen_safe_halt+0xc/0x20
 [<ffffffff8020c1fa>] xen_idle+0x2a/0x50
 [<ffffffff80210041>] cpu_idle+0x41/0x70
---[ end trace 107c74ebf2b50a63 ]---
METACARTA: too many segments for ring (11): req->nr_phys_segments = 11
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 1536 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2048 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2560 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3072 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3584 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 0 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 512 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1024 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1536 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2048 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2560 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 3072 len 512
[ the same warning and segment dump repeat twice more (with the kernel now
marked Tainted: G W), and a fourth warning is cut off mid-trace ]

We also attempted changing the blk_queue_max_*_segments calls to use
BLKIF_MAX_SEGMENTS_PER_REQUEST - 1, and our spinner was able to run overnight
without any panics...

---

Greg Harris
System Administrator
MetaCarta, Inc.

(O) +1 (617) 301-5530
(M) +1 (781) 258-4474

----- "Jeremy Fitzhardinge" <jeremy@goop.org> wrote:

> [Jeremy's debugging patch quoted in full; identical to the patch above]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
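For reference, the overnight workaround described above corresponds to a
change along these lines in the driver's queue setup (a sketch of the change
Greg reports testing, not the upstream fix; rq is the request queue variable
from the limit calls quoted earlier in the thread):

    /* Workaround reported above: advertise one segment fewer than a ring
     * request's seg[] array can hold, so requests handed to the driver
     * stay within the array in practice. */
    blk_queue_max_phys_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST - 1);
    blk_queue_max_hw_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST - 1);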
Jeremy Fitzhardinge
2009-Feb-04 16:50 UTC
Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
Greg Harris wrote:
> After applying the patch we were able to reproduce the panic, and the
> additional debugging output is attached. The driver appears to retry the
> request several times before dying:
>
> [warning trace snipped]
>
> METACARTA: too many segments for ring (11): req->nr_phys_segments = 11
> METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 1536 len 512
> METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2048 len 512
> METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2560 len 512
> METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3072 len 512
> METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3584 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 0 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 512 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1024 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1536 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2048 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2560 len 512
> METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 3072 len 512

(Wonder why the index didn't increment. Missing ++?)

Well, that's interesting. I count 12 bios there. Are we asking for the
wrong thing, or is the block layer giving us too many bios? What's the
distinction between a bio and a segment?

Also, why are there so many pieces? Our main restriction is that a transfer
can't cross a page boundary, but we could easily handle this request in two
pieces, one for each page. Can we ask the block layer to do that merging?

Thanks,
    J

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel