Displaying 20 results from an estimated 85 matches for "nr_phys_seg".
2015 Oct 01
0
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...8922b39e508375e7c1.
>> commit 1cf7e9c68fe84248174e998922b39e508375e7c1
>> Author: Jens Axboe <axboe at kernel.dk>
>> Date: Fri Nov 1 10:52:52 2013 -0600
>>
>> virtio_blk: blk-mq support
>>
>>
>> BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
>>
>>
>> On probe, we do
>> /* We can handle whatever the host told us to handle. */
>> blk_queue_max_segments(q, vblk->sg_elems-2);
>>
>>
>> To debug this,
>> maybe you can print out sg_elems...
2007 Jul 09
3
[PATCH 2/3] Virtio draft IV: the block driver
...uct request *req;
> +	struct virtblk_req *vbr;
> +
> +	while ((req = elv_next_request(q)) != NULL) {
> +		vblk = req->rq_disk->private_data;
> +
> +		vbr = mempool_alloc(vblk->pool, GFP_ATOMIC);
> +		if (!vbr)
> +			goto stop;
> +
> +		BUG_ON(req->nr_phys_segments > ARRAY_SIZE(vblk->sg));
> +		vbr->req = req;
> +		if (!do_req(q, vblk, vbr))
> +			goto stop;
> +		blkdev_dequeue_request(req);
> +	}
> +
> +sync:
> +	if (vblk)
This check looks bogus: vblk->pool has already been dereferenced unconditionally above.
>...
2015 Oct 01
2
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
Hi,
Mike Snitzer wrote:
> This particular dm-crypt on virtio-blk issue is fixed with this commit:
> http://git.kernel.org/linus/586b286b110e94eb31840ac5afc0c24e0881fe34
>
> Linus pulled this into v4.3-rc3.
I have this patch applied to linux-4.1.9. This could be the reason why I
don't see the issue on boot with linux-4.1.9.
So is the freeze I am experiencing with linux-4.1.9
2015 Oct 01
0
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
Hi,
seems like we have two problems:
The first (origin) problem seems to be already fixed by Mike's patch.
I applied the patch against linux-4.1.8, rebooted several times without
a problem. But I'll keep testing for sure.
The second problem is a new bug within linux-4.1.9: I experience the
> NMI watchdog: BUG: soft lockup - CPU#3 stuck for 23s!
freeze now on all my systems
2020 Jul 30
0
[PATCH] virtio-blk: fix discard buffer overrun
On 2020/7/30 4:30 PM, Jeffle Xu wrote:
> Before commit eded341c085b ("block: don't decrement nr_phys_segments for
> physically contigous segments") applied, the generic block layer may not
> guarantee that @req->nr_phys_segments equals the number of bios in the
> request. When limits.max_discard_segments == 1 and the IO scheduler is
> set to any scheduler except "none"...
2015 Oct 01
2
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...> So this BUG_ON is from 1cf7e9c68fe84248174e998922b39e508375e7c1.
> commit 1cf7e9c68fe84248174e998922b39e508375e7c1
> Author: Jens Axboe <axboe at kernel.dk>
> Date: Fri Nov 1 10:52:52 2013 -0600
>
> virtio_blk: blk-mq support
>
>
> BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
>
>
> On probe, we do
> /* We can handle whatever the host told us to handle. */
> blk_queue_max_segments(q, vblk->sg_elems-2);
>
>
> To debug this,
> maybe you can print out sg_elems at init time and when this fails,...
2014 Nov 10
2
kernel BUG at drivers/block/virtio_blk.c:172!
...the repos, I'm seeing the
>> following oops when mounting xfs. rc2-ish kernels seem to be fine:
>>
>> [ 64.669633] ------------[ cut here ]------------
>> [ 64.670008] kernel BUG at drivers/block/virtio_blk.c:172!
>
> Hmm, that's:
>
> BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
>
> But during our probe routine we said:
>
> /* We can handle whatever the host told us to handle. */
> blk_queue_max_segments(q, vblk->sg_elems-2);
>
> Jens?
Known, I'm afraid, Ming is looking into it.
--
Jens Axboe
2014 Nov 11
2
kernel BUG at drivers/block/virtio_blk.c:172!
...nts consider also
queue_max_segments()
When recounting the number of physical segments, the maximum segment
count of the request_queue must also be taken into account.
Otherwise bio->bi_phys_segments could get bigger than
queue_max_segments(). Then this results in virtio_queue_rq() seeing
req->nr_phys_segments that is greater than expected. Although the
initial queue_max_segments was set to (vblk->sg_elems - 2), a request
comes in with a larger value of nr_phys_segments, which triggers the
BUG_ON() condition.
This commit should fix a kernel crash in virtio_blk, which occurs
especially frequently...
2012 May 03
2
[PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
...t_all(vbr->req, error);
+		vblk->req_in_flight--;
		mempool_free(vbr, vblk->pool);
	}
	/* In case queue is stopped waiting for more buffers. */
@@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
	while ((req = blk_peek_request(q)) != NULL) {
		BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
+		vblk->req_in_flight++;
		/* If this request fails, stop queue and wait for something to
		   finish to restart it. */
@@ -443,7 +448,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
	if (err)
		goto out_free_vblk;
-	vblk->pool =...
2015 Oct 01
0
req->nr_phys_segments > queue_max_segments (was Re: kernel BUG at drivers/block/virtio_blk.c:172!)
...> ---[ end trace 8078357c459d5fc0 ]---
So this BUG_ON is from 1cf7e9c68fe84248174e998922b39e508375e7c1.
commit 1cf7e9c68fe84248174e998922b39e508375e7c1
Author: Jens Axboe <axboe at kernel.dk>
Date: Fri Nov 1 10:52:52 2013 -0600
virtio_blk: blk-mq support
BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
On probe, we do
/* We can handle whatever the host told us to handle. */
blk_queue_max_segments(q, vblk->sg_elems-2);
To debug this,
maybe you can print out sg_elems at init time and when this fails,
to make sure some kind of memory corruptio...
2014 Nov 11
0
kernel BUG at drivers/block/virtio_blk.c:172!
...segments()
>
> When recounting the number of physical segments, the number of max
> segments of request_queue must be also taken into account.
> Otherwise bio->bi_phys_segments could get bigger than
> queue_max_segments(). Then this results in virtio_queue_rq() seeing
> req->nr_phys_segments that is greater than expected. Although the
> initial queue_max_segments was set to (vblk->sg_elems - 2), a request
> comes in with a larger value of nr_phys_segments, which triggers the
> BUG_ON() condition.
>
> This commit should fix a kernel crash in virtio_blk, which occu...
2018 Jun 07
2
[PATCH v6] virtio_blk: add DISCARD and WRITE ZEROES commands support
...again.
>
> > @@ -225,6 +260,7 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> >  	int qid = hctx->queue_num;
> >  	int err;
> >  	bool notify = false;
> > +	bool unmap = false;
> >  	u32 type;
> >
> >  	BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > @@ -237,6 +273,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> >  	case REQ_OP_FLUSH:
> >  		type = VIRTIO_BLK_T_FLUSH;
> >  		break;
> > +	case REQ_OP_DISCARD:
> > +		type = VIRTIO_BLK_T_DISCARD;...