Displaying 14 results from an estimated 14 matches for "request_fn".
2012 May 25
9
[PATCH 0/3] Fix hot-unplug race in virtio-blk
This patch set fixes the race seen when hot-unplugging a disk under I/O stress.
Asias He (3):
virtio-blk: Call del_gendisk() before disable guest kick
virtio-blk: Reset device after blk_cleanup_queue()
virtio-blk: Use block layer provided spinlock
drivers/block/virtio_blk.c | 25 ++++++-------------------
1 file changed, 6 insertions(+), 19 deletions(-)
--
1.7.10.2
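As an illustration of the third patch in this set ("virtio-blk: Use block layer provided spinlock"), the sketch below shows the general idea for a legacy request_fn driver: passing NULL to blk_init_queue() makes the block layer supply and own q->queue_lock instead of a driver-private spinlock. This is a minimal sketch, not the actual patch; do_virtblk_request() stands in for the driver's existing request handler and example_init_queue() is an illustrative wrapper.

#include <linux/blkdev.h>

/* Stand-in for the driver's existing request handler */
static void do_virtblk_request(struct request_queue *q)
{
        /* request dispatch elided */
}

static struct request_queue *example_init_queue(void)
{
        /*
         * NULL instead of a driver-private spinlock: the block layer
         * then provides and manages q->queue_lock itself.
         */
        return blk_init_queue(do_virtblk_request, NULL);
}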
2012 Jun 11
0
Race condition during hotplug when dropping block queue lock
Block drivers like nbd and rbd unlock struct request_queue->queue_lock in their
request_fn. I'd like to do the same in virtio_blk. After I happily posted the
patch, Michael Tsirkin pointed out an issue that I can't explain. This may
affect existing block drivers that unlock the queue_lock too.
What happens when the block device is removed (hot unplug or kernel module
unloaded)...
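For context, this is the pattern the thread is discussing: a legacy request_fn is entered with q->queue_lock held and temporarily drops it around a slow operation, which is exactly the window a hot unplug can race with. A minimal sketch, not code from nbd, rbd or virtio_blk; submit_to_hardware() is a hypothetical placeholder for the driver-specific slow path.

#include <linux/blkdev.h>

/* Hypothetical slow submission step (virtqueue kick, network send, ...) */
static void submit_to_hardware(struct request *rq)
{
        /* driver-specific work elided */
}

static void example_request_fn(struct request_queue *q)
{
        struct request *rq;

        /* The block core calls request_fn with q->queue_lock held */
        while ((rq = blk_fetch_request(q)) != NULL) {
                spin_unlock_irq(q->queue_lock);

                /* Runs unlocked: a hot unplug can race with us here */
                submit_to_hardware(rq);

                spin_lock_irq(q->queue_lock);
        }
}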
2014 Jun 10
0
[PATCH 3.4 46/88] virtio-blk: Reset device after blk_cleanup_queue()
...f6892b3fc3726576cb42f17d1d6b5 upstream.
blk_cleanup_queue() will call blk_drain_queue() to drain all the
requests before queue DEAD marking. If we reset the device before
blk_cleanup_queue(), the drain would fail.
1) If the queue is stopped in do_virtblk_request() because the device is
full, q->request_fn() will not be called.
blk_drain_queue() {
        while (true) {
                ...
                if (!list_empty(&q->queue_head))
                        __blk_run_queue(q) {
                                if (queue is not stopped)
                                        q->request_fn()
                        }
                ...
        }
}
Not resetting the device before blk_cleanup_queue() gives the chance to
start the qu...
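Below is a sketch of the teardown ordering the commit message argues for, assuming the usual virtio-blk remove path; struct virtio_blk is abbreviated to the one field used here, and the real virtblk_remove() also frees requests, the device index, and other state.

#include <linux/virtio.h>
#include <linux/blkdev.h>
#include <linux/genhd.h>

struct virtio_blk {                     /* abbreviated driver-private state */
        struct gendisk *disk;
};

static void example_virtblk_remove(struct virtio_device *vdev)
{
        struct virtio_blk *vblk = vdev->priv;

        /* Stop new I/O arriving from userspace */
        del_gendisk(vblk->disk);

        /*
         * Drain outstanding requests while the device can still complete
         * them and restart a stopped queue; resetting first would leave
         * blk_drain_queue() spinning, as described above.
         */
        blk_cleanup_queue(vblk->disk->queue);

        /* Only now quiesce the device and free the virtqueues */
        vdev->config->reset(vdev);
        vdev->config->del_vqs(vdev);
}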
2012 Jun 12
1
[PATCH v2] block: Drop dead function blk_abort_queue()
.../**
- * blk_abort_queue -- Abort all request on given queue
- * @queue: pointer to queue
- *
- */
-void blk_abort_queue(struct request_queue *q)
-{
-       unsigned long flags;
-       struct request *rq, *tmp;
-       LIST_HEAD(list);
-
-       /*
-        * Not a request based block device, nothing to abort
-        */
-       if (!q->request_fn)
-               return;
-
-       spin_lock_irqsave(q->queue_lock, flags);
-
-       elv_abort_queue(q);
-
-       /*
-        * Splice entries to local list, to avoid deadlocking if entries
-        * get readded to the timeout list by error handling
-        */
-       list_splice_init(&q->timeout_list, &list);
-
-       list_for_each_entry_...
2012 May 21
6
[RFC PATCH 1/5] block: Introduce q->abort_queue_fn()
...imits - reset limits to default values
* @lim: the queue_limits structure to reset
*
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2aa2466..e2d58bd 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -200,6 +200,7 @@ struct request_pm_state
typedef void (request_fn_proc) (struct request_queue *q);
typedef void (make_request_fn) (struct request_queue *q, struct bio *bio);
+typedef void (abort_queue_fn) (struct request_queue *q);
typedef int (prep_rq_fn) (struct request_queue *, struct request *);
typedef void (unprep_rq_fn) (struct request_queue *, struct r...
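The excerpt only shows the new abort_queue_fn typedef; how the hook is installed and invoked is not visible here. Purely as an illustration of what such a driver-side callback might look like, under the assumption that the block core would call it with q->queue_lock held like other request_fn-era hooks:

#include <linux/blkdev.h>

/* Hypothetical driver callback matching abort_queue_fn: fail every
 * request still sitting on the queue.  Assumes q->queue_lock is held
 * by the caller. */
static void example_abort_queue(struct request_queue *q)
{
        struct request *rq;

        while ((rq = blk_fetch_request(q)) != NULL)
                __blk_end_request_all(rq, -EIO);
}

Whatever setter the RFC adds for wiring up q->abort_queue_fn is not shown in this excerpt.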
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP
guests. If one CPU is doing virtqueue kick and another CPU touches the
vblk->lock it will have to spin until virtqueue kick completes.
This patch reduces system% CPU utilization in SMP guests that are
running multithreaded I/O-bound workloads. The improvements are small
but grow as iops and the number of SMP vCPUs increase.
Khoa Huynh
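The core idea, as a minimal sketch rather than the actual patch (which restructures do_virtblk_request() itself): requests are added to the virtqueue under vblk->lock, but the kick, which may exit to the host, runs with the lock dropped so other vCPUs are not forced to spin on it. struct virtio_blk is abbreviated to the two fields used.

#include <linux/virtio.h>
#include <linux/spinlock.h>

struct virtio_blk {                     /* abbreviated driver-private state */
        spinlock_t lock;
        struct virtqueue *vq;
};

static void example_kick_unlocked(struct virtio_blk *vblk)
{
        /* Requests were already added to the virtqueue under vblk->lock */
        spin_unlock_irq(&vblk->lock);
        virtqueue_kick(vblk->vq);       /* expensive: may exit to the host */
        spin_lock_irq(&vblk->lock);
}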
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...ue to run
*
* Description:
* See @blk_run_queue. This variant must be called with the queue lock
* held and interrupts disabled.
*/
void __blk_run_queue(struct request_queue *q)
{
+       lockdep_assert_held(q->queue_lock);
+
        if (unlikely(blk_queue_stopped(q)))
                return;
        q->request_fn(q);
}
EXPORT_SYMBOL(__blk_run_queue);
/**
* blk_run_queue_async - run a single device queue in workqueue context
@@ -372,30 +374,30 @@ void blk_drain_queue(struct request_queue *q, bool drain_all)
/*
* This function might be called on a queue which failed
* driver init after queue...
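The new lockdep_assert_held() simply makes the documented locking rule enforceable. A caller that follows it looks roughly like this (sketch only):

#include <linux/blkdev.h>

static void example_run_queue(struct request_queue *q)
{
        unsigned long flags;

        /* __blk_run_queue() must be entered with q->queue_lock held
         * and interrupts disabled, which is what the assertion checks */
        spin_lock_irqsave(q->queue_lock, flags);
        __blk_run_queue(q);
        spin_unlock_irqrestore(q->queue_lock, flags);
}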
2007 Jan 02
0
[PATCH 1/4] add scsi-target and IO_CMD_EPOLL_WAIT patches
...ction: scsi_release_buffers()
+ *
+@@ -1687,29 +1691,40 @@ u64 scsi_calculate_bounce_limit(struct S
+ }
+ EXPORT_SYMBOL(scsi_calculate_bounce_limit);
+
+-struct request_queue *scsi_alloc_queue(struct scsi_device *sdev)
++struct request_queue *__scsi_alloc_queue(struct Scsi_Host *shost,
++                                       request_fn_proc *request_fn)
+ {
+-      struct Scsi_Host *shost = sdev->host;
+       struct request_queue *q;
+
+-      q = blk_init_queue(scsi_request_fn, NULL);
++      q = blk_init_queue(request_fn, NULL);
+       if (!q)
+               return NULL;
+
+-      blk_queue_prep_rq(q, scsi_prep_fn);
+-
+       blk_queue_max_hw_segments(q, shost->...
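What the refactor above buys: callers can hand their own request_fn to __scsi_alloc_queue() instead of always getting scsi_request_fn(). A sketch under the assumption that the new helper is visible to target code as the series intends; my_target_request_fn() and my_alloc_queue() are illustrative names, not part of the patch.

#include <scsi/scsi_host.h>
#include <linux/blkdev.h>

/* Hypothetical target-mode request handler */
static void my_target_request_fn(struct request_queue *q)
{
        /* dequeue requests and hand them to the target core */
}

static struct request_queue *my_alloc_queue(struct Scsi_Host *shost)
{
        /* Pass our own handler instead of the default scsi_request_fn() */
        return __scsi_alloc_queue(shost, my_target_request_fn);
}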