search for: queue_lock

Displaying 20 results from an estimated 87 matches for "queue_lock".

2012 Jun 11
0
Race condition during hotplug when dropping block queue lock
Block drivers like nbd and rbd unlock struct request_queue->queue_lock in their request_fn. I'd like to do the same in virtio_blk. After I happily posted the patch, Michael Tsirkin pointed out an issue that I can't explain. This may affect existing block drivers that unlock the queue_lock too. What happens when the block device is removed (hot unplug or ker...
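
For readers landing here from the search, a minimal sketch of the request_fn pattern this thread is about (legacy single-queue block API; do_one_request() is a hypothetical placeholder for the driver's backend submit, not virtio-blk's actual code):

    #include <linux/blkdev.h>
    #include <linux/spinlock.h>

    /* Hypothetical backend submit; stands in for whatever nbd/rbd/virtio_blk
     * does while the lock is dropped. */
    static void do_one_request(struct request *req);

    static void example_request_fn(struct request_queue *q)
    {
    	struct request *req;

    	while ((req = blk_fetch_request(q)) != NULL) {
    		/* The block layer calls request_fn with q->queue_lock held;
    		 * drop it while submitting to the backend. */
    		spin_unlock_irq(q->queue_lock);

    		do_one_request(req);

    		/* Reacquire before looping.  If the device was hot-unplugged
    		 * meanwhile, what keeps the queue valid here is exactly the
    		 * question raised in this thread. */
    		spin_lock_irq(q->queue_lock);
    	}
    }
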
2017 Dec 06
1
[PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
...equest *async_req, *backlog; > - struct ahash_request *hreq; > - struct ablkcipher_request *breq; > unsigned long flags; > bool was_busy = false; > - int ret, rtype; > + int ret; > + struct crypto_engine_reqctx *enginectx; > > spin_lock_irqsave(&engine->queue_lock, flags); > > @@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, > > spin_unlock_irqrestore(&engine->queue_lock, flags); > > - rtype = crypto_tfm_alg_type(engine->cur_req->tfm); > /* Until here we get the request need to...
2017 Dec 07
0
[PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
...ct ahash_request *hreq; > > - struct ablkcipher_request *breq; > > unsigned long flags; > > bool was_busy = false; > > - int ret, rtype; > > + int ret; > > + struct crypto_engine_reqctx *enginectx; > > > > spin_lock_irqsave(&engine->queue_lock, flags); > > > > @@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, > > > > spin_unlock_irqrestore(&engine->queue_lock, flags); > > > > - rtype = crypto_tfm_alg_type(engine->cur_req->tfm); > > /* Unt...
2017 Nov 29
0
[PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
...bool in_kthread) { struct crypto_async_request *async_req, *backlog; - struct ahash_request *hreq; - struct ablkcipher_request *breq; unsigned long flags; bool was_busy = false; - int ret, rtype; + int ret; + struct crypto_engine_reqctx *enginectx; spin_lock_irqsave(&engine->queue_lock, flags); @@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, spin_unlock_irqrestore(&engine->queue_lock, flags); - rtype = crypto_tfm_alg_type(engine->cur_req->tfm); /* Until here we get the request need to be encrypted successfully */ if (!was_...
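
The hunk is hard to read flattened into a one-line snippet; as a rough sketch (simplified, not the exact upstream function), the queue_lock pattern that crypto_pump_requests() uses looks like this:

    #include <crypto/algapi.h>
    #include <crypto/engine.h>

    /* Simplified sketch of the locking shape the hunk touches: the request
     * queue is protected by engine->queue_lock, one request is pulled off it
     * under the lock, and (after this series) it is handed to a generic
     * do_one_request() callback instead of per-alg-type handlers. */
    static void pump_requests_sketch(struct crypto_engine *engine)
    {
    	struct crypto_async_request *async_req;
    	unsigned long flags;

    	spin_lock_irqsave(&engine->queue_lock, flags);
    	async_req = crypto_dequeue_request(&engine->queue);
    	if (async_req)
    		engine->cur_req = async_req;
    	spin_unlock_irqrestore(&engine->queue_lock, flags);

    	/* The real crypto_pump_requests() also handles the backlog, the
    	 * engine busy/idle state and the prepare_request hook; those are
    	 * omitted here. */
    }
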
2018 Jan 26
0
[PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
...lized + * @err: error number + */ +static void crypto_finalize_request(struct crypto_engine *engine, + struct crypto_async_request *req, int err) +{ + unsigned long flags; + bool finalize_cur_req = false; + int ret; + struct crypto_engine_ctx *enginectx; + + spin_lock_irqsave(&engine->queue_lock, flags); + if (engine->cur_req == req) + finalize_cur_req = true; + spin_unlock_irqrestore(&engine->queue_lock, flags); + + if (finalize_cur_req) { + enginectx = crypto_tfm_ctx(req->tfm); + if (engine->cur_req_prepared && + enginectx->op.unprepare_request) { + r...
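
A condensed sketch of the finalize path shown in that hunk (simplified; the real function also clears cur_req under the lock and re-kicks the engine thread):

    #include <crypto/engine.h>

    /* Simplified: decide under queue_lock whether the completed request is
     * the engine's current one, run any per-request cleanup outside the
     * lock, then complete the request back to its caller. */
    static void finalize_request_sketch(struct crypto_engine *engine,
    				    struct crypto_async_request *req, int err)
    {
    	unsigned long flags;
    	bool finalize_cur_req = false;

    	spin_lock_irqsave(&engine->queue_lock, flags);
    	if (engine->cur_req == req)
    		finalize_cur_req = true;
    	spin_unlock_irqrestore(&engine->queue_lock, flags);

    	if (finalize_cur_req) {
    		/* the enginectx->op.unprepare_request hook would run here */
    		engine->cur_req = NULL;
    	}

    	req->complete(req, err);
    }
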
2018 Jan 10
1
[PATCH 2/6] crypto: engine - Permit to enqueue all async requests
...equest *async_req, *backlog; > - struct ahash_request *hreq; > - struct ablkcipher_request *breq; > unsigned long flags; > bool was_busy = false; > - int ret, rtype; > + int ret; > + struct crypto_engine_reqctx *enginectx; > > spin_lock_irqsave(&engine->queue_lock, flags); > > @@ -94,7 +92,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, > > spin_unlock_irqrestore(&engine->queue_lock, flags); > > - rtype = crypto_tfm_alg_type(engine->cur_req->tfm); > /* Until here we get the request need to...
2018 Jan 03
0
[PATCH 2/6] crypto: engine - Permit to enqueue all async requests
...bool in_kthread) { struct crypto_async_request *async_req, *backlog; - struct ahash_request *hreq; - struct ablkcipher_request *breq; unsigned long flags; bool was_busy = false; - int ret, rtype; + int ret; + struct crypto_engine_reqctx *enginectx; spin_lock_irqsave(&engine->queue_lock, flags); @@ -94,7 +92,6 @@ static void crypto_pump_requests(struct crypto_engine *engine, spin_unlock_irqrestore(&engine->queue_lock, flags); - rtype = crypto_tfm_alg_type(engine->cur_req->tfm); /* Until here we get the request need to be encrypted successfully */ if (!was_...
2012 Aug 07
4
[PATCH V6 0/2] Improve virtio-blk performance
Hi, all This version reworks the REQ_FLUSH and REQ_FUA support as suggested by Christoph and drops the block core bits since Jens has picked them up. Fio tests show the bio-based IO path gives the following performance improvement: 1) Ramdisk device With bio-based IO path, sequential read/write, random read/write IOPS boost : 28%, 24%, 21%, 16% Latency improvement: 32%,
2012 Aug 08
2
[PATCH V7 0/2] Improve virtio-blk performance
Hi, all Changes in v7: - Use vbr->flags to track the request type - Drop the unnecessary struct virtio_blk *vblk parameter - Reuse struct virtblk_req in the bio done function - Add performance data on a normal SATA device and the reason why the bio path is made optional Fio tests show the bio-based IO path gives the following performance improvement: 1) Ramdisk device With bio-based IO path, sequential
2017 Nov 29
9
[PATCH RFC 0/4] crypto: engine - Permit to enqueue all async requests
Hello The current crypto_engine supports only ahash and ablkcipher. My first patch, which tried to add skcipher, was Nacked: it would add too many functions, and adding other algs (aead, asymmetric keys) would make the situation worse. This patchset removes all alg-specific stuff and now only processes generic crypto_async_request. The request handler function pointers are now moved out of struct engine and
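
Concretely, after this series a driver carries its engine handlers in its transform context instead of registering per-alg-type hooks on the engine. A sketch of what that looks like for a hypothetical skcipher driver (my_cipher_* names are made up; the crypto_engine_ctx layout is roughly what the series defines in <crypto/engine.h>):

    #include <crypto/engine.h>
    #include <crypto/internal/skcipher.h>

    /* Hypothetical driver context: crypto_engine_ctx must be the first
     * member so the engine can reach the handlers via crypto_tfm_ctx(). */
    struct my_cipher_tfm_ctx {
    	struct crypto_engine_ctx enginectx;
    	/* driver-private keys/state would follow */
    };

    /* Made-up driver callbacks; only their shape matters here. */
    static int my_cipher_prepare(struct crypto_engine *engine, void *areq);
    static int my_cipher_do_one(struct crypto_engine *engine, void *areq);

    static int my_cipher_init_tfm(struct crypto_skcipher *tfm)
    {
    	struct my_cipher_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

    	ctx->enginectx.op.prepare_request = my_cipher_prepare;
    	ctx->enginectx.op.unprepare_request = NULL;
    	ctx->enginectx.op.do_one_request = my_cipher_do_one;
    	return 0;
    }
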
2018 Jan 26
10
[PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
Hello The current crypto_engine supports only ahash and ablkcipher requests. My first patch, which tried to add skcipher, was Nacked: it would add too many functions, and adding other algs (aead, asymmetric keys) would make the situation worse. This patchset removes all alg-specific stuff and now only processes generic crypto_async_request. The request handler function pointers are now moved out of struct
2012 May 21
6
[RFC PATCH 1/5] block: Introduce q->abort_queue_fn()
When a user hot-unplugs a disk that is busy serving I/O, __blk_run_queue might be unable to drain all the requests. As a result, blk_drain_queue() would loop forever and blk_cleanup_queue() would not return, so hot-unplug would fail. This patch adds a callback in blk_drain_queue() for the low-level driver to abort requests. Currently, this is useful for virtio-blk to do cleanup on hot-unplug. Cc:
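
To illustrate the idea only, here is a hypothetical driver-side callback for the proposed hook. Everything below except the q->abort_queue_fn name is an assumption about the RFC interface, not code from the patch:

    #include <linux/blkdev.h>

    /* Hypothetical callback a driver could wire up as q->abort_queue_fn:
     * fail everything still queued so blk_drain_queue() can make progress
     * and blk_cleanup_queue() can return.  The prototype and the
     * "called with q->queue_lock held" detail are guesses. */
    static void virtblk_abort_queue_sketch(struct request_queue *q)
    {
    	struct request *req;

    	while ((req = blk_fetch_request(q)) != NULL)
    		__blk_end_request_all(req, -EIO);
    }
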
2012 Jun 18
13
[PATCH v2 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance. Fio tests show it gives a 28%, 24%, 21%, 16% IOPS boost and a 32%, 17%, 21%, 16% latency improvement for sequential read/write and random read/write respectively. Asias He (3): block: Introduce __blk_segment_map_sg() helper block: Add blk_bio_map_sg() helper virtio-blk: Add bio-based IO path for virtio-blk
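
For a sense of what the new helper is for, an assumed usage sketch modelled on the existing blk_rq_map_sg() pattern (not taken from the patchset itself):

    #include <linux/blkdev.h>
    #include <linux/scatterlist.h>

    /* Assumed usage of the blk_bio_map_sg() helper this series adds: map one
     * bio's segments into a scatterlist before posting them to the
     * virtqueue, bypassing struct request entirely. */
    static int map_bio_sketch(struct request_queue *q, struct bio *bio,
    			  struct scatterlist *sg)
    {
    	int nents;

    	sg_init_table(sg, queue_max_segments(q));
    	nents = blk_bio_map_sg(q, bio, sg);

    	return nents;	/* number of scatterlist entries used */
    }
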