
Displaying 15 unique results from an estimated 2866 matches for "reqs".

2018 Jan 10
1
[PATCH 2/6] crypto: engine - Permit to enqueue all async requests
On 03/01/18 21:11, Corentin Labbe wrote: > The crypto engine can currently only enqueue hash and ablkcipher requests. > This patch permits it to enqueue any type of crypto_async_request. > > Signed-off-by: Corentin Labbe <clabbe.montjoie at gmail.com> > --- > crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------ > include/crypto/engine.h | 59
2019 May 07
3
[Qemu-devel] [PATCH v7 2/6] virtio-pmem: Add virtio pmem driver
On 4/25/19 10:00 PM, Pankaj Gupta wrote: > +void host_ack(struct virtqueue *vq) > +{ > + unsigned int len; > + unsigned long flags; > + struct virtio_pmem_request *req, *req_buf; > + struct virtio_pmem *vpmem = vq->vdev->priv; > + > + spin_lock_irqsave(&vpmem->pmem_lock, flags); > + while ((req = virtqueue_get_buf(vq, &len)) != NULL) { > +
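The excerpt breaks off inside the completion loop. Going by the driver design discussed in this thread, the loop presumably marks each returned request as done, wakes its waiter, and re-arms one request that had blocked on a full virtqueue. A hedged reconstruction; the field names (done, host_acked, wq_buf, wq_buf_avail, req_list) are assumptions based on the series, not copied from this message:

	spin_lock_irqsave(&vpmem->pmem_lock, flags);
	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
		req->done = true;
		wake_up(&req->host_acked);

		/* re-arm one request that blocked on a full virtqueue */
		if (!list_empty(&vpmem->req_list)) {
			req_buf = list_first_entry(&vpmem->req_list,
					struct virtio_pmem_request, list);
			req_buf->wq_buf_avail = true;
			wake_up(&req_buf->wq_buf);
			list_del(&req_buf->list);
		}
	}
	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);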
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...nghui.duan@intel.com> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 73f196c..b4767f5 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -64,6 +64,11 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate"); static unsigned int log_stats; module_param(log_stats, int, 0644); +struct seg_buf { + unsigned long buf; + unsigned int nsec; +}; + /* * Each outstanding request that we've passed to the lower device layers has a * ...
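The hunk's context line is where the search term matches: blkback's "reqs" module parameter. For reference, such a tunable is typically declared like this (the variable name and default are assumptions, not taken from the patch):

/* sketch: the usual shape of such a module tunable */
static unsigned int blkif_reqs = 64;
module_param_named(reqs, blkif_reqs, uint, 0);
MODULE_PARM_DESC(reqs, "Number of blkback requests to allocate");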
2018 Jan 11
0
[PATCH libdrm] nouveau: Support fence FDs
From: Thierry Reding <treding at nvidia.com> Add a new nouveau_pushbuf_kick_fence() function that takes and emits a sync fence FD. The fence FD can be waited on, or merged with other fence FDs, or passed back to the kernel as a prerequisite for a subsequent HW operation. Based heavily on work by Lauri Peltonen <lpeltonen at nvidia.com> Signed-off-by: Thierry Reding <treding at
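One of the listed uses, merging fence FDs, is plain sync-file functionality rather than anything nouveau-specific; for reference, userspace merges two fence FDs with the kernel's SYNC_IOC_MERGE ioctl:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/* Merge two fence FDs into one that signals when both have signaled. */
int merge_fences(int fd1, int fd2)
{
	struct sync_merge_data data = {
		.name = "merged",
		.fd2  = fd2,
	};

	if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0) {
		perror("SYNC_IOC_MERGE");
		return -1;
	}
	return data.fence;	/* caller owns the new FD */
}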
2018 Jan 26
10
[PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests
Hello. The current crypto_engine supports only ahash and ablkcipher requests. My first patch, which tried to add skcipher, was NACKed: it would add too many functions, and adding other algs (aead, asymmetric_key) would make the situation worse. This patchset removes all alg-specific stuff and now processes only generic crypto_async_request. The request handler function pointers are now moved out of struct
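Concretely, with this series a driver embeds a crypto_engine_ctx at the start of its tfm context and fills in the op callbacks; queueing and completion become one generic call per request type. A minimal driver-side sketch, assuming the API names this series introduces (crypto_transfer_skcipher_request_to_engine, crypto_finalize_skcipher_request); all my_* names are illustrative:

#include <crypto/engine.h>
#include <crypto/internal/skcipher.h>

struct my_tfm_ctx {
	struct crypto_engine_ctx enginectx;	/* must come first */
	/* driver-private state ... */
};

static struct crypto_engine *my_engine;	/* from crypto_engine_alloc_init() */

static int my_do_one_request(struct crypto_engine *engine, void *areq)
{
	struct skcipher_request *req =
		container_of(areq, struct skcipher_request, base);
	int err = 0;

	/* program the hardware here, then report completion */
	crypto_finalize_skcipher_request(engine, req, err);
	return 0;
}

static int my_skcipher_init(struct crypto_skcipher *tfm)
{
	struct my_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);

	ctx->enginectx.op.do_one_request = my_do_one_request;
	return 0;
}

static int my_encrypt(struct skcipher_request *req)
{
	/* queueing is one generic call, whatever the request type */
	return crypto_transfer_skcipher_request_to_engine(my_engine, req);
}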
2005 Apr 02
1
[PATCH] VMX support for MMIO/PIO in VM8086 mode
Memory-mapped and port I/O are currently broken under VMX when the partition is running in VM8086 mode. The reason is that the instruction decoding support uses 32-bit opcode/address decodes rather than 16-bit decodes. This patch fixes that. In addition, the patch adds support for decoding the "stos" instruction, because this is a frequently used way to clear MMIO areas such as the screen. As
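The fix matters because a VM8086 guest forms addresses from a 16-bit segment and a 16-bit offset; decoding the operands as flat 32-bit values resolves the wrong guest-physical address. A small illustration:

#include <stdint.h>

/* VM8086 addressing: linear = (segment << 4) + 16-bit offset.
 * A "rep stos" clearing the text screen writes through ES:DI with
 * DI as a 16-bit register; decoding the operands as flat 32-bit
 * values resolves the wrong MMIO address. */
static inline uint32_t vm86_linear(uint16_t seg, uint16_t off)
{
	return ((uint32_t)seg << 4) + off;	/* 0xB800:0x0000 -> 0xB8000 */
}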
2018 Jan 26
0
[PATCH v2 2/6] crypto: engine - Permit to enqueue all async requests
The crypto engine can currently only enqueue hash and ablkcipher requests. This patch permits it to enqueue any type of crypto_async_request. Signed-off-by: Corentin Labbe <clabbe.montjoie at gmail.com> Tested-by: Fabien Dessenne <fabien.dessenne at st.com> --- crypto/crypto_engine.c | 301 ++++++++++++++++++++++++++---------------- include/crypto/engine.h | 68 ++++++----- 2
2007 Jul 09
3
[PATCH 2/3] Virtio draft IV: the block driver
> +static void do_virtblk_request(request_queue_t *q) > +{ > + struct virtio_blk *vblk = NULL; > + struct request *req; > + struct virtblk_req *vbr; > + > + while ((req = elv_next_request(q)) != NULL) { > + vblk = req->rq_disk->private_data; > + > + vbr = mempool_alloc(vblk->pool, GFP_ATOMIC); > + if (!vbr) > + goto stop; > + > +
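The quoted loop stops right after the mempool_alloc() failure check. With the 2007-era block API, the usual recovery is to stop the queue and let the completion path restart it; a hedged sketch of how the tail of the loop plausibly continues (do_req() is an assumed helper, not confirmed by the excerpt):

		/* assumed helper: map the request's sg list and add it
		 * to the virtqueue; fails when the ring is full */
		if (!do_req(q, vblk, vbr)) {
			mempool_free(vbr, vblk->pool);
			goto stop;
		}
		blkdev_dequeue_request(req);
	}
	return;
stop:
	/* out of descriptors or mempool entries: stop the queue;
	 * the completion path restarts it */
	blk_stop_queue(q);
}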
2017 Nov 29
9
[PATCH RFC 0/4] crypto: engine - Permit to enqueue all async requests
Hello. The current crypto_engine supports only ahash and ablkcipher. My first patch, which tried to add skcipher, was NACKed: it would add too many functions, and adding other algs (aead, asymmetric_key) would make the situation worse. This patchset removes all alg-specific stuff and now processes only generic crypto_async_request. The request handler function pointers are now moved out of struct engine and
2018 Jan 03
0
[PATCH 2/6] crypto: engine - Permit to enqueue all async requests
The crypto engine can currently only enqueue hash and ablkcipher requests. This patch permits it to enqueue any type of crypto_async_request. Signed-off-by: Corentin Labbe <clabbe.montjoie at gmail.com> --- crypto/crypto_engine.c | 230 ++++++++++++++++++++++++------------------ include/crypto/engine.h | 59 +++++++------ 2 files changed, 148 insertions(+), 141 deletions(-) diff
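On the engine side, the pump loop then no longer special-cases hash vs. ablkcipher: it pulls the ops out of the tfm context of whatever crypto_async_request it dequeues. A sketch of that dispatch, assuming the crypto_engine_ctx layout from this series:

	/* sketch: generic dispatch in the engine's pump loop */
	struct crypto_engine_ctx *enginectx =
		crypto_tfm_ctx(async_req->tfm);

	if (enginectx->op.prepare_request) {
		ret = enginectx->op.prepare_request(engine, async_req);
		if (ret)
			goto req_err;
	}
	ret = enginectx->op.do_one_request(engine, async_req);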
2019 Oct 14
0
[PATCH 03/25] crypto: virtio - switch to skcipher API
...-static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm) +static int virtio_crypto_skcipher_init(struct crypto_skcipher *tfm) { - struct virtio_crypto_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request); + crypto_skcipher_set_reqsize(tfm, sizeof(struct virtio_crypto_sym_request)); ctx->tfm = tfm; - ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req; + ctx->enginectx.op.do_one_request = virtio_crypto_skcipher_crypt_req;...
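The hunk shows the mechanical part of the conversion: crypto_tfm becomes crypto_skcipher and the request size moves into crypto_skcipher_set_reqsize(). The registration side of such a conversion swaps struct crypto_alg for struct skcipher_alg; a generic sketch (values and my_* handler names are illustrative, not from the virtio driver):

static struct skcipher_alg my_alg = {
	.base.cra_name		= "cbc(aes)",
	.base.cra_driver_name	= "cbc-aes-mydriver",
	.base.cra_priority	= 150,
	.base.cra_flags		= CRYPTO_ALG_ASYNC,
	.base.cra_blocksize	= AES_BLOCK_SIZE,
	.base.cra_ctxsize	= sizeof(struct my_tfm_ctx),
	.base.cra_module	= THIS_MODULE,

	.init		= my_skcipher_init,
	.setkey		= my_setkey,
	.encrypt	= my_encrypt,
	.decrypt	= my_decrypt,
	.min_keysize	= AES_MIN_KEY_SIZE,
	.max_keysize	= AES_MAX_KEY_SIZE,
	.ivsize		= AES_BLOCK_SIZE,
};

/* registered with crypto_register_skcipher(&my_alg)
 * instead of crypto_register_alg() */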
2019 Oct 24
0
[PATCH v2 03/27] crypto: virtio - switch to skcipher API
...-static int virtio_crypto_ablkcipher_init(struct crypto_tfm *tfm) +static int virtio_crypto_skcipher_init(struct crypto_skcipher *tfm) { - struct virtio_crypto_ablkcipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct virtio_crypto_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm); - tfm->crt_ablkcipher.reqsize = sizeof(struct virtio_crypto_sym_request); + crypto_skcipher_set_reqsize(tfm, sizeof(struct virtio_crypto_sym_request)); ctx->tfm = tfm; - ctx->enginectx.op.do_one_request = virtio_crypto_ablkcipher_crypt_req; + ctx->enginectx.op.do_one_request = virtio_crypto_skcipher_crypt_req;...
2012 Nov 19
1
[PATCH] vhost-blk: Add vhost-blk support v5
...ruct vhost_blk *blk; + + struct iovec *iov; + int iov_nr; + + struct bio **bio; + atomic_t bio_nr; + + struct iovec status[1]; + + sector_t sector; + int write; + u16 head; + long len; +}; + +struct vhost_blk { + struct task_struct *host_kick; + struct iovec iov[UIO_MAXIOV]; + struct vhost_blk_req *reqs; + struct vhost_virtqueue vq; + struct llist_head llhead; + struct vhost_dev dev; + u16 reqs_nr; + int index; +}; + +static int move_iovec(struct iovec *from, struct iovec *to, + size_t len, int iov_count) +{ + int seg = 0; + size_t size; + + while (len && seg < iov_count) { + if...
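move_iovec() is cut off mid-loop. From the prologue and its arguments, it evidently walks 'from', copying segments into 'to' until 'len' bytes are covered, and returns the number of segments used; a hedged reconstruction:

static int move_iovec(struct iovec *from, struct iovec *to,
		      size_t len, int iov_count)
{
	int seg = 0;
	size_t size;

	while (len && seg < iov_count) {
		size = min(from->iov_len, len);
		to->iov_base = from->iov_base;
		to->iov_len  = size;
		len -= size;
		++from;
		++to;
		++seg;
	}
	return seg;
}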
2018 Jan 03
11
[PATCH 0/6] crypto: engine - Permit to enqueue all async requests
Hello. The current crypto_engine supports only ahash and ablkcipher requests. My first patch, which tried to add skcipher, was NACKed: it would add too many functions, and adding other algs (aead, asymmetric_key) would make the situation worse. This patchset removes all alg-specific stuff and now processes only generic crypto_async_request. The request handler function pointers are now moved out of struct
2012 Oct 15
2
[PATCH 1/1] vhost-blk: Add vhost-blk support v4
...ode; + struct req_page_list *pl; + struct vhost_blk *blk; + + struct iovec *iov; + int iov_nr; + + struct bio **bio; + atomic_t bio_nr; + + sector_t sector; + int write; + u16 head; + long len; + + u8 __user *status; +}; + +struct vhost_blk { + struct task_struct *host_kick; + struct vhost_blk_req *reqs; + struct vhost_virtqueue vq; + struct llist_head llhead; + struct vhost_dev dev; + u16 reqs_nr; + int index; +}; + +static inline int iov_num_pages(struct iovec *iov) +{ + return (PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) - + ((unsigned long)iov->iov_base & PAGE_M...
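iov_num_pages() is truncated mid-expression; the standard form of this computation aligns the end of the buffer up to a page boundary, masks the start down, and shifts by PAGE_SHIFT. The cut-off helper most plausibly completes as:

static inline int iov_num_pages(struct iovec *iov)
{
	return (PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) -
	       ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT;
}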