search for: nrings

Displaying 9 results for "nrings".

2016 Jun 22
2
[PATCH net-next V2] tun: introduce tx skb ring
...eue(r, queue, size, gfp, destroy);
+	spin_unlock_irqrestore(&(r)->producer_lock, flags);
 	kfree(old);
@@ -387,6 +398,49 @@ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
 	return 0;
 }
+static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings,
+					   int size,
+					   gfp_t gfp, void (*destroy)(void *))
+{
+	unsigned long flags;
+	void ***queues;
+	int i;
+
+	queues = kmalloc(nrings * sizeof *queues, gfp);
+	if (!queues)
+		goto noqueues;
+
+	for (i = 0; i < nrings; ++i) {
+		queues[i] = __ptr_ring_init_queue_alloc(size, gfp);
+...
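For context, a minimal caller sketch (not part of the patch; it only assumes the signature visible in the excerpt): all new queues are allocated before any ring is swapped, so a group of rings is resized as a unit.

    /* Hypothetical caller: grow every per-queue ring to new_size in one call.
     * 'destroy' is invoked for entries that do not fit in the resized ring. */
    static int example_resize_all(struct ptr_ring **rings, int nrings,
                                  int new_size, void (*destroy)(void *))
    {
            return ptr_ring_resize_multiple(rings, nrings, new_size,
                                            GFP_KERNEL, destroy);
    }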
2016 Jun 28
1
[PATCH net-next V2] tun: introduce tx skb ring
...t skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
+static inline int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
 {
 	return ptr_ring_resize(&a->ring, size, gfp, __skb_array_destroy_skb);
 }
+static inline int skb_array_resize_multiple(struct skb_array **rings, int nrings,
+					    int size, gfp_t gfp)
+{
+	BUILD_BUG_ON(offsetof(struct skb_array, ring));
+	return ptr_ring_resize_multiple((struct ptr_ring **)rings, nrings, size, gfp,
+					__skb_array_destroy_skb);
+}
+
 static inline void skb_array_cleanup(struct skb_array *a)
 {
 	ptr_ring_cleanup(&a->ring, __skb_a...
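A hedged usage sketch for the wrapper above (function and variable names are illustrative, not from the series): a multiqueue driver collects one skb_array pointer per queue and resizes them all together.

    /* Illustrative only: resize the per-queue skb arrays of a multiqueue
     * device to a new length in a single call. The caller fills 'arrays'
     * with one skb_array pointer per queue. */
    static int example_queues_resize(struct skb_array **arrays, int n_queues,
                                     int new_len)
    {
            return skb_array_resize_multiple(arrays, n_queues, new_len,
                                             GFP_KERNEL);
    }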
2013 Mar 15
2
[PATCH 0/2] remoteproc : support for host virtio
From: Erwan Yvin <erwan.yvin at stericsson.com> This driver depends on Rusty's new host virtio ring implementation, so this patch-set is based on the vringh branch in Rusty's git, with the vringh wrapper patch on top. They do not apply cleanly on top of the remoteproc virtio config patches from Sjur, but they merge fine. CAIF will use this new host virtio ring implementation. Ido,
2016 Jun 30
9
[PATCH net-next V3 0/6] switch to use tx skb array in tun
Hi all: This series switches tun to use an skb array. This eliminates the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
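As a rough illustration of the conversion described above (names are hypothetical, not taken from the series): the transmit side produces into a per-queue skb array and the read side consumes from it, so the two sides no longer contend on the sk_receive_queue lock.

    /* Hypothetical sketch: skb_array keeps separate producer and consumer
     * locks, so the xmit and read paths do not share a queue spinlock. */
    static int example_xmit(struct skb_array *tx_array, struct sk_buff *skb)
    {
            /* Non-zero return means the ring is full; the caller drops the skb. */
            return skb_array_produce(tx_array, skb);
    }

    static struct sk_buff *example_read(struct skb_array *tx_array)
    {
            /* Returns NULL when the ring is empty. */
            return skb_array_consume(tx_array);
    }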
2016 Jun 30
10
[PATCH net-next V4 0/6] switch to use tx skb array in tun
Hi all: This series switches tun to use an skb array. This eliminates the spinlock contention between producer and consumer. The conversion was straightforward: just introduce a tx skb array and use it instead of sk_receive_queue. A minor issue is keeping the tx_queue_len behaviour, since tun used to use it for the length of sk_receive_queue. This is done through: - add the
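One way to preserve tx_queue_len semantics with a fixed-size ring (a sketch under assumptions, not necessarily what the series does, since the list above is cut off) is to resize the ring whenever the queue length changes:

    /* Hypothetical: react to a tx_queue_len change by resizing the ring so
     * the configured length still bounds the number of queued packets. */
    static int example_change_tx_queue_len(struct net_device *dev,
                                           struct skb_array *tx_array)
    {
            return skb_array_resize(tx_array, dev->tx_queue_len, GFP_KERNEL);
    }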
2013 Jan 18
0
[RFC] remoteproc: Add support for host-side (reversed) vrings
...EXPORT_SYMBOL(rproc_vq_interrupt);
@@ -149,14 +153,21 @@ static int rproc_virtio_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 				 const char *names[])
 {
 	struct rproc *rproc = vdev_to_rproc(vdev);
-	int i, ret;
+	struct rproc_vdev *rvdev = vdev_to_rvdev(vdev);
+	int rng, id, ret, nrings = ARRAY_SIZE(rvdev->vring);
+
+	for (id = 0, rng = 0; rng < nrings; ++rng) {
+		struct rproc_vring *rvring = &rvdev->vring[rng];
+		/* Skip the host side rings */
+		if (rvring->vringh)
+			continue;
-	for (i = 0; i < nvqs; ++i) {
-		vqs[i] = rp_find_vq(vdev, i, callbacks[i], n...
2016 Jun 15
7
[PATCH net-next V2] tun: introduce tx skb ring
We used to queue tx packets in sk_receive_queue; this is less efficient since it requires spinlocks to synchronize between producer and consumer. This patch addresses this by: - introducing a new mode, enabled only with IFF_TX_ARRAY set, which switches from sk_receive_queue to a fixed-size skb array with 256 entries. - introducing a new proto_ops peek_len which was
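The excerpt is cut off before it describes peek_len; a hedged sketch of what such a hook could look like for a ring-backed socket (struct example_file and its fields are hypothetical, and skb_array_peek_len() is assumed to report the head packet's length without consuming it):

    /* Illustrative peek_len implementation: lets a consumer such as vhost
     * learn the size of the next packet before deciding how to receive it. */
    static int example_peek_len(struct socket *sock)
    {
            struct example_file *f =
                    container_of(sock, struct example_file, socket);

            return skb_array_peek_len(&f->tx_array);
    }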
2013 Feb 12
3
[PATCHv2 vringh 0/3] Introduce CAIF Virtio driver
From: Sjur Brændeland <sjur.brandeland at stericsson.com> This driver depends on Rusty's new host virtio ring implementation, so this patch-set is based on the vringh branch in Rusty's git. Changes since V1: - Use the new iov helper functions and simplify iov handling. However, this triggers compile warnings, as it takes struct iov while the kernel API uses struct kiov. - Introduced
2013 Feb 10
3
[PATCH vringh 0/2] Introduce CAIF Virtio driver
From: Sjur Brændeland <sjur.brandeland at stericsson.com> This patch-set introduces the CAIF Virtio Link layer driver. This driver depends on Rusty's new host virtio ring implementation, so this patch-set is based on the vringh branch in Rusty's git. Regards, Sjur cc: Rusty Russell <rusty at rustcorp.com.au> cc: Ohad Ben-Cohen <ohad at wizery.com> cc: David S. Miller