search for: sg_next

Displaying 16 results from an estimated 217 matches for "sg_next".

2023 Mar 21
1
[PATCH vhost v3 01/11] virtio_ring: split: separate dma codes
...gs(struct vring_virtqueue *vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs) +{ + struct scatterlist *sg; + unsigned int n; + + if (!vq->use_dma_api) + return; + + for (n = 0; n < out_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + + dma_unmap_page(vring_dma_dev(vq), sg->dma_address, + sg->length, DMA_TO_DEVICE); + } + } + + for (; n < (out_sgs + in_sgs); n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + +...
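For readers of the excerpt: the helper being split out walks each sgs[n] chain with sg_next() until it returns NULL, undoing the DMA mapping of every element. A minimal sketch of that unmap pattern, with simplified names (not the exact upstream helper, which also returns early when the DMA API is not in use):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    /* Sketch only: walk the out and in scatterlists and undo the DMA mapping.
     * A zero dma_address is treated as "not mapped yet", mirroring the early
     * return in the excerpt above. */
    static void sketch_unmap_sgs(struct device *dev, struct scatterlist *sgs[],
                                 unsigned int out_sgs, unsigned int in_sgs)
    {
            struct scatterlist *sg;
            unsigned int n;

            for (n = 0; n < out_sgs + in_sgs; n++) {
                    enum dma_data_direction dir =
                            n < out_sgs ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

                    for (sg = sgs[n]; sg; sg = sg_next(sg)) {
                            if (!sg->dma_address)
                                    return;
                            dma_unmap_page(dev, sg->dma_address,
                                           sg->length, dir);
                    }
            }
    }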
2013 Jun 24
3
[konrad.wilk@oracle.com: [PATCH] drm/i915: make compact dma scatter lists creation work with SWIOTLB backend.]
...801,14 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj) gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD; gfp &= ~(__GFP_IO | __GFP_WAIT); } - +#ifdef CONFIG_SWIOTLB + if (swiotlb_nr_tbl()) { + st->nents++; + sg_set_page(sg, page, PAGE_SIZE, 0); + sg = sg_next(sg); + continue; + } +#endif if (!i || page_to_pfn(page) != last_pfn + 1) { if (i) sg = sg_next(sg); @@ -1812,8 +1819,10 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj) } last_pfn = page_to_pfn(page); } - - sg_mark_end(sg); +#ifdef CONFIG_SWIOTLB + if (!swio...
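The hunk above makes the page-gathering loop skip its usual optimisation when SWIOTLB is active: normally i915 merges physically contiguous pages into one scatterlist entry, but with a bounce-buffering backend it keeps one page per entry. A rough sketch of the coalescing idea, using hypothetical names rather than the i915 code:

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    /* Sketch only: append @page to the list, extending the current entry when
     * the page is physically contiguous with the previous one, otherwise
     * advancing to a fresh entry with sg_next(). Returns the entry in use. */
    static struct scatterlist *sketch_append_page(struct scatterlist *sg,
                                                  struct page *page,
                                                  unsigned long *last_pfn,
                                                  unsigned int *nents)
    {
            if (*nents && page_to_pfn(page) == *last_pfn + 1) {
                    sg->length += PAGE_SIZE;        /* contiguous: grow entry */
            } else {
                    if (*nents)
                            sg = sg_next(sg);       /* start a new entry */
                    (*nents)++;
                    sg_set_page(sg, page, PAGE_SIZE, 0);
            }
            *last_pfn = page_to_pfn(page);
            return sg;
    }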
2023 Mar 02
1
[PATCH vhost v1 02/12] virtio_ring: split: separate DMA codes
...atic int virtqueue_map_sgs(struct vring_virtqueue *vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs) +{ + struct scatterlist *sg; + unsigned int n; + + for (n = 0; n < out_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE); + + if (vring_mapping_error(vq, addr)) + return -ENOMEM; + + sg->dma_address = addr; + } + } + + for (; n < (out_sgs + in_sgs); n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + dma_addr_t addr = vring_map_one...
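The sgs[] array these helpers walk is the one drivers hand to virtqueue_add_sgs(): out_sgs readable scatterlists followed by in_sgs writable ones, each a well-formed, terminated list. A minimal caller sketch (buffer names are illustrative):

    #include <linux/gfp.h>
    #include <linux/scatterlist.h>
    #include <linux/virtio.h>

    /* Sketch only: queue one out buffer (a request header) and one in buffer
     * (a response) on @vq. */
    static int sketch_queue_request(struct virtqueue *vq,
                                    void *hdr, unsigned int hdr_len,
                                    void *resp, unsigned int resp_len,
                                    void *cookie)
    {
            struct scatterlist hdr_sg, resp_sg;
            struct scatterlist *sgs[] = { &hdr_sg, &resp_sg };

            sg_init_one(&hdr_sg, hdr, hdr_len);
            sg_init_one(&resp_sg, resp, resp_len);

            /* 1 out_sg followed by 1 in_sg */
            return virtqueue_add_sgs(vq, sgs, 1, 1, cookie, GFP_ATOMIC);
    }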
2014 Sep 01
2
[PATCH 1/3] virtio_ring: Remove sg_next indirection
...1]); + /* chain first in list head */ first->private = (unsigned long)list; err = virtqueue_add_inbuf(rq->vq, rq->sg, MAX_SKB_FRAGS + 2, Subject: virtio_ring: assume sgs are always well-formed. We used to have several callers which just used arrays. They're gone, so we can use sg_next() everywhere, simplifying the code. Before: gcc 4.8.2: virtio_blk: stack used = 392 gcc 4.6.4: virtio_blk: stack used = 528 After: gcc 4.8.2: virtio_blk: stack used = 392 gcc 4.6.4: virtio_blk: stack used = 480 vring_bench before: 936153354-967745359(9.44739e+08+/-6.1e+06)ns vring_bench aft...
2014 Sep 01
0
[PATCH 1/3] virtio_ring: Remove sg_next indirection
...ually uses a weirdly-formed sg now, > and that's virtio_net. It's pretty trivial to fix. > > However, vring_bench drops 15% when we do this. There's a larger > question as to how much difference that makes in Real Life, of course. > I'll measure that today. Weird. sg_next shouldn't be nearly that slow. Weird. > > Here are my two patches, back-to-back (it came out of an earlier > concern about reducing stack usage, hence the stack measurements). > I like your version better than mine, except that I suspect that your version will blow up for the s...
2023 Feb 20
2
[PATCH vhost 01/10] virtio_ring: split: refactor virtqueue_add_split() for premapped
...struct vring_desc *desc; > - unsigned int i, n, avail, descs_used, prev, err_idx; > - int head; > - bool indirect; > + unsigned int n; > > - START_USE(vq); > + for (n = 0; n < out_sgs; n++) { > + for (sg = sgs[n]; sg; sg = sg_next(sg)) { > + dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE); > + > + if (vring_mapping_error(vq, addr)) > + return -ENOMEM; > + > + sg->dma_address = addr; > +...
2014 Aug 26
0
[PATCH 1/3] virtio_ring: Remove sg_next indirection
...tio_ring.c b/drivers/virtio/virtio_ring.c index 4d08f45a9c29..d356a701c9c2 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -99,25 +99,9 @@ struct vring_virtqueue #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq) -static inline struct scatterlist *sg_next_chained(struct scatterlist *sg, - unsigned int *count) -{ - return sg_next(sg); -} - -static inline struct scatterlist *sg_next_arr(struct scatterlist *sg, - unsigned int *count) -{ - if (--(*count) == 0) - return NULL; - return sg + 1; -} - /* Set up an indirect table of descrip...
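The deleted helpers show what the indirection was for: virtqueue_add() used to take a "next" callback so it could walk either a properly terminated chain or a bare array plus element count. Because the callback is out of line it also defeats inlining, which is the slowdown discussed in the thread above. A sketch of the two walkers being removed:

    #include <linux/scatterlist.h>

    /* Sketch of the removed indirection: one walker per sg flavour.
     * Once every caller passes well-formed lists, plain sg_next() replaces both. */
    static struct scatterlist *next_chained(struct scatterlist *sg,
                                            unsigned int *count)
    {
            return sg_next(sg);             /* terminated chain: count unused */
    }

    static struct scatterlist *next_arr(struct scatterlist *sg,
                                        unsigned int *count)
    {
            if (--(*count) == 0)            /* bare array: stop after count entries */
                    return NULL;
            return sg + 1;
    }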
2014 Sep 03
0
[PATCH 2/3] virtio_ring: assume sgs are always well-formed.
We used to have several callers which just used arrays. They're gone, so we can use sg_next() everywhere, simplifying the code. On my laptop, this slowed down vring_bench by 15%: vring_bench before: 936153354-967745359(9.44739e+08+/-6.1e+06)ns vring_bench after: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns However, a more realistic test using pktgen on an AMD FX(tm)-8320 saw a few p...
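"Well-formed" means the list is initialized and terminated, so a walk with sg_next() stops by itself instead of relying on a separate element count. A minimal sketch of building such a list (buffer names are illustrative, not the virtio code):

    #include <linux/scatterlist.h>

    /* Sketch only: build a three-entry list that terminates cleanly. */
    static void sketch_build_sgs(struct scatterlist sg[3],
                                 void *hdr, unsigned int hdr_len,
                                 void *data, unsigned int data_len,
                                 void *status, unsigned int status_len)
    {
            sg_init_table(sg, 3);           /* zeroes entries, marks sg[2] as last */
            sg_set_buf(&sg[0], hdr, hdr_len);
            sg_set_buf(&sg[1], data, data_len);
            sg_set_buf(&sg[2], status, status_len);
            /* sg_next(&sg[2]) now returns NULL, ending any sg_next() walk. */
    }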
2023 May 17
2
[PATCH vhost v9 01/12] virtio_ring: put mapping error check in vring_map_one_sg
..._mapping_error(vring_dma_dev(vq), *addr)) + return -ENOMEM; + + return 0; } static dma_addr_t vring_map_single(const struct vring_virtqueue *vq, @@ -588,8 +593,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq, for (n = 0; n < out_sgs; n++) { for (sg = sgs[n]; sg; sg = sg_next(sg)) { - dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE); - if (vring_mapping_error(vq, addr)) + dma_addr_t addr; + + if (vring_map_one_sg(vq, sg, DMA_TO_DEVICE, &addr)) goto unmap_release; prev = i; @@ -603,8 +609,9 @@ static inline int virtqueue_add_split(struct v...
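The refactor moves the mapping-error check inside the helper: instead of returning a dma_addr_t that every caller must pass to vring_mapping_error(), the function returns an error code and hands the address back through a pointer. A sketch of the resulting shape, with hypothetical names rather than the actual vring helper:

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/scatterlist.h>

    /* Sketch only: map one sg entry and report failure via the return code,
     * so callers just do "if (sketch_map_one_sg(...)) goto unmap_release;". */
    static int sketch_map_one_sg(struct device *dev, struct scatterlist *sg,
                                 enum dma_data_direction dir, dma_addr_t *addr)
    {
            *addr = dma_map_page(dev, sg_page(sg), sg->offset, sg->length, dir);
            if (dma_mapping_error(dev, *addr))
                    return -ENOMEM;
            return 0;
    }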
2023 Mar 22
1
[PATCH vhost v4 01/11] virtio_ring: split: separate dma codes
...gs(struct vring_virtqueue *vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs) +{ + struct scatterlist *sg; + unsigned int n; + + if (!vq->use_dma_api) + return; + + for (n = 0; n < out_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + + dma_unmap_page(vring_dma_dev(vq), sg->dma_address, + sg->length, DMA_TO_DEVICE); + } + } + + for (; n < (out_sgs + in_sgs); n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + +...
2023 Apr 25
1
[PATCH vhost v7 01/11] virtio_ring: split: separate dma codes
...gs(struct vring_virtqueue *vq, + struct scatterlist *sgs[], + unsigned int total_sg, + unsigned int out_sgs, + unsigned int in_sgs) +{ + struct scatterlist *sg; + unsigned int n; + + if (!vq->use_dma_api) + return; + + for (n = 0; n < out_sgs; n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + + dma_unmap_page(vring_dma_dev(vq), sg->dma_address, + sg->length, DMA_TO_DEVICE); + } + } + + for (; n < (out_sgs + in_sgs); n++) { + for (sg = sgs[n]; sg; sg = sg_next(sg)) { + if (!sg->dma_address) + return; + +...
2014 Sep 01
1
[PATCH 1/3] virtio_ring: Remove sg_next indirection
...apply it anyway? It's the virtio_ring changes that we need to worry about. > > However, vring_bench drops 15% when we do this. There's a larger > > question as to how much difference that makes in Real Life, of course. > > I'll measure that today. > > Weird. sg_next shouldn't be nearly that slow. Weird. I think that's down to the fact that it's out of line, so it prevents inlining of the caller. > > > > Here are my two patches, back-to-back (it came out of an earlier > > concern about reducing stack usage, hence the stack mea...
2013 Mar 06
7
[PATCH 0/6] virtio_add_buf replacement.
OK, so I've spent a few days benchmarking. Turns out 80% of virtio_add_buf cases are uni-directional (including the always-performance-sensitive networking code), and that gets no performance penalty (though tests with real networking would be appreciated!). I'm not reposting all the "convert driver to virtio_add_outbuf()" patches: just the scsi one which I didn't have
2014 Sep 03
8
[PATCH 0/3] virtio: simplify virtio_ring.
I resurrected these patches after prompting from Andy Lutomirski's recent patches. I put them on the back-burner because vring_bench had a 15% slowdown on my laptop: pktgen testing revealed a speedup, if anything, so I've cleaned them up. Rusty Russell (3): virtio_net: pass well-formed sgs to virtqueue_add_*() virtio_ring: assume sgs are always well-formed. virtio_ring: unify
2020 May 25
0
[PATCH 1/2] crypto: virtio: fix src/dst scatterlist calculation
..., Longpeng(Mike) wrote: > The system will crash when we insmod crypto/tcrypt.ko with mode=38. > > Usually the next entry of one sg will be @sg@ + 1, but if this sg element > is part of a chained scatterlist, it could jump to the start of a new > scatterlist array. Let's fix it by using sg_next() when calculating the src/dst > scatterlists. > > BTW I added a check for sg_nents_for_len()'s return value since > the sg_nents_for_len() function could fail. > > Cc: Gonglei <arei.gonglei at huawei.com> > Cc: Herbert Xu <herbert at gondor.apana.org.au> > Cc: "Mic...
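The underlying issue: in a chained scatterlist, the slot after the last data entry of one array is a link to the next array, so "sg + 1" lands on the link while sg_next() follows it to the real next entry. A small standalone sketch of that difference (array sizes are illustrative):

    #include <linux/scatterlist.h>

    /* Sketch only: chain two scatterlist arrays. Walking with sg_next()
     * crosses from a[] into b[]; walking with "sg + 1" would step into the
     * chain-link slot a[2] instead of the real entry b[0]. */
    static unsigned int sketch_count_entries(struct scatterlist a[3],
                                             struct scatterlist b[2])
    {
            struct scatterlist *sg;
            unsigned int nents = 0;

            sg_init_table(a, 3);
            sg_init_table(b, 2);
            /* ... fill a[0], a[1], b[0], b[1] with sg_set_buf()/sg_set_page() ... */

            sg_chain(a, 3, b);              /* a[2] becomes a link, not data */

            for (sg = a; sg; sg = sg_next(sg))
                    nents++;                /* visits a[0], a[1], b[0], b[1] */

            return nents;                   /* 4 */
    }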
2020 Jun 16
0
[PATCH 5.7 095/163] crypto: virtio: Fix src/dst scatterlist calculation in __virtio_crypto_skcipher_do_req()
...2 upstream. The system will crash when users insmod crypto/tcrypt.ko with mode=38 (testing "cts(cbc(aes))"). Usually the next entry of one sg will be @sg@ + 1, but if this sg element is part of a chained scatterlist, it could jump to the start of a new scatterlist array. Fix it by using sg_next() when calculating the src/dst scatterlists. Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver") Reported-by: LABBE Corentin <clabbe at baylibre.com> Cc: Herbert Xu <herbert at gondor.apana.org.au> Cc: "Michael S. Tsirkin" <mst at redhat.com> Cc: Jason Wang &...