search for: nseg

Displaying 13 results from an estimated 44 matches for "nseg".

2013 Jun 21
5
[PATCH 3/4] xen-blkback: check the number of iovecs before allocating a bios
...ck.c index d622d86..876116b 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -1236,7 +1236,8 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif, seg[i].nsec << 9, seg[i].offset) == 0)) { - bio = bio_alloc(GFP_KERNEL, nseg-i); + int nr_iovecs = (nseg-i) > BIO_MAX_PAGES ? BIO_MAX_PAGES : (nseg-i); + bio = bio_alloc(GFP_KERNEL, nr_iovecs); if (unlikely(bio == NULL)) goto fail_put_bio; ...
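The fix clamps the iovec count to what a single bio can hold; any leftover segments are picked up by the next bio the surrounding loop allocates. A minimal sketch of the same clamp using the kernel's min_t() helper, equivalent to the ternary in the patch (the enclosing loop is elided):

    /* bio_alloc() cannot hold more than BIO_MAX_PAGES iovecs, so cap
     * the allocation at that; the remaining (nseg - i) segments will
     * be carried by subsequent bios. */
    int nr_iovecs = min_t(int, nseg - i, BIO_MAX_PAGES);

    bio = bio_alloc(GFP_KERNEL, nr_iovecs);
    if (unlikely(bio == NULL))
            goto fail_put_bio;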
2006 Aug 11
2
Colour-coding intervals on a line
Hi, This is a simple version of something that I am trying to do. If I can sort the problem in this basic form, I figure I should be able to sort it for the program I'm writing (which would take longer to explain). I need to know if there is any way of using different colours for different intervals of a line on a graph. E.g., if I plot the line y=x for x=1:10, and split this line into 106 intervals
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...*req, + struct blkif_request_segment *seg_req, struct pending_req *pending_req, struct seg_buf seg[]) { - struct gnttab_map_grant_ref map[BLKIF_MAX_SEGMENTS_PER_REQUEST]; + struct gnttab_map_grant_ref *map = pending_req->map; int i; int nseg = req->u.rw.nr_segments; int ret = 0; @@ -362,7 +391,7 @@ static int xen_blkbk_map(struct blkif_request *req, if (pending_req->operation != BLKIF_OP_READ) flags |= GNTMAP_readonly; gnttab_set_map_op(&map[i], vaddr(pending_req, i), flags, -...
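The core of the change is visible in the excerpt: the grant-map array moves off the stack and into the per-request structure, so the segment count is no longer bounded by BLKIF_MAX_SEGMENTS_PER_REQUEST. A sketch of the shape of that change (the map field and function arguments follow the excerpt; where the allocation actually happens is an assumption):

    struct pending_req {
            /* ... existing fields ... */
            struct gnttab_map_grant_ref *map;  /* one entry per segment,
                                                  allocated with the request
                                                  rather than on the stack */
    };

    static int xen_blkbk_map(struct blkif_request *req,
                             struct blkif_request_segment *seg_req,
                             struct pending_req *pending_req,
                             struct seg_buf seg[])
    {
            struct gnttab_map_grant_ref *map = pending_req->map;
            int nseg = req->u.rw.nr_segments;

            /* ... gnttab_set_map_op() per segment, as before ... */
            return 0;
    }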
2006 Aug 24
1
block ring interface: nr_segments = 0 results in BLKIF_RSP_ERROR
I am currently developing a blkfront.c for a custom OS over Xen 3.0.2-2. Typical I/O is working; however, I ran into an error while testing a corner case. On standard I/O, where { 1 <= nr_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST }, blkif_int()'s bret->status returns BLKIF_RSP_OKAY. Yet when { nr_segments == 0 }, blkif_int()'s bret->status is non-zero. (Yes
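Since the backend treats a segment count outside the valid range as an invalid request, a custom frontend can reject this corner case before the request ever reaches the ring. A hedged sketch (the guard placement is an assumption, and the exact field layout of blkif_request varies between interface versions):

    /* The backend answers BLKIF_RSP_ERROR for nr_segments == 0, so
     * refuse to queue such a request in the first place. */
    if (req->nr_segments == 0 ||
        req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST)
            return -EINVAL;     /* never put it on the ring */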
2012 Jul 13
5
[PATCH V3 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance. Fio test shows the bio-based IO path gives the following performance improvement: 1) Ramdisk device With bio-based IO path, sequential read/write, random read/write IOPS boost : 28%, 24%, 21%, 16% Latency improvement: 32%, 17%, 21%, 16% 2) Fusion IO device With bio-based IO path, sequential
2012 Jun 18
13
[PATCH v2 0/3] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance. Fio test shows it gives a 28%, 24%, 21%, 16% IOPS boost and 32%, 17%, 21%, 16% latency improvement for sequential read/write and random read/write respectively. Asias He (3): block: Introduce __blk_segment_map_sg() helper block: Add blk_bio_map_sg() helper virtio-blk: Add bio-based IO path for virtio-blk
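The two helpers named here split segment mapping out of the request path: blk_bio_map_sg() fills a scatterlist straight from a bio, which is what lets the bio-based path in virtio-blk bypass struct request entirely. A hedged usage sketch (signature as introduced by this series):

    /* Map one bio's segments onto a scatterlist, merging physically
     * contiguous bvecs where the queue limits allow; returns the
     * number of sg entries actually used. */
    static int map_bio_to_sg(struct request_queue *q, struct bio *bio,
                             struct scatterlist *sgl)
    {
            return blk_bio_map_sg(q, bio, sgl);
    }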
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
...t16_t mmap_idx = pending_req->mem_idx; + struct mm_struct *mm; if (blkif->dev_num < 0 || blkif->dev_num > MAX_TAP_DEV) goto fail_response; @@ -1416,6 +1434,9 @@ static void dispatch_rw_block_io(blkif_t pending_req->status = BLKIF_RSP_OKAY; pending_req->nr_pages = nseg; op = 0; + mm = info->vma->vm_mm; + if (!xen_feature(XENFEAT_auto_translated_physmap)) + down_write(&mm->mmap_sem); for (i = 0; i < nseg; i++) { unsigned long uvaddr; unsigned long kvaddr; @@ -1434,9 +1455,9 @@ static void dispatch_rw_block_io(blkif_t if (!xen_featur...
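The pattern the patch enforces: every modification of the foreign vma happens between down_write() and up_write() on the owning mm's mmap_sem. Condensed from the excerpt (the matching up_write() is implied by the subject line rather than visible in the truncated hunk):

    struct mm_struct *mm = info->vma->vm_mm;

    if (!xen_feature(XENFEAT_auto_translated_physmap))
            down_write(&mm->mmap_sem);

    for (i = 0; i < nseg; i++) {
            /* remap the granted frames into the user vma; this is
             * what requires mmap_sem to be held for writing */
    }

    if (!xen_feature(XENFEAT_auto_translated_physmap))
            up_write(&mm->mmap_sem);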
2005 Nov 06
2
Bug in use of grant tables in blkback.c error path?
In dispatch_rw_block_io after a call to HYPERVISOR_grant_table_op, there is the following code which calls fast_flush_area and breaks out of the loop early if one of the handles returned from HYPERVISOR_grant_table_op is negative: for (i = 0; i < nseg; i++) { if (unlikely(map[i].handle < 0)) { DPRINTK("invalid buffer -- could not remap it\n"); fast_flush_area(pending_idx, nseg); goto bad_descriptor; } phys_to_machine_mapping[__pa(MMAP_VADDR( pending_idx, i)) >> PAGE_SHIFT] = FOREIGN_FRAME(map[i].dev_bus_a...
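Restored to readable form, the error path in question looks like the following (the field truncated as "dev_bus_a..." in the excerpt is presumably dev_bus_addr, the bus address returned in struct gnttab_map_grant_ref; the shift is a reconstruction on that assumption):

    for (i = 0; i < nseg; i++) {
            if (unlikely(map[i].handle < 0)) {
                    DPRINTK("invalid buffer -- could not remap it\n");
                    /* unmap everything mapped so far before bailing */
                    fast_flush_area(pending_idx, nseg);
                    goto bad_descriptor;
            }
            phys_to_machine_mapping[__pa(MMAP_VADDR(pending_idx, i))
                                    >> PAGE_SHIFT] =
                    FOREIGN_FRAME(map[i].dev_bus_addr >> PAGE_SHIFT);
    }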
2012 Aug 02
9
[PATCH V5 0/4] Improve virtio-blk performance
Hi folks, This version added REQ_FLUSH and REQ_FUA support as suggested by Christoph and rebased against the latest Linus tree. Jens, could you please consider picking up the dependencies 1/4 and 2/4 in your tree. Thanks! This patchset implements a bio-based IO path for virtio-blk to improve performance. Fio test shows the bio-based IO path gives the following performance improvement: 1) Ramdisk
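For orientation, REQ_FLUSH and REQ_FUA in a bio-based path come down to inspecting bio->bi_rw: a flush is an explicit cache flush issued before any data, and FUA asks that the written data be durable when the write completes. A rough sketch under those assumptions; the helper names are hypothetical, not the series' actual functions:

    /* Sketch only: a device without a FUA bit can emulate REQ_FUA by
     * issuing a cache flush after the data write completes. */
    if (bio->bi_rw & REQ_FLUSH)
            virtblk_bio_send_flush(vblk, bio);           /* flush, then data */
    else if (bio->bi_rw & REQ_FUA)
            virtblk_bio_send_data_then_flush(vblk, bio); /* data, then flush */
    else
            virtblk_bio_send_data(vblk, bio);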
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend to process the REQ_FUA request as well. Prior to these patches it only did REQ_FLUSH. There is also a bug-fix for the logic of how barriers/flushes were handled. The patches are based on a branch which also has 'feature-discard' patches, so they won't apply natively on top of 3.1-rc5. Please review and
2013 Mar 27
0
[PATCH 04/22] block: Convert bio_for_each_segment() to bvec_iter
...ue *q, struct bio *bio, return 0; } -static void +static inline void __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, - struct scatterlist *sglist, struct bio_vec **bvprv, + struct scatterlist *sglist, struct bio_vec *bvprv, struct scatterlist **sg, int *nsegs, int *cluster) { int nbytes = bvec->bv_len; - if (*bvprv && *cluster) { + if (*sg && *cluster) { if ((*sg)->length + nbytes > queue_max_segment_size(q)) goto new_segment; - if (!BIOVEC_PHYS_MERGEABLE(*bvprv, bvec)) + if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec...
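The interesting detail in this hunk is the merge test at the top of __blk_segment_map_sg(): after the conversion, "is there a previous segment to merge into" is keyed off *sg being non-NULL instead of a separately tracked bvprv pointer, and bvprv is passed by value. Reassembled from the excerpt (the third boundary check is an assumption based on the block layer's usual merge rules):

    if (*sg && *cluster) {
            if ((*sg)->length + nbytes > queue_max_segment_size(q))
                    goto new_segment;
            if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
                    goto new_segment;
            if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec))
                    goto new_segment;

            /* extend the current scatterlist entry instead of
             * starting a new one */
            (*sg)->length += nbytes;
    }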
2012 Jun 13
4
[PATCH RFC 0/2] Improve virtio-blk performance
This patchset implements a bio-based IO path for virtio-blk to improve performance. Fio test shows it gives a 28%, 24%, 21%, 16% IOPS boost and 32%, 17%, 21%, 16% latency improvement for sequential read/write and random read/write respectively. Asias He (2): block: Add blk_bio_map_sg() helper virtio-blk: Add bio-based IO path for virtio-blk block/blk-merge.c | 63 ++++++++++++++
2013 Aug 07
0
[PATCH 07/22] block: Convert bio_for_each_segment() to bvec_iter
...ue *q, struct bio *bio, return 0; } -static void +static inline void __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, - struct scatterlist *sglist, struct bio_vec **bvprv, + struct scatterlist *sglist, struct bio_vec *bvprv, struct scatterlist **sg, int *nsegs, int *cluster) { int nbytes = bvec->bv_len; - if (*bvprv && *cluster) { + if (*sg && *cluster) { if ((*sg)->length + nbytes > queue_max_segment_size(q)) goto new_segment; - if (!BIOVEC_PHYS_MERGEABLE(*bvprv, bvec)) + if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec...