search for: blkif_max_segments_per_request

Displaying 16 results from an estimated 54 matches for "blkif_max_segments_per_request".

2012 Feb 25
9
[xen-unstable bisection] complete test-amd64-i386-rhel6hvm-amd
...a59c1dcfe968 user: Justin T. Gibbs <justing@spectralogic.com> date: Thu Feb 23 10:03:07 2012 +0000 blkif.h: Define and document the request number/size/segments extension Note: As of __XEN_INTERFACE_VERSION__ 0x00040201 the definition of BLKIF_MAX_SEGMENTS_PER_REQUEST has changed. Drivers must be updated to, at minimum, use BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK, before being recompiled with a __XEN_INTERFACE_VERSION__ greater than or equal to this value. This extension first appeared in the FreeBSD Operating System....
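For a driver that has to build against both interface versions, the compatibility step this note asks for could look like the following sketch. Only the two BLKIF_* macros and the version constant come from blkif.h; DRIVER_MAX_SEGMENTS is a hypothetical driver-local name.

#if __XEN_INTERFACE_VERSION__ >= 0x00040201
/* New interface: BLKIF_MAX_SEGMENTS_PER_REQUEST no longer means what it
 * used to; size per-request arrays from the per-header-block bound. */
#define DRIVER_MAX_SEGMENTS	BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK
#else
/* Old interface: a single fixed per-request segment limit. */
#define DRIVER_MAX_SEGMENTS	BLKIF_MAX_SEGMENTS_PER_REQUEST
#endif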
2013 Feb 28
1
[PATCH RFC 09/12] xen-blkback: move pending handles list from blkbk to pending_req
...drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index ba27fc3..c43de8a 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -136,6 +136,7 @@ struct pending_req { struct list_head free_list; struct persistent_gnt *persistent_gnts[BLKIF_MAX_SEGMENTS_PER_REQUEST]; struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST]; + grant_handle_t grant_handles[BLKIF_MAX_SEGMENTS_PER_REQUEST]; }; #define BLKBACK_INVALID_HANDLE (~0) @@ -147,8 +148,6 @@ struct xen_blkbk { /* And its spinlock. */ spinlock_t pending_free_lock; wait_queue_head_t pending_free_wq;...
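Pieced together from the hunk above, the per-request bookkeeping after this patch looks roughly as follows (fields not shown in the diff are elided):

struct pending_req {
	/* ... other bookkeeping fields elided ... */
	struct list_head       free_list;
	struct persistent_gnt *persistent_gnts[BLKIF_MAX_SEGMENTS_PER_REQUEST];
	struct page           *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
	grant_handle_t         grant_handles[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};

Keeping the handles inside each pending_req is what lets the series drop the global pending-handles array from struct xen_blkbk, per the subject line.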
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
...the maximum sensible size. This introduces a maximum overhead of 11MB of mapped memory, per block device. In practice, we typically use no more than about 60 of these. If the guest exceeds the 256 limit, it is either buggy or malicious. We treat this in one of two ways: 1) If we have mapped < BLKIF_MAX_SEGMENTS_PER_REQUEST * BLKIF_MAX_PERS_REQUESTS_PER_DEV pages, we will persistently map the grefs. This can occur if previous requests have not used all BLKIF_MAX_SEGMENTS_PER_REQUEST segments. 2) Otherwise, we revert to non-persistent grants for all future grefs. In writing this patch, the question arises as to if th...
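As a sketch of the two-way policy described above (not the patch's actual code; pool_used is a hypothetical counter of grants already persistently mapped for the device):

static bool want_persistent_gnts(unsigned int pool_used, unsigned int nseg)
{
	unsigned int limit = BLKIF_MAX_SEGMENTS_PER_REQUEST *
			     BLKIF_MAX_PERS_REQUESTS_PER_DEV;

	/* Case 1: still room in the persistent pool, so map these
	 * grefs persistently for reuse by later requests. */
	if (pool_used + nseg <= limit)
		return true;

	/* Case 2: pool exhausted, revert to ordinary map/unmap
	 * grants for this and all future grefs. */
	return false;
}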
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...) wake_up(&blkbk->pending_free_wq); } - +/* + * Retrieve from the 'pending_reqs' a free pending_req structure to be used. + */ +static struct pending_req *alloc_req(void) +{ + struct pending_req *req = NULL; + unsigned long flags; + unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST; + + spin_lock_irqsave(&blkbk->pending_free_lock, flags); + if (!list_empty(&blkbk->pending_free)) { + req = list_entry(blkbk->pending_free.next, struct pending_req, + free_list); + list_del(&req->free_list); + } + spin_unlock...
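The excerpt is cut off at the unlock; completed for readability, the helper reads as below (the tail after spin_unlock is reconstructed from the upstream function and may differ in detail from the RFC):

static struct pending_req *alloc_req(void)
{
	struct pending_req *req = NULL;
	unsigned long flags;
	unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST;

	spin_lock_irqsave(&blkbk->pending_free_lock, flags);
	if (!list_empty(&blkbk->pending_free)) {
		req = list_entry(blkbk->pending_free.next, struct pending_req,
				 free_list);
		list_del(&req->free_list);
	}
	spin_unlock_irqrestore(&blkbk->pending_free_lock, flags);
	/* The RFC presumably uses max_seg further down, in the truncated part. */
	return req;
}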
2006 Aug 24
1
block ring interface: nr_segments = 0 results in BLKIF_RSP_ERROR
I am currently developing a blkfront.c for a custom OS over Xen 3.0.2-2. Typical I/O is working; however, I ran into an error while testing a corner case. On standard I/O, where { 1 <= nr_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST }, blkif_int()'s bret->status returns BLKIF_RSP_OKAY. Yet when { nr_segments == 0 }, blkif_int's bret->status is non-zero. (Yes, I realize this is an I/O call of zero length.) I checked the documentation, and section "8.2.2 Block ring interface" states the...
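The BLKIF_RSP_ERROR is consistent with the backend validating nr_segments before dispatch. In the Linux backend of that era the check looks roughly like this (a sketch; the Xen 3.0.2 code may differ in detail):

nseg = req->nr_segments;
if (unlikely(nseg == 0 || nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) {
	/* A zero-length request is rejected the same way as an
	 * oversized one: the response carries BLKIF_RSP_ERROR. */
	goto fail_response;
}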
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, this is V4 of the trim support for xen-blkfront/blkback. We have now moved BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all "trim" naming from the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, without which we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
2013 Feb 28
0
[PATCH RFC 05/12] xen-blkfront: remove frame list from blk_shadow
...--git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 5ba6b87..4d81fcc 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -74,7 +74,6 @@ struct grant { struct blk_shadow { struct blkif_request req; struct request *request; - unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST]; struct grant *grants_used[BLKIF_MAX_SEGMENTS_PER_REQUEST]; }; @@ -356,7 +355,6 @@ static int blkif_ioctl(struct block_device *bdev, fmode_t mode, static int blkif_queue_request(struct request *req) { struct blkfront_info *info = req->rq_disk->private_data; - unsigned long buffer_mf...
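After the removal, the shadow entry reduces to the following (reconstructed from the hunk above):

struct blk_shadow {
	struct blkif_request req;
	struct request *request;
	struct grant *grants_used[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};

The separate frame[] array becomes redundant once the grant structures themselves track the underlying frames.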
2012 Nov 02
2
[PATCH] xen-blk: persistent-grants fixes
...>vbd.handle); } - new_map = 1; + new_map = true; pages[i] = blkbk->pending_page(pending_req, i); addr = vaddr(pending_req, i); pages_to_gnt[segs_to_map] = @@ -584,7 +585,8 @@ static int xen_blkbk_map(struct blkif_request *req, */ bitmap_zero(pending_req->unmap_seg, BLKIF_MAX_SEGMENTS_PER_REQUEST); for (i = 0, j = 0; i < nseg; i++) { - if (!persistent_gnts[i] || !persistent_gnts[i]->handle) { + if (!persistent_gnts[i] || + persistent_gnts[i]->handle == BLKBACK_INVALID_HANDLE) { /* This is a newly mapped grant */ BUG_ON(j >= segs_to_map); if (unlikely(map[j]....
2012 Dec 03
1
xen-blkback: move free persistent grants code
...egs_to_unmap] = persistent_gnt->page; 233 rb_erase(&persistent_gnt->node, root); 234 kfree(persistent_gnt); ^^^^^^^^^^^^^^^^^^^^ kfree(); 235 num--; 236 237 if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST || 238 !rb_next(&persistent_gnt->node)) { ^^^^^^^^^^^^^^^^^^^^^ Dereferenced inside the call to rb_next(). 239 ret = gnttab_unmap_refs(unmap, NULL, pages, 240...
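The defect being flagged is a use-after-free: persistent_gnt is kfree()d on line 234 and then dereferenced through rb_next(&persistent_gnt->node) on line 238. A sketch of one possible fix is to take the successor before freeing the node:

struct rb_node *next = rb_next(&persistent_gnt->node);

rb_erase(&persistent_gnt->node, root);
kfree(persistent_gnt);
num--;

if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST || !next) {
	ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
	/* ... */
}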
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
...info->vma->vm_start, info->vma->vm_end - info->vma->vm_start, NULL); + up_write(&mm->mmap_sem); kfree(info->vma->vm_private_data); @@ -993,12 +997,13 @@ static void fast_flush_area(pending_req_ int tapidx) { struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST*2]; - unsigned int i, invcount = 0; + unsigned int i, invcount = 0, locked = 0; struct grant_handle_pair *khandle; uint64_t ptep; int ret, mmap_idx; unsigned long kvaddr, uvaddr; tap_blkif_t *info; + struct mm_struct *mm; info = tapfds[tapidx]; @@ -1008,13 +1013,15 @@ static void f...
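The pattern the patch enforces (sketched from the hunks above) is that any modification of info->vma happens with the owning mm's mmap_sem held for write:

struct mm_struct *mm = info->vma->vm_mm;

down_write(&mm->mmap_sem);
zap_page_range(info->vma, info->vma->vm_start,
	       info->vma->vm_end - info->vma->vm_start, NULL);
up_write(&mm->mmap_sem);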
2012 Mar 05
11
[PATCH 0001/001] xen: multi page ring support for block devices
...r'. They @@ -87,14 +92,15 @@ struct blkfront_info int vdevice; blkif_vdev_t handle; enum blkif_state connected; - int ring_ref; + int ring_ref[XENBUS_MAX_RING_PAGES]; + int ring_order; struct blkif_front_ring ring; struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST]; unsigned int evtchn, irq; struct request_queue *rq; struct work_struct work; struct gnttab_free_callback callback; - struct blk_shadow shadow[BLK_RING_SIZE]; + struct blk_shadow shadow[BLK_MAX_RING_SIZE]; unsigned long shadow_free; unsig...
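The shadow array grows with the ring order. A sketch of the sizing implied by the hunk (macro names from the patch; the definitions here are assumptions based on the Xen ring helpers in xen/interface/io/ring.h):

/* Classic single-page ring vs. the largest multi-page ring. */
#define BLK_RING_SIZE		__CONST_RING_SIZE(blkif, PAGE_SIZE)
#define BLK_MAX_RING_SIZE	\
	__CONST_RING_SIZE(blkif, PAGE_SIZE * XENBUS_MAX_RING_PAGES)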
2012 Aug 16
0
[RFC v1 5/5] VBD: enlarge max segment per request in blkfront
...lkback/parameters/ */ static unsigned int log_stats; module_param(log_stats, int, 0644); @@ -125,7 +129,7 @@ static struct pending_req *alloc_req(struct xen_blkif *blkif) struct xen_blkbk *blkbk = blkif->blkbk; struct pending_req *req = NULL; unsigned long flags; - unsigned int max_seg = BLKIF_MAX_SEGMENTS_PER_REQUEST; + unsigned int max_seg = blkif->ops->max_seg; spin_lock_irqsave(&blkbk->pending_free_lock, flags); if (!list_empty(&blkbk->pending_free)) { @@ -315,8 +319,10 @@ static void xen_blkbk_unmap(struct pending_req *req) for (i = 0; i < req->nr_pages; i++) { handle...
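The change generalizes the fixed constant into a per-backend callback table. A sketch of the indirection (the max_seg field and blkif->ops come from the diff; the rest of the structure is assumed):

struct xen_blkif_ops {
	/* Per-protocol upper bound on segments per request, replacing the
	 * compile-time BLKIF_MAX_SEGMENTS_PER_REQUEST. */
	unsigned int max_seg;
	/* ... other per-protocol hooks elided ... */
};

With this in place, alloc_req() reads the limit as blkif->ops->max_seg, so a front-end that negotiates larger requests just supplies a different ops table.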
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass request descriptors to tapdisk, as well as responses from tapdisk to the front-end. Requests from this ring end up in tapdisk's standard request queue. When the tapback daemon detects that the front-end tries to connect to the back-end, it spawns a tapdisk and tells it to connect to the shared ring. The shared
2011 Jun 21
13
VM disk I/O limit patch
...return ret_str; +} + static int dispatch_rw_block_io(blkif_t *blkif, blkif_request_t *req, - pending_req_t *pending_req) + pending_req_t *pending_req, + int *done_nr_sects) { extern void ll_rw_block(int rw, int nr, struct buffer_head * bhs[]); struct gnttab_map_grant_ref map[BLKIF_MAX_SEGMENTS_PER_REQUEST]; @@ -426,6 +495,9 @@ static int dispatch_rw_block_io(blkif_t struct bio *bio = NULL; int ret, i; int operation; + struct timeval cur_time; + + *done_nr_sects = 0; switch (req->operation) { case BLKIF_OP_READ: @@ -582,6 +654,12 @@ static int dispatch_rw_block_io(blkif_t else if (op...
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend to process the REQ_FUA request as well. Prior to these patches it only did REQ_FLUSH. There is also a bug-fix for the logic of how barriers/flushes were handled. The patches are based on a branch which also has 'feature-discard' patches, so they won't apply natively on top of 3.1-rc5. Please review and
2012 Feb 24
0
[xen-unstable test] 12043: regressions - FAIL
...eset: 24875:a59c1dcfe968 user: Justin T. Gibbs <justing@spectralogic.com> date: Thu Feb 23 10:03:07 2012 +0000 blkif.h: Define and document the request number/size/segments extension Note: As of __XEN_INTERFACE_VERSION__ 0x00040201 the definition of BLKIF_MAX_SEGMENTS_PER_REQUEST has changed. Drivers must be updated to, at minimum, use BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK, before being recompiled with a __XEN_INTERFACE_VERSION__ greater than or equal to this value. This extension first appeared in the FreeBSD Operating System. S...