similar to: blkback driver I/O request size in Xen 3.3.0

Displaying 18 results from an estimated 700 matches similar to: "blkback driver I/O request size in Xen 3.3.0"

2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
refactoring blkback Signed-off-by: Ronghui Duan <ronghui.duan@intel.com> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 73f196c..b4767f5 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -64,6 +64,11 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
As usual, written and (build-)tested on 2.6.27.4 and made to apply to the 2.6.18 tree without further testing. Signed-off-by: Jan Beulich <jbeulich@novell.com> Index: head-2008-11-04/drivers/xen/blktap/blktap.c =================================================================== --- head-2008-11-04.orig/drivers/xen/blktap/blktap.c 2008-10-01 16:35:04.000000000 +0200 +++
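For reference, a minimal sketch of the locking rule the patch enforces (the helper name is made up for illustration; mmap_sem is the mm lock of that kernel era, later renamed mmap_lock):

    #include <linux/mm.h>

    /* Any modification of a vma -- here its vm_flags -- must happen
     * with the owning mm's mmap_sem held for writing. */
    static void tap_modify_vma(struct vm_area_struct *vma, unsigned long flags)
    {
            struct mm_struct *mm = vma->vm_mm;

            down_write(&mm->mmap_sem);
            vma->vm_flags |= flags;
            up_write(&mm->mmap_sem);
    }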
2013 Jun 21
5
[PATCH 3/4] xen-blkback: check the number of iovecs before allocating a bio
With the introduction of indirect segments we can receive requests with a number of segments bigger than the maximum number of allowed iovecs in a bio, so make sure that blkback doesn't try to allocate a bio with more iovecs than BIO_MAX_PAGES Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> ---
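The shape of the fix is simple; a hedged sketch, not the exact patch (bio_alloc and BIO_MAX_PAGES as in kernels of that era):

    #include <linux/bio.h>
    #include <linux/kernel.h>

    /* A single bio can hold at most BIO_MAX_PAGES iovecs, so clamp the
     * allocation and let the remaining segments go into further bios. */
    static struct bio *alloc_bio_for_segments(int nseg_left)
    {
            int nr_iovecs = min_t(int, nseg_left, BIO_MAX_PAGES);

            return bio_alloc(GFP_KERNEL, nr_iovecs);
    }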
2007 Oct 29
1
first_sect & last_sect in blkif_request_segment
Am I right in saying that first_sect & last_sect in blkif_request_segment are the relative sector numbers in the transfer? So if I wanted to transfer 9 sectors starting at 100, the resultant contents of the request would look like: req->sector_number = 100 req->seg[0].first_sect = 0 req->seg[0].last_sect = 3 req->seg[1].first_sect = 4 req->seg[1].last_sect = 7
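For reference, first_sect and last_sect index 512-byte sectors within each granted 4 KiB page, not positions in the overall transfer. Assuming a page-aligned buffer, a 9-sector read starting at sector 100 would look roughly like this (classic public blkif.h layout; grant refs are placeholders):

    #include <xen/interface/io/blkif.h>

    /* Sketch only: id/handle setup omitted. */
    static void fill_example_request(struct blkif_request *req,
                                     grant_ref_t gref0, grant_ref_t gref1)
    {
            req->operation = BLKIF_OP_READ;
            req->sector_number = 100;
            req->nr_segments = 2;
            req->seg[0].gref = gref0;       /* page with sectors 100..107 */
            req->seg[0].first_sect = 0;
            req->seg[0].last_sect = 7;      /* all 8 sectors of the page */
            req->seg[1].gref = gref1;       /* page with sector 108 */
            req->seg[1].first_sect = 0;
            req->seg[1].last_sect = 0;      /* 1 sector, 9 in total */
    }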
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
This patch implements persistent grants for the xen-blk{front,back} mechanism. The effect of this change is to reduce the number of unmap operations performed, since they cause a (costly) TLB shootdown. This allows the I/O performance to scale better when a large number of VMs are performing I/O. Previously, the blkfront driver was supplied a bvec[] from the request queue. This was granted to
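A conceptual sketch of the caching this enables (structure and helper are illustrative, close in spirit to the patch but not copied from it):

    #include <linux/rbtree.h>
    #include <xen/grant_table.h>

    struct persistent_gnt {
            struct page *page;
            grant_ref_t gnt;
            grant_handle_t handle;
            struct rb_node node;
    };

    /* Reuse a cached mapping if this grant ref was seen before; only a
     * miss pays for a map (and, later, an unmap with its TLB shootdown). */
    static struct persistent_gnt *get_persistent_gnt(struct rb_root *root,
                                                     grant_ref_t gref)
    {
            struct rb_node *n = root->rb_node;

            while (n) {
                    struct persistent_gnt *p =
                            rb_entry(n, struct persistent_gnt, node);

                    if (gref < p->gnt)
                            n = n->rb_left;
                    else if (gref > p->gnt)
                            n = n->rb_right;
                    else
                            return p;
            }
            return NULL;    /* caller maps the grant and inserts it */
    }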
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, this is V4 of the trim support for xen-blkfront/blkback. We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all "trim" terminology from the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, without which we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
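For orientation, the discard capability is advertised through xenstore; a sketch of a frontend probe (xenbus_scanf is the standard helper; "feature-discard" is the key this series introduces):

    #include <xen/xenbus.h>

    static int backend_supports_discard(struct xenbus_device *dev)
    {
            int discard = 0;

            /* An absent key leaves discard at 0, i.e. unsupported. */
            xenbus_scanf(XBT_NIL, dev->otherend, "feature-discard",
                         "%d", &discard);
            return discard;
    }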
2005 Nov 06
2
Bug in use of grant tables in blkback.c error path?
In dispatch_rw_block_io after a call to HYPERVISOR_grant_table_op, there is the following code which calls fast_flush_area and breaks out of the loop early if one of the handles returned from HYPERVISOR_grant_table_op is negative: for (i = 0; i < nseg; i++) { if (unlikely(map[i].handle < 0)) { DPRINTK("invalid buffer -- could not remap it\n"); fast_flush_area(pending_idx,
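A sketch of the safer loop shape under discussion, reusing the names from the quoted code (pending_handle() is assumed to be the per-segment handle store of that era's blkback.c): scan every returned handle, remember any failure, and only then tear down, so no successfully mapped segment is left unaccounted for.

    int i, ret = 0;

    for (i = 0; i < nseg; i++) {
            if (unlikely(map[i].handle < 0)) {
                    DPRINTK("invalid buffer -- could not remap it\n");
                    ret = -EINVAL;          /* remember the failure ... */
                    continue;               /* ... but keep scanning */
            }
            pending_handle(pending_idx, i) = map[i].handle;
    }
    if (ret)
            fast_flush_area(pending_idx, nseg);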
2011 Jun 21
13
VM disk I/O limit patch
Hi all, I have added a blkback QoS patch. With it you can configure (dynamically or statically) a different I/O speed for each VM disk. ---------------------------------------------------------------------------- diff -urNp blkback/blkback.c blkback-qos/blkback.c --- blkback/blkback.c 2011-06-22 07:54:19.000000000 +0800 +++ blkback-qos/blkback.c 2011-06-22 07:53:18.000000000 +0800 @@ -44,6 +44,11 @@
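A conceptual sketch of per-disk throttling of this kind (names and the one-second refill are illustrative, not the actual patch): a token bucket consulted before each request is issued.

    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct vbd_qos {
            unsigned long tokens;           /* requests left this period */
            unsigned long rate;             /* configured requests/second */
            unsigned long last_refill;      /* jiffies of last refill */
    };

    static bool vbd_qos_may_issue(struct vbd_qos *q)
    {
            if (time_after(jiffies, q->last_refill + HZ)) {
                    q->tokens = q->rate;    /* refill once a second */
                    q->last_refill = jiffies;
            }
            if (!q->tokens)
                    return false;           /* defer; poll the ring later */
            q->tokens--;
            return true;
    }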
2012 May 07
14
Little help with blk ring
Hello List, I have a small problem with the ring: when transferring blocks, the id in the response is different from the one in the request. This is the boot-up read, count 0. The guest requests block 0, which has to be loaded at 7c00. I go ahead and create a REQUEST with this data: ring_req = RING_GET_REQUEST(priv,priv->req_prod_pvt); ring_req->id = 9; ring_req->nr_segments=1; ring_req->operation
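The backend copies req->id verbatim into the matching response, so a different id usually means the response consumer index (rsp_cons) or req_prod_pvt is not being advanced correctly. A sketch of the full submit path with the macros from xen/interface/io/ring.h, assuming as in the quoted code that priv is the front ring; notify_backend() is a stand-in for the guest OS's event-channel send:

    int notify;

    ring_req = RING_GET_REQUEST(priv, priv->req_prod_pvt);
    ring_req->id = 9;                   /* echoed back in rsp->id */
    ring_req->operation = BLKIF_OP_READ;
    ring_req->sector_number = 0;        /* block 0 */
    ring_req->nr_segments = 1;
    ring_req->seg[0].gref = gref;       /* grant covering the 7c00 buffer */
    ring_req->seg[0].first_sect = 0;
    ring_req->seg[0].last_sect = 0;     /* one 512-byte sector */
    priv->req_prod_pvt++;

    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(priv, notify);
    if (notify)
            notify_backend(evtchn);     /* hypothetical event send */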
2012 Apr 10
7
[PATCH v3 1/2] xen: enter/exit lazy_mmu_mode around m2p_override calls
This patch is a significant performance improvement for the m2p_override: about 6% using the gntdev device. Each m2p_add/remove_override call issues a MULTI_grant_table_op and a __flush_tlb_single if kmap_op != NULL. Batching all the calls together is a great performance benefit because it means issuing one hypercall in total rather than two hypercalls per page. If paravirt_lazy_mode is set
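A sketch of the batching pattern described above (the m2p_add_override signature follows 3.x kernels; this is an illustration, not the patch itself): every per-page operation inside the lazy section is queued and issued as one multicall on exit.

    arch_enter_lazy_mmu_mode();
    for (i = 0; i < count; i++)
            m2p_add_override(mfn[i], pages[i],
                             kmap_ops ? &kmap_ops[i] : NULL);
    arch_leave_lazy_mmu_mode();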
2012 Aug 16
0
[RFC v1 5/5] VBD: enlarge max segment per request in blkfront
add segring support in blkback Signed-off-by: Ronghui Duan <ronghui.duan@intel.com> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 45eda98..0bbc226 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -60,6 +60,10 @@ static int xen_blkif_reqs = 64; module_param_named(reqs, xen_blkif_reqs, int, 0);
2013 Mar 28
1
Xen Remus DRBD dual primary frozen
Dear all, I sent this problem earlier, but perhaps without enough detail, so here I try to describe it more fully. I hope somebody can help me pinpoint the problem. First of all, I used Ubuntu 12.04 x64 for both domain0 and domainU, modified to run under the Xen hypervisor and work with Remus. I followed and configured Remus with these notes
2010 May 19
0
blkback.131.xvd or blkback.145.xvda?
Hi, I have 4 dom0s with Debian Lenny running Xen. When I run ps axf|grep xvd I see it's different on different dom0s. dom0-A 15677 ? S< 0:00 \_ [blkback.139.xvd] 15678 ? S< 0:38 \_ [blkback.139.xvd] 17015 ? S< 0:00 \_ [blkback.140.xvd] 17016 ? S< 2:34 \_ [blkback.140.xvd] 21309 ? S< 0:00 \_ [blkback.142.xvd] 21310 ?
2006 Aug 24
1
block ring interface: nr_segments = 0 results in BLKIF_RSP_ERROR
I am currently developing a blkfront.c for a custom OS over Xen 3.0.2-2. Typical I/O is working; however, I ran into an error while testing a corner case. On standard I/O, where { 1 <= nr_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST } blkif_int()'s bret->status returns BLKIF_RSP_OKAY. Yet when { nr_segments == 0 } blkif_int()'s bret->status is non-zero. (Yes
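That is expected: the backend validates the segment count before doing any work. A sketch of the kind of check in dispatch_rw_block_io() (the exact code varies by Xen version):

    nseg = req->nr_segments;
    if (unlikely(nseg == 0) ||
        unlikely(nseg > BLKIF_MAX_SEGMENTS_PER_REQUEST)) {
            DPRINTK("Bad number of segments in request (%d)\n", nseg);
            make_response(blkif, req->id, req->operation, BLKIF_RSP_ERROR);
            return;
    }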
2013 Feb 28
1
[PATCH RFC 09/12] xen-blkback: move pending handles list from blkbk to pending_req
Moving grant ref handles from blkbk to pending_req will allow us to get rid of the shared blkbk structure. Signed-off-by: Roger Pau Monné <roger.pau@citrix.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: xen-devel@lists.xen.org --- drivers/block/xen-blkback/blkback.c | 16 ++++------------ 1 files changed, 4 insertions(+), 12 deletions(-) diff --git
2011 May 28
1
ionice and blkback
Hi Everyone, When you want to use ionice to limit the disk I/O of a DomU, do you have to run ionice on every blkback process? Incidentally, what is the format of the blkback process name? I see the following in ps aux: blkback.xx.xvda blkback.xx.xvda1 blkback.xx.xvd where xx appears to be the domain ID. I'm curious as to what the last few letters mean. Thanks
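The trailing letters are the guest-side virtual device name (xvda, xvda1, ...). The thread name is built roughly like this in blkback (a sketch; helper names follow the upstream driver and vary by version), and since a task name is capped at TASK_COMM_LEN (16 bytes including the terminating NUL), a long domain ID can leave only "xvd" of "xvda" visible, which is why some names in ps look cut off:

    char name[TASK_COMM_LEN];

    snprintf(name, sizeof(name), "blkback.%d.%s", blkif->domid, devname);
    blkif->xenblkd = kthread_run(blkif_schedule, blkif, "%s", name);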
2012 Dec 03
1
xen-blkback: move free persistent grants code
Hello Roger Pau Monne, The patch 4d4f270f1880: "xen-blkback: move free persistent grants code" from Nov 16, 2012, leads to the following warning: drivers/block/xen-blkback/blkback.c:238 free_persistent_gnts() warn: 'persistent_gnt' was already freed. drivers/block/xen-blkback/blkback.c 232 pages[segs_to_unmap] = persistent_gnt->page; 233
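The usual shape of a fix for this class of warning, purely illustrative and not necessarily the follow-up commit: take everything you need out of the node before freeing it, and never touch the pointer afterwards.

    pages[segs_to_unmap] = persistent_gnt->page;    /* use first ... */
    rb_erase(&persistent_gnt->node, root);
    kfree(persistent_gnt);                          /* ... free last */
    persistent_gnt = NULL;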
2011 Sep 09
7
[PATCH] xen-blk[front|back] FUA additions.
I am proposing these two patches for 3.2. They allow the backend to process the REQ_FUA request as well; prior to these patches it only did REQ_FLUSH. There is also a bug-fix for the logic of how barriers/flushes were handled. The patches are based on a branch which also has 'feature-discard' patches, so they won't apply natively on top of 3.1-rc5. Please review and