similar to: first_sect & last_sect in blkif_request_segment

Displaying 20 results from an estimated 3000 matches similar to: "first_sect & last_sect in blkif_request_segment"

2012 May 07
14
Little help with blk ring
Hello List, I have a small problem with the ring: when transferring blocks, the id on the response is different from the one on the request. This is the boot-up read, count 0. The guest requests block 0, which has to be placed at 7c00. I go ahead and create a REQUEST with this data: ring_req = RING_GET_REQUEST(priv,priv->req_prod_pvt); ring_req->id = 9; ring_req->nr_segments=1; ring_req->operation
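For readers hitting the same symptom: the id field is an opaque cookie that the backend must echo back unchanged, so a mismatched id in the response almost always means the response-consumer bookkeeping is reading the wrong slot. A minimal sketch of the round trip, assuming the standard macros from Xen's public io/ring.h; struct blkfront_priv and its fields are hypothetical stand-ins for the poster's private state:

#include <linux/printk.h>
#include <xen/interface/io/blkif.h>
#include <xen/interface/io/ring.h>

/* Issue a one-segment read of sector 0. "priv" is hypothetical,
 * modelled on a typical blkfront-style driver. */
static void issue_read(struct blkfront_priv *priv, grant_ref_t gref)
{
        struct blkif_request *req;
        int notify;

        req = RING_GET_REQUEST(&priv->ring, priv->ring.req_prod_pvt);
        req->operation         = BLKIF_OP_READ;
        req->nr_segments       = 1;
        req->handle            = priv->handle;
        req->id                = 9;    /* opaque; echoed in the response */
        req->sector_number     = 0;    /* first 512-byte sector */
        req->seg[0].gref       = gref;
        req->seg[0].first_sect = 0;
        req->seg[0].last_sect  = 7;    /* 8 sectors = one 4K page */

        priv->ring.req_prod_pvt++;
        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&priv->ring, notify);
        if (notify)
                notify_remote_via_irq(priv->irq);
}

/* On completion, consume the response from rsp_cons, not req_prod_pvt;
 * mixing the two indices is a common cause of mismatched ids. */
static void check_response(struct blkfront_priv *priv)
{
        struct blkif_response *rsp =
                RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_cons);

        if (rsp->id != 9)      /* backend must echo the id verbatim */
                pr_warn("unexpected id %llu\n", (unsigned long long)rsp->id);
        priv->ring.rsp_cons++;
}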
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, This is V4 of the trim support for xen-blkfront/blkback. We have now moved BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all "trim" references in the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, otherwise we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
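For context, a discard travels over the same ring as reads and writes, just with a different operation code and no data segments. A hedged sketch of the front-end side; the blkif_request_discard layout and the priv fields are assumptions modelled on blkfront of that era, not quotes from the patch:

/* Queue a discard for the range covered by block-layer request "rq". */
static void queue_discard(struct blkfront_priv *priv, struct request *rq,
                          uint64_t id)
{
        /* Assumed layout of the discard variant of a ring slot. */
        struct blkif_request_discard *req = (void *)
                RING_GET_REQUEST(&priv->ring, priv->ring.req_prod_pvt);

        req->operation     = BLKIF_OP_DISCARD;
        req->handle        = priv->handle;
        req->id            = id;
        req->sector_number = blk_rq_pos(rq);     /* start of the range */
        req->nr_sectors    = blk_rq_sectors(rq); /* length in sectors */

        priv->ring.req_prod_pvt++;
}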
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
refactoring blkback Signed-off-by: Ronghui Duan <ronghui.duan@intel.com> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 73f196c..b4767f5 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -64,6 +64,11 @@ MODULE_PARM_DESC(reqs, "Number of blkback requests to
2009 Apr 15
0
blkback driver I/O request size in Xen 3.3.0
Hi all, In the vbd blkback driver (linux/drivers/xen/blkback/blkback.c), when the function dispatch_rw_block_io() tries to do the real I/O job, it does a sanity check on the I/O request sent from DomU in the following code fragment: ... for (i = 0; i < nseg; i++) { uint32_t flags; seg[i].nsec = req->seg[i].last_sect -
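The fragment cuts off exactly at the computation the thread title asks about: each segment's length in sectors is derived from its first_sect/last_sect bounds. A completed sketch of the check, paraphrased from blkback of that era rather than quoted verbatim:

/* Inside dispatch_rw_block_io(): validate each segment before mapping.
 * first_sect/last_sect are 512-byte sector offsets within one page. */
for (i = 0; i < nseg; i++) {
        seg[i].nsec = req->seg[i].last_sect -
                      req->seg[i].first_sect + 1;

        /* A page holds PAGE_SIZE >> 9 sectors; reject segments that
         * run past the page or have reversed bounds. */
        if ((req->seg[i].last_sect >= (PAGE_SIZE >> 9)) ||
            (req->seg[i].first_sect > req->seg[i].last_sect))
                goto fail_response;

        preq.nr_sects += seg[i].nsec;
}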
2012 Feb 25
9
[xen-unstable bisection] complete test-amd64-i386-rhel6hvm-amd
branch xen-unstable xen branch xen-unstable job test-amd64-i386-rhel6hvm-amd test redhat-install Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git Tree: qemu git://xenbits.xen.org/staging/qemu-xen-unstable.git Tree: qemuu git://xenbits.xen.org/staging/qemu-upstream-unstable.git Tree: xen http://xenbits.xen.org/staging/xen-unstable.hg *** Found and reproduced problem
2012 Aug 16
0
[RFC v1 5/5] VBD: enlarge max segment per request in blkfront
add segring support in blkback Signed-off-by: Ronghui Duan <ronghui.duan@intel.com> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 45eda98..0bbc226 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -60,6 +60,10 @@ static int xen_blkif_reqs = 64; module_param_named(reqs, xen_blkif_reqs, int, 0);
2013 May 13
22
[PATCH] xen-blk(front|back): Handle large physical sector disks
I accidentally realized today that any domUs using the paravirt disk driver potentially suffer from poor performance when they get handed a physical volume and partitioning is done inside the guest. The physical volume passed in has to be one that has the compat 512-byte logical sector size but hints at its real sector size (e.g. 4096) as the physical sector size. In dom0 handling is correct and
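One plausible shape of such a fix, based on how blkback and blkfront already exchange geometry over xenstore; the key name and the surrounding variables are assumptions, not quotes from the actual patch:

/* Backend: advertise the bdev's physical block size as an extra
 * xenstore key (the name "physical-sector-size" is an assumption). */
err = xenbus_printf(xbt, dev->nodename, "physical-sector-size", "%u",
                    bdev_physical_block_size(be->blkif->vbd.bdev));

/* Frontend: fall back to the logical size for older backends, then
 * hand the hint to the block layer so in-guest partitioners align. */
if (xenbus_scanf(XBT_NIL, info->xbdev->otherend,
                 "physical-sector-size", "%u", &physical_sector_size) != 1)
        physical_sector_size = sector_size;
blk_queue_physical_block_size(info->rq, physical_sector_size);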
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass request descriptors to tapdisk, as well as responses from tapdisk to the front-end. Requests from this ring end up in tapdisk's standard request queue. When the tapback daemon detects that the front-end tries to connect to the back-end, it spawns a tapdisk and tells it to connect to the shared ring. The shared
2008 Mar 10
12
[RFC][PATCH] Use ioemu block drivers through blktap
When I submitted the qcow2 patch for blktap, suggestions came up that the qemu block drivers should also be used for blktap to eliminate the current code duplication in ioemu and blktap. The attached patch adds support for a tap:ioemu pseudo driver. Devices using this driver won't use tapdisk (containing the code duplication) any more, but will connect to the qemu-dm of the domain. In
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
This patch implements persistent grants for the xen-blk{front,back} mechanism. The effect of this change is to reduce the number of unmap operations performed, since they cause a (costly) TLB shootdown. This allows the I/O performance to scale better when a large number of VMs are performing I/O. Previously, the blkfront driver was supplied a bvec[] from the request queue. This was granted to
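The core idea: keep pages granted across requests instead of granting and unmapping per segment, so the backend's unmap (and its TLB shootdown) disappears from the fast path. A conceptual sketch with hypothetical names, not the patch's actual data structures:

/* Hypothetical per-device cache of persistently granted pages. */
struct persistent_gnt {
        struct page      *page;
        grant_ref_t       gref;
        struct list_head  node;
};

/* Reuse a still-granted page when one is free; grant a fresh page
 * only when the cache is empty. Nothing is ungranted on this path. */
static struct persistent_gnt *get_grant(struct blkfront_priv *priv)
{
        struct persistent_gnt *gnt;

        if (!list_empty(&priv->free_gnts)) {
                gnt = list_first_entry(&priv->free_gnts,
                                       struct persistent_gnt, node);
                list_del(&gnt->node);
                return gnt;              /* granted on a previous request */
        }
        return grant_new_page(priv);     /* hypothetical slow path */
}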
2014 May 22
2
Bug#748953: blktap-dkms: Struct bio was changed in 3.14 breaking build
Package: blktap-dkms Version: 2.0.93-0.2 Severity: serious Justification: fails to build from source (but built successfully in the past) The build fails on the 3.14 kernel with the following error: /var/lib/dkms/blktap/2.0.93/build/ring.c: In function 'blktap_ring_make_tr_request': /var/lib/dkms/blktap/2.0.93/build/ring.c:314:32: error: 'struct bio' has no member named 'bi_sector'
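The break comes from Linux 3.14's immutable-biovec rework, which moved bi_sector into the new embedded iterator (bio->bi_iter.bi_sector). The usual out-of-tree fix is a version-gated accessor; a sketch of the common dkms compat pattern, not the packaged Debian patch:

#include <linux/bio.h>
#include <linux/version.h>

/* Since 3.14 the current sector lives in bio->bi_iter.bi_sector;
 * older kernels expose it directly as bio->bi_sector. */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 14, 0)
#define BLKTAP_BIO_SECTOR(bio)  ((bio)->bi_iter.bi_sector)
#else
#define BLKTAP_BIO_SECTOR(bio)  ((bio)->bi_sector)
#endif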
2011 Aug 31
4
[PATCH 0 of 1] Patch to alter BLKIF_OP_TRIM to BLKIF_OP_DISCARD (v1).
Hey guys, Pasi mentioned on Li's (and Owen's) patches which provide TRIM/UNMAP support to the Linux backend/frontend that: "Isn't the generic name for this functionality "discard" in Linux? and "trim" being the ATA specific discard-implementation, and "scsi unmap" the SAS/SCSI specific discard-implementation? Just
2013 Jul 15
21
[PATCH 00 of 21 RESEND] blktap3/drivers: Introduce tapdisk server.
This patch series copies the core of the tapdisk process from blktap2, with updates coming from blktap2.5. Signed-off-by: Thanos Makatos <thanos.makatos@citrix.com>
2008 Dec 26
17
Multiple IRQs in HVM for Windows
I really need to have the ability to tie event channel ports to interrupts for my GPLPV drivers under Windows. Is anyone working on anything like this? Does MSI allow more than one interrupt per PCI device? Thanks, James
2009 Nov 02
4
vps file lost after server crash
Hello all, Today one of our servers crashed. After I rebooted the server, one vps could not boot up and needed fsck; after I ran fsck, a lot of files were lost and some file content was changed. I am not sure how this could happen. Do you have any advice on how to avoid this or recover the data? Thanks.
2008 Aug 18
5
HVM Windows - PCI IRQ firing on both CPUs
I'm just doing some testing on the GPLPV drivers with different ways of handling interrupts, and I'm trying a scheme where each xen device (e.g. vbd/vif) driver attaches to the same IRQ as the PCI driver, and each handles it in sequence. In testing, though, I noticed the following when logging what each ISR is doing: 60.32381439 - evtchn event on port 5 60.32384109 - port 5
2007 Nov 25
4
behaviour of 'xm block-attach' changed with 3.1.2?
'xm block-attach' doesn't let me assign the same backend device anymore, even if I use 'r!' or 'w!'. I think this has changed since 3.1.1 (or maybe since 3.1.0). Is this the way it is supposed to work? Thanks, James
2009 Feb 10
7
hang on restore in 3.3.1
I am having problems with save/restore under 3.3.1 in the GPLPV drivers. I call hvm_shutdown(xpdd, SHUTDOWN_suspend), but as soon as I lower IRQL (enabling interrupts), qemu goes to 100% CPU and the DomU load goes right up too. Xentrace is showing a whole lot of this going on: CPU0 200130258143212 (+ 770) hypercall [ rip = 0x000000008020632a, eax = 0xffffffff ] CPU0 200130258151107 (+
2010 Jan 30
20
"Iomem mapping not permitted" during windows crash dump under GPLPV
I've recently noticed that my Windows crash dumps fail at around 40-50% under GPLPV. 'xm dmesg' shows the following: (XEN) grant_table.c:350:d0 Iomem mapping not permitted ffffffffffffffff (domain 865) At first I thought that the cause was just a bug in my grant ref code, but it just occurred to me that this could be happening when Windows tries to write out the
2010 Jun 08
32
Problems with GPLPV network latency
Hi, the DomU is Win2008 R2 64-bit. When I install the GPLPV drivers, the network latency goes from 15ms to random numbers up to 1200ms and eventually dies. If you run a ping from the DomU to another host, the network stays alive but the high latency is still there. Furthermore, if I try to uninstall the network driver, I am unable to use the old one (Realtek) as it cannot detect the device.