search for: first_sect

Displaying 20 results from an estimated 42 matches for "first_sect".

2007 Oct 29
1
first_sect & last_sect in blkif_request_segment
Am I right in saying that first_sect & last_sect in blkif_request_segment are the relative sector numbers in the transfer? So if I wanted to transfer 9 sectors starting at 100, the resultant contents of the request would look like: req->sector_number = 100 req->seg[0].first_sect = 0 req->seg[0].last_sect = 3 req->seg...
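For reference, a minimal sketch of how such a request could be filled in, assuming the classic struct blkif_request layout from the public blkif.h header (newer Linux headers nest these fields under u.rw), 512-byte sectors and 4 KiB granted pages, so that first_sect/last_sect are sector offsets within each granted page rather than within the transfer; gref0/gref1 and the function name are illustrative, not from the original mail:

    /* Sketch: a 9-sector read starting at disk sector 100, split across
     * two granted 4 KiB pages (8 x 512-byte sectors per page). */
    static void fill_nine_sector_read(struct blkif_request *req,
                                      grant_ref_t gref0, grant_ref_t gref1)
    {
            req->operation     = BLKIF_OP_READ;
            req->sector_number = 100;      /* absolute start sector    */
            req->nr_segments   = 2;

            req->seg[0].gref       = gref0;
            req->seg[0].first_sect = 0;    /* sectors 100..107 occupy  */
            req->seg[0].last_sect  = 7;    /* offsets 0..7 of page 0   */

            req->seg[1].gref       = gref1;
            req->seg[1].first_sect = 0;    /* remaining sector 108 at  */
            req->seg[1].last_sect  = 0;    /* offset 0 of page 1       */
    }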
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...w.seg[i].gref, + seg_req[i].gref, pending_req->blkif->domid); } @@ -387,14 +416,15 @@ static int xen_blkbk_map(struct blkif_request *req, continue; seg[i].buf = map[i].dev_bus_addr | - (req->u.rw.seg[i].first_sect << 9); + (seg_req[i].first_sect << 9); } return ret; } -static int dispatch_discard_io(struct xen_blkif *blkif, - struct blkif_request *req) +static int dispatch_discard_io(struct xen_blkif *blkif) { + struct blkif_request *blkif_req = (struct...
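The expression touched by this hunk is how blkback derives a segment's buffer address: the grant map returns a page-aligned dev_bus_addr, and shifting first_sect left by 9 converts the 512-byte sector offset into a byte offset within that page. A one-line restatement, using the pre-patch field names as a sketch:

    /* page-aligned bus address of the mapped grant, OR'd with the byte
     * offset of the segment's first 512-byte sector within that page  */
    seg[i].buf = map[i].dev_bus_addr | (req->u.rw.seg[i].first_sect << 9);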
2009 Apr 15
0
blkback driver I/O request size in Xen 3.3.0
...will do a sanity check on I/O request sent from DomU in the following code fragment: ... 430 for (i = 0; i < nseg; i++) { 431 uint32_t flags; 432 433 seg[i].nsec = req->seg[i].last_sect - 434 req->seg[i].first_sect + 1; 435 436 if ((req->seg[i].last_sect >= (PAGE_SIZE >> 9)) || 437 (req->seg[i].last_sect < req->seg[i].first_sect)) 438 goto fail_response; ... L436 check whether the number of sectors in a segment...
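The check quoted above, restated as a standalone helper for readability; a sketch assuming 4 KiB pages, so PAGE_SIZE >> 9 equals 8 sectors of 512 bytes, and the helper name is illustrative rather than taken from the source:

    /* Sketch of blkback's per-segment sanity check: a segment must end
     * inside its granted page and must not run backwards. */
    static bool segment_is_valid(uint8_t first_sect, uint8_t last_sect)
    {
            if (last_sect >= (PAGE_SIZE >> 9))  /* beyond sector 7 of the page */
                    return false;
            if (last_sect < first_sect)         /* reversed or empty range     */
                    return false;
            return true;  /* 1..8 sectors, nsec = last_sect - first_sect + 1   */
    }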
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, This is V4 of the trim support for xen-blkfront/blkback. We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD and dropped all the "trim" naming in the patches, using "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, otherwise we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
2012 May 07
14
Little help with blk ring
..._GET_REQUEST(priv,priv->req_prod_pvt); ring_req->id = 9; ring_req->nr_segments=1; ring_req->operation = BLKIF_OP_READ; ring_req->sector_number = (int)op->lba; //sector to be read ring_req->seg[0].gref = (bi->buffer_gref); //this should be get_free_gref(); ring_req->seg[0].first_sect = 0;//op->lba; ring_req->seg[0].last_sect = 7;//op->lba + op->count; RING_PUSH_REQUESTS_AND_CHECK_NOTIFY((priv),notify); //return notify=0 if(notify){ dprintf(1,"Start notify procedure\n"); evtchn_send_t send; send.port = (bi->port); dprintf(1,"In notify befo...
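A cleaned-up sketch of the sequence the snippet above is attempting, assuming the standard Xen shared-ring macros, a front ring at priv->ring, a grant reference bi->buffer_gref already set up for the data page, and an event-channel helper such as notify_remote_via_evtchn; names other than the ring macros are taken from the snippet or are illustrative:

    /* Queue a single-segment 8-sector read and kick the backend. */
    struct blkif_request *ring_req;
    int notify;

    ring_req = RING_GET_REQUEST(&priv->ring, priv->ring.req_prod_pvt);
    ring_req->id            = 9;               /* cookie echoed back in the response */
    ring_req->operation     = BLKIF_OP_READ;
    ring_req->nr_segments   = 1;
    ring_req->sector_number = op->lba;          /* first sector to read               */
    ring_req->seg[0].gref       = bi->buffer_gref;
    ring_req->seg[0].first_sect = 0;            /* fill the whole granted page:       */
    ring_req->seg[0].last_sect  = 7;            /* sectors 0..7 (8 x 512 bytes)       */

    priv->ring.req_prod_pvt++;                  /* consume the request slot           */
    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&priv->ring, notify);
    if (notify)
            notify_remote_via_evtchn(bi->port); /* only kick if the backend needs it  */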
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
.../ + pending_req->blkif->pers_gnts + [pending_req->blkif->pers_gnt_c - segs_to_init + + i]->handle = map[i].handle; + new_pers_gnts[i]->dev_bus_addr = map[i].dev_bus_addr; + } if (ret) continue; - - seg[i].buf = map[i].dev_bus_addr | - (req->u.rw.seg[i].first_sect << 9); + } + for (i = 0; i < nseg; i++) { + if (use_pers_gnts) { + pending_handle(pending_req, i) = pers_gnts[i]->handle; + seg[i].buf = pers_gnts[i]->dev_bus_addr | + (req->u.rw.seg[i].first_sect << 9); + } else { + pending_handle(pending_req, i) = map[i].handle;...
2012 Apr 10
7
[PATCH v3 1/2] xen: enter/exit lazy_mmu_mode around m2p_override calls
...lazy_mmu_mode(); for (i = 0; i < nseg; i++) { if (unlikely(map[i].status != 0)) { pr_debug(DRV_PFX "invalid buffer -- could not remap it\n"); @@ -410,6 +411,7 @@ static int xen_blkbk_map(struct blkif_request *req, seg[i].buf = map[i].dev_bus_addr | (req->u.rw.seg[i].first_sect << 9); } + arch_leave_lazy_mmu_mode(); return ret; } diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c index b4d4eac..c7dc2d6 100644 --- a/drivers/xen/grant-table.c +++ b/drivers/xen/grant-table.c @@ -751,6 +751,8 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map...
2008 Mar 10
12
[RFC][PATCH] Use ioemu block drivers through blktap
When I submitted the qcow2 patch for blktap, suggestions came up that the qemu block drivers should also be used for blktap to eliminate the current code duplication in ioemu and blktap. The attached patch adds support for a tap:ioemu pseudo driver. Devices using this driver won't use tapdisk (which contains the code duplication) any more, but will connect to the qemu-dm of the domain. In
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass request descriptors to tapdisk, as well as responses from tapdisk to the front-end. Requests from this ring end up in tapdisk's standard request queue. When the tapback daemon detects that the front-end tries to connect to the back-end, it spawns a tapdisk and tells it to connect to the shared ring. The shared
2013 May 13
22
[PATCH] xen-blk(front|back): Handle large physical sector disks
I accidentally realized today that any domUs using the paravirt disk driver potentially suffer from poor performance when they are handed a physical volume and partitioning is done inside the guest. The physical volume passed in has to be one that has the compat 512-byte logical sector size but hints its real sector size (e.g. 4096) as the physical sector size. In dom0 handling is correct and
2010 Sep 15
15
xenpaging fixes for kernel and hypervisor
Patrick, the following patches fix xenpaging for me. Granttable handling is incomplete: if a page is gone, GNTST_eagain should be returned to the caller to indicate that the hypercall has to be retried after a while, until the page is available again. Please review. Olaf
2013 Jul 15
21
[PATCH 00 of 21 RESEND] blktap3/drivers: Introduce tapdisk server.
This patch series copies the core of the tapdisk process from blktap2, with updates coming from blktap2.5. Signed-off-by: Thanos Makatos <thanos.makatos@citrix.com>
2007 Apr 18
20
[patch 00/20] XEN-paravirt: Xen guest implementation for paravirt_ops interface
This patch series implements the Linux Xen guest in terms of the paravirt-ops interface. The features implemented in this patch series are: * domU only * UP only (most code is SMP-safe, but there's no way to create a new vcpu) * writable pagetables, with late pinning/early unpinning (no shadow pagetable support) * supports both PAE and non-PAE modes * xen console * virtual block
2007 Apr 18
24
[patch 00/24] Xen-paravirt_ops: Xen guest implementation for paravirt_ops interface
Hi Andi, This patch series implements the Linux Xen guest as a paravirt_ops backend. The features implemented in this patch series are: * domU only * UP only (most code is SMP-safe, but there's no way to create a new vcpu) * writable pagetables, with late pinning/early unpinning (no shadow pagetable support) * supports both PAE and non-PAE modes * xen hvc console (console=hvc0) *
2007 Apr 18
43
[RFC PATCH 00/35] Xen i386 paravirtualization support
Unlike full virtualization in which the virtual machine provides the same platform interface as running natively on the hardware, paravirtualization requires modification to the guest operating system to work with the platform interface provided by the hypervisor. Xen was designed with performance in mind. Calls to the hypervisor are minimized, batched if necessary, and non-critical codepaths