search for: end_block_io_op

Displaying 6 results from an estimated 6 matches for "end_block_io_op".

2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
Hello everybody, I've finished with pci export from DomU to Dom0 (Debian Etch), but now I have a new problem, and a big one. My ethernet card is dropping packets, but only after some time (I can't tell how long). It can work for a day (not in production, so not hard tested) and then all packets are dropped. Look at the ifconfig output: eth0      Link encap:Ethernet  HWaddr
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...uct xen_blkif *blkif,
 	} else if (err)
 		status = BLKIF_RSP_ERROR;
-	make_response(blkif, req->u.discard.id, req->operation, status);
+	make_response(blkif, req->id, BLKIF_OP_DISCARD, 0, status);
 	xen_blkif_put(blkif);
 	return err;
 }
@@ -470,7 +500,8 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
 		xen_blkbk_unmap(pending_req);
 		make_response(pending_req->blkif, pending_req->id,
-			      pending_req->operation, pending_req->status);
+...
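For context, the second hunk touches blkback's bio-completion path. A minimal paraphrase of that pattern, with types and helpers abbreviated from the blkback source rather than quoted exactly:

/* Paraphrased sketch, not the exact driver code: each pending_req
 * tracks how many bios were submitted for it (pendcnt); whichever
 * bio completes last unmaps the granted pages and sends the single
 * response back to the frontend. */
static void __end_block_io_op(struct pending_req *pending_req, int error)
{
	if (error)
		pending_req->status = BLKIF_RSP_ERROR;

	/* Last completion wins: the refcount hits zero exactly once. */
	if (atomic_dec_and_test(&pending_req->pendcnt)) {
		xen_blkbk_unmap(pending_req);
		make_response(pending_req->blkif, pending_req->id,
			      pending_req->operation, pending_req->status);
		xen_blkif_put(pending_req->blkif);
		free_req(pending_req);
	}
}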
2011 May 02
32
[PATCH] blkback: Fix block I/O latency issue
In the blkback driver, after I/O requests are submitted to the Dom-0 block I/O subsystem, blkback effectively goes to 'sleep' without letting blkfront know about it (req_event isn't set appropriately). Hence blkfront doesn't notify blkback when it submits a new I/O, delaying the 'dispatch' of the new I/O to the Dom-0 block I/O subsystem. The new I/O is
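The race described here is the classic lost-wakeup between a ring producer and consumer. A hedged sketch of the consumer-side pattern the fix relies on, using the RING_FINAL_CHECK_FOR_REQUESTS macro from xen/interface/io/ring.h (the surrounding function and field names are illustrative, not the literal patch):

static void blkif_drain_ring_sketch(struct xen_blkif *blkif)
{
	int more_to_do;

	do {
		/* Drain whatever is currently on the shared ring. */
		more_to_do = do_block_io_op(blkif);

		/* Before sleeping, advance req_event past req_cons so the
		 * frontend knows it must send an event for its next
		 * request, then re-check the ring in case a request
		 * slipped in meanwhile. Without this, blkfront never
		 * learns it has to notify, and new I/O sits on the ring
		 * until some unrelated event arrives. */
		RING_FINAL_CHECK_FOR_REQUESTS(&blkif->blk_rings.common,
					      more_to_do);
	} while (more_to_do);
}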
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, This is the V4 of the trim support for xen-blkfront/blkback. We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all the "trim" naming from the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, otherwise we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
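For orientation, the discard operation reuses the normal ring request slot. Roughly the layout the series converges on, paraphrased from memory of the merged include/xen/interface/io/blkif.h rather than the V4 posting itself, so treat field names and ordering as approximate:

/* Approximate sketch of the shared-ring request after the series:
 * the fixed header gains a union so BLKIF_OP_DISCARD can carry an
 * extent instead of segment descriptors. */
struct blkif_request {
	uint8_t        operation;    /* BLKIF_OP_DISCARD, _READ, _WRITE, ... */
	uint8_t        nr_segments;  /* unused for discard */
	blkif_vdev_t   handle;
	uint64_t       id;           /* echoed back in the response */
	union {
		struct {
			blkif_sector_t sector_number;  /* first sector */
			uint64_t       nr_sectors;     /* extent length */
		} discard;
	} u;
};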
2012 Apr 10
7
[PATCH v3 1/2] xen: enter/exit lazy_mmu_mode around m2p_override calls
This patch is a significant performance improvement for the m2p_override: about 6% when using the gntdev device. Each m2p_add/remove_override call issues a MULTI_grant_table_op and a __flush_tlb_single if kmap_op != NULL. Batching all the calls together is a great performance benefit because it means issuing one hypercall in total rather than two hypercalls per page. If paravirt_lazy_mode is set
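The batching idea is the generic paravirt lazy-MMU pattern. A minimal sketch, where the caller and loop shape are placeholders (the real call sites are in the gntdev driver) but arch_enter/leave_lazy_mmu_mode and m2p_add_override are the kernel interfaces the patch wraps:

/* Sketch only: while lazy MMU mode is active, the PTE updates and
 * grant-table multicalls issued inside the loop are queued and then
 * flushed as a single hypercall on leave, instead of one or two
 * hypercalls per page. */
static void map_pages_batched(unsigned long *mfns, struct page **pages,
			      struct gnttab_map_grant_ref *kmap_ops,
			      int count)
{
	int i;

	arch_enter_lazy_mmu_mode();        /* start queueing MMU ops */
	for (i = 0; i < count; i++)
		m2p_add_override(mfns[i], pages[i], &kmap_ops[i]);
	arch_leave_lazy_mmu_mode();        /* one flush, one hypercall */
}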
2011 Nov 17
29
[PATCH 00 of 17] Documentation updates
The following series flushes my documentation queue and replaces previous postings of those patches. The main difference is that the xl cfg file is now formatted using POD instead of markdown and presented as a manpage. I have setup a cron job to build docs/html and publish it at http://xenbits.xen.org/docs/unstable/ (it's a bit bare right now). The motivation for some of these patches