similar to: [RFC][PATCH] Use ioemu block drivers through blktap

Displaying 16 results from an estimated 300 matches similar to: "[RFC][PATCH] Use ioemu block drivers through blktap"

2012 Apr 02
23
[PATCH 00 of 18] [v2] tools: fix bugs and build errors triggered by -O2 -Wall -Werror
Changes:
tools/blktap: remove unneeded pointer dereferencing in convert_dev_name_to_num
tools/blktap: constify string arrays in convert_dev_name_to_num
tools/blktap: fix params and physical-device parsing
tools/blktap: remove unneeded pointer dereferencing from img2qcow.c
tools/blktap: remove unneeded pointer dereferencing from qcow2raw.c
tools/blktap2: fix build errors caused by Werror in
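For flavour, the kind of change these subjects describe is mechanical; here is a hypothetical sketch (not from the series) of what constifying a string array in a name-to-number helper looks like, which is the usual way to satisfy -Wall -Werror:

/* Hypothetical illustration only -- not code from the patch series.
 * Constifying a lookup table like this is the typical fix for
 * -Wwrite-strings/-Wall -Werror complaints in a name-to-number helper. */
#include <string.h>

static const char *const dev_names[] = { "hda", "hdb", "xvda", "xvdb" };

static int convert_dev_name_to_num(const char *name)
{
    size_t i;

    for (i = 0; i < sizeof(dev_names) / sizeof(dev_names[0]); i++)
        if (strcmp(name, dev_names[i]) == 0)
            return (int)i;
    return -1; /* unknown device name */
}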
2013 Jul 15
21
[PATCH 00 of 21 RESEND] blktap3/drivers: Introduce tapdisk server.
This patch series copies the core of the tapdisk process from blktap2, with updates coming from blktap2.5. Signed-off-by: Thanos Makatos <thanos.makatos@citrix.com>
2016 Jun 01
2
Slow RAID Check/high %iowait during check after upgrade from CentOS 6.5 -> CentOS 7.2
Kelly Lesperance wrote:
> I did some additional testing - I stopped Kafka on the host, and kicked
> off a disk check, and it ran at the expected speed overnight. I started
> kafka this morning, and the raid check's speed immediately dropped down to
> ~2000K/Sec.
>
> I then enabled the write-back cache on the drives (hdparm -W1 /dev/sd*).
> The raid check is now running
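For reference, hdparm -W1 enables the drive's write-back cache by issuing an ATA SET FEATURES command. A rough C equivalent of that single step is sketched below; the register values are the usual SET FEATURES (0xEF) / enable-write-cache (0x02) pair, and the sketch is illustrative, not a replacement for hdparm:

/* Rough sketch of what `hdparm -W1 /dev/sdX` does: issue ATA SET FEATURES
 * via HDIO_DRIVE_CMD. args = { command, nsector, feature, count }. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
    unsigned char args[4] = { 0xEF /* SET FEATURES */, 0,
                              0x02 /* enable write cache */, 0 };
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0 || ioctl(fd, HDIO_DRIVE_CMD, args) < 0) {
        perror("HDIO_DRIVE_CMD");
        return 1;
    }
    printf("write-back cache enabled on %s\n", argv[1]);
    return 0;
}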
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass request descriptors to tapdisk, as well as responses from tapdisk to the front-end. Requests from this ring end up in tapdisk's standard request queue. When the tapback daemon detects that the front-end tries to connect to the back-end, it spawns a tapdisk and tells it to connect to the shared ring. The shared
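The ring follows the standard Xen shared-ring pattern; below is a minimal sketch of how such a ring could be declared and attached with the generic macros from the public ring.h header. The request/response layouts here are placeholders, not blktap3's real definitions:

/* Placeholder request/response layouts -- blktap3's real ones differ.
 * This only shows the standard shared-ring plumbing. */
#include <stdint.h>
#include <xen/io/ring.h>   /* DEFINE_RING_TYPES, BACK_RING_INIT, ... */

#define RING_PAGE_SIZE 4096

struct tap_request  { uint64_t id; uint64_t sector; uint8_t  operation; };
struct tap_response { uint64_t id; int16_t  status; };

/* Generates struct tap_sring, tap_front_ring, tap_back_ring + helpers. */
DEFINE_RING_TYPES(tap, struct tap_request, struct tap_response);

/* tapdisk (back-end) side, once the shared page has been mapped: */
static void tap_ring_attach(void *shared_page, struct tap_back_ring *ring)
{
    BACK_RING_INIT(ring, (struct tap_sring *)shared_page, RING_PAGE_SIZE);
}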
2016 Jun 13
1
Slow RAID Check/high %iowait during check after updgrade from CentOS 6.5 -> CentOS 7.2
On 2016-06-01 20:07, Kelly Lesperance wrote:
> Software RAID 10. Servers are HP DL380 Gen 8s, with 12x4 TB 7200 RPM drives.
>
> On 2016-06-01, 3:52 PM, "centos-bounces at centos.org on behalf of m.roth at 5-cent.us" <centos-bounces at centos.org on behalf of m.roth at 5-cent.us> wrote:
>
> >Kelly Lesperance wrote:
> >> I did some additional testing - I
2011 Jun 21
13
VM disk I/O limit patch
Hi all, I've added a blkback QoS patch. With it you can configure (dynamically or statically) a different I/O speed for each VM disk.
----------------------------------------------------------------------------
diff -urNp blkback/blkback.c blkback-qos/blkback.c
--- blkback/blkback.c 2011-06-22 07:54:19.000000000 +0800
+++ blkback-qos/blkback.c 2011-06-22 07:53:18.000000000 +0800
@@ -44,6 +44,11 @@
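The excerpt cuts off before the mechanism, but per-disk I/O throttling of this kind is typically a token-bucket check in the request path. A self-contained sketch of the idea (illustrative only; the posted patch may do its accounting differently):

/* Token-bucket rate limiter, illustrative only. Tokens refill with
 * elapsed time; a request proceeds only if enough tokens remain.
 * Overflow handling for very long idle gaps is omitted. */
#include <stdint.h>
#include <stdbool.h>

struct vbd_qos {
    uint64_t tokens;   /* bytes currently allowed */
    uint64_t rate;     /* bytes per second (the configured limit) */
    uint64_t burst;    /* bucket capacity in bytes */
    uint64_t last_ns;  /* time of last refill */
};

static bool vbd_qos_admit(struct vbd_qos *q, uint64_t now_ns, uint64_t bytes)
{
    uint64_t refill = (now_ns - q->last_ns) * q->rate / 1000000000ull;

    q->tokens = q->tokens + refill > q->burst ? q->burst : q->tokens + refill;
    q->last_ns = now_ns;

    if (q->tokens < bytes)
        return false;  /* defer: requeue the request and retry later */
    q->tokens -= bytes;
    return true;
}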
2005 Jun 07
3
Error while creating domains
I am trying to start a large number of SMP domains (> 50). However, I am unable to create more than 7 domains. When I try creating the 8th domain, I get this error:

Using config file "myconf7".
VIRTUAL MEMORY ARRANGEMENT:
 Loaded kernel:   0xc0100000->0xc0344c24
 Init. ramdisk:   0xc0345000->0xc0345000
 Phys-Mach map:   0xc0345000->0xc0347800
 Page tables:
2010 Sep 15
15
xenpaging fixes for kernel and hypervisor
Patrick, the following patches fix xenpaging for me. Grant-table handling is incomplete: if a page is gone, GNTST_eagain should be returned to the caller to indicate that the hypercall has to be retried after a while, until the page is available again. Please review. Olaf
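The retry contract described above amounts to a small loop in the caller. A kernel-style sketch (GNTST_eagain and GNTTABOP_map_grant_ref are real Xen interface names; the backoff policy and function name are invented):

/* Sketch only: retry a grant map while the target page is paged out. */
#include <linux/delay.h>
#include <linux/errno.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

static int map_grant_with_retry(struct gnttab_map_grant_ref *op)
{
    for (;;) {
        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, op, 1))
            return -EFAULT;        /* the hypercall itself failed */
        if (op->status != GNTST_eagain)
            return op->status;     /* mapped, or a hard error */
        msleep(10);                /* xenpaging is paging the page back in */
    }
}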
2011 May 02
32
[PATCH] blkback: Fix block I/O latency issue
In the blkback driver, after I/O requests are submitted to the Dom-0 block I/O subsystem, blkback effectively goes to 'sleep' without letting blkfront know about it (req_event isn't set appropriately). Hence blkfront doesn't notify blkback when it submits a new I/O, which delays the 'dispatch' of the new I/O to the Dom-0 block I/O subsystem. The new I/O is
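The fix implied here is the standard final-check idiom from the shared-ring macros: re-arm req_event before sleeping, so blkfront knows it must send a notification on its next request. A hedged sketch of the idiom (the waitqueue and the pending_requests() predicate are stand-ins, not the literal patch):

/* RING_FINAL_CHECK_FOR_REQUESTS sets sring->req_event just past the last
 * request we have seen, so the front end sends an event when it queues the
 * next I/O; only then is it safe for blkback to sleep.
 * pending_requests() is an invented stand-in predicate. */
#include <linux/wait.h>
#include <xen/interface/io/blkif.h>

static void blkif_sleep_until_work(struct blkif_back_ring *ring,
                                   wait_queue_head_t *wq)
{
    int more_to_do;

    RING_FINAL_CHECK_FOR_REQUESTS(ring, more_to_do);
    if (!more_to_do)
        wait_event_interruptible(*wq, pending_requests(ring));
}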
2012 Sep 19
27
[PATCH] Persistent grant maps for xen blk drivers
This patch implements persistent grants for the xen-blk{front,back} mechanism. The effect of this change is to reduce the number of unmap operations performed, since they cause a (costly) TLB shootdown. This allows the I/O performance to scale better when a large number of VMs are performing I/O. Previously, the blkfront driver was supplied a bvec[] from the request queue. This was granted to
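A hedged sketch of the structure this implies: pages granted once at connect time and reused, with a copy, for every request, so there is no per-request unmap and no TLB shootdown (field and function names invented, not the merged code):

/* Illustrative only. Instead of granting the bio's own pages, data is
 * copied through a persistently granted page whose gref goes in the ring
 * request; the grant is never revoked while the device is connected. */
#include <stdint.h>
#include <string.h>

typedef uint32_t grant_ref_t;

struct persistent_gnt {
    grant_ref_t gref;   /* granted once when the device connects */
    void       *page;   /* local mapping, reused for every request */
};

static grant_ref_t fill_segment(struct persistent_gnt *pg,
                                const void *data, size_t len)
{
    memcpy(pg->page, data, len);   /* write path; read path copies back */
    return pg->gref;
}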
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, this is V4 of the trim support for xen-blkfront/blkback. We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all the "trim" naming from the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, since without that we would hit problems when using a non-native protocol. The patches have been tested with both an SSD and a raw file; with the SSD we will
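For context, a discard request on the blkif ring carries no data segments, only a sector range. Its rough shape is reconstructed below from memory of the later upstream headers, so treat the exact layout as illustrative; per-ABI layouts of structures like this are precisely what the blkif_x86_{32|64}_request helpers translate:

/* Approximate shape of a blkif discard request -- illustrative only. */
#include <stdint.h>

typedef uint64_t blkif_sector_t;

struct blkif_request_discard {
    uint8_t        flag;           /* e.g. a secure-discard hint */
    uint16_t       _pad1;          /* vdev field in the native layout */
    uint64_t       id;             /* echoed back in the response */
    blkif_sector_t sector_number;  /* first sector to discard */
    uint64_t       nr_sectors;     /* length of the range */
};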
2006 Dec 05
5
ioctl 0000126c not supported by XL blkif
I am using the srpm from http://xenbits.xensource.com/kernels/rhel3x/kernel-2.4.21-47.0.1.EL.xs0.3.5.15.src.rpm (I get the same issue using the binary RPM); the dom0 is running 3.0.3_0. Upon booting the DomU (the DomU has been passed phy:/dev/sda6, which has been partitioned using qemu), I get the following ioctl errors:

ioctl 0000126c not supported by XL blkif
ioctl 0000126c not supported
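As an aside, the number itself narrows things down: Linux ioctl numbers encode a type byte and a command number, and 0x126c decodes to type 0x12, the block-device (BLK*) ioctl family, so the 2.4-era XL blkif driver is rejecting a block ioctl it does not implement. A quick decode:

/* Decode an ioctl number into its type/nr fields: 0x126c -> type 0x12
 * (the Linux block-device ioctl family), nr 0x6c. */
#include <stdio.h>
#include <sys/ioctl.h>   /* pulls in _IOC_TYPESHIFT, _IOC_NRSHIFT, masks */

int main(void)
{
    unsigned int nr = 0x126c;

    printf("type=0x%02x nr=0x%02x\n",
           (nr >> _IOC_TYPESHIFT) & _IOC_TYPEMASK,
           (nr >> _IOC_NRSHIFT)  & _IOC_NRMASK);
    return 0;
}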
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
Hello everybody, I've finished with PCI export from DomU to Dom0 (Debian Etch), but now I have a new problem, and a big one. My ethernet card starts dropping packets after some time (I can't tell exactly when). It can work for a day (not in production, so not hard tested) and then all packets are dropped. Look at the ifconfig output:

eth0      Link encap:Ethernet  HWaddr
2012 Jul 11
12
99% iowait on one core in 8 core processor
Hi all, we have a Xen server with an 8-core processor. I can see 99% iowait on core 0 only:

02:28:49 AM  CPU  %user  %nice  %sys  %iowait  %irq  %soft  %steal  %idle   intr/s
02:28:54 AM  all   0.00   0.00  0.00    12.65  0.00   0.02    2.24  85.08  1359.88
02:28:54 AM    0   0.00   0.00  0.00    96.21  0.00   0.20    3.19   0.40   847.11
02:28:54 AM
2011 Oct 08
9
xentop reporting zero written sectors
I was just moving a chunk of files from one filesystem on xvda to another on xvdb, and was monitoring with xentop as it was taking longer than expected. The VBD_RD and VBD_WR counters were both clocking up as expected, as was the VBD_RSECT counter, but the VBD_WSECT counter was stuck on zero. I toggled on the individual VBD device counters and these showed the same (with the RD and WR counters
2012 Mar 05
11
[PATCH 0001/001] xen: multi page ring support for block devices
From: Santosh Jodh <santosh.jodh at citrix.com>

Add support for multi-page rings for block devices. The number of pages is configurable for blkback via a module parameter. blkback reports max-ring-page-order to blkfront via xenstore; blkfront reports its supported ring-page-order to blkback via xenstore, and publishes its ring references via ring-refNN entries in xenstore. The change allows
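A hedged sketch of the front-end half of that negotiation using the Linux xenbus helpers (error handling elided; the function and variable names are invented, not the posted patch):

/* Read the back end's max-ring-page-order, clamp our preferred order to
 * it, then publish ring-page-order plus one ring-ref%u per ring page. */
#include <linux/kernel.h>   /* min(), snprintf() */
#include <xen/xenbus.h>     /* xenbus_scanf(), xenbus_printf(), XBT_NIL */
#include <xen/grant_table.h>

static void publish_ring_refs(struct xenbus_device *dev,
                              struct xenbus_transaction xbt,
                              grant_ref_t *gref, unsigned int want_order)
{
    unsigned int max_order = 0, order, i;
    char node[16];

    /* An old backend that lacks the key only supports a one-page ring. */
    if (xenbus_scanf(XBT_NIL, dev->otherend,
                     "max-ring-page-order", "%u", &max_order) != 1)
        max_order = 0;
    order = min(want_order, max_order);

    xenbus_printf(xbt, dev->nodename, "ring-page-order", "%u", order);
    for (i = 0; i < (1u << order); i++) {
        snprintf(node, sizeof(node), "ring-ref%u", i);
        xenbus_printf(xbt, dev->nodename, node, "%u", gref[i]);
    }
}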