Hello,

I've recently set up a Linux Dom0 with a 3.0.17 kernel and Xen 4.1.2, and since the 3.x kernel series doesn't have blktap support I'm using qdisk to attach raw images. I've been playing with small images, around 1GB, and everything seemed fine: speed was not fantastic, but it was acceptable. Today I set up a bigger machine with a 20GB raw hard disk, and the disk write throughput is really slow, below 0.5MB/s. I'm trying to install a Debian PV guest there, and after more than 3 hours it is still installing the base system.

I've looked at the xenstore backend entries, and everything looks fine:

/local/domain/0/backend/qdisk/21/51712/frontend = "/local/domain/21/device/vbd/51712" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/params = "aio:/hdd/vm/servlet/servlet.img" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/frontend-id = "21" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/online = "1" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/removable = "0" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/bootable = "1" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/state = "4" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/dev = "xvda" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/type = "tap" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/mode = "w" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/feature-barrier = "1" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/info = "0" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/sector-size = "512" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/sectors = "40960000" (n0,r21)
/local/domain/0/backend/qdisk/21/51712/hotplug-status = "connected" (n0,r21)

Also, the related qemu-dm process doesn't seem to be CPU-bound; in fact it reports roughly 0% CPU usage almost all the time. I attached to the qemu-dm process with strace, and it is doing lseeks and writes like crazy; is this normal? Is there any improvement when using qemu-upstream?

Thanks, Roger.
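For reference, the setup described above corresponds to a guest config disk line like the one sketched below. It is reconstructed from the xenstore "params", "dev" and "mode" entries, not taken from Roger's actual config file, and the pgrep match string is a placeholder for this particular guest:

    # guest config disk line implied by the xenstore entries above
    disk = [ 'tap:aio:/hdd/vm/servlet/servlet.img,xvda,w' ]

    # attach to the running qemu-dm and watch the lseek/write pattern
    pid=$(pgrep -f 'qemu-dm.*servlet')
    strace -p "$pid" -e trace=lseek,write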
On Fri, 10 Feb 2012, Roger Pau Monné wrote:
> [...]
> Also, the related qemu-dm process doesn't seem to be CPU-bound; in
> fact it reports roughly 0% CPU usage almost all the time. I attached
> to the qemu-dm process with strace, and it is doing lseeks and writes
> like crazy; is this normal? Is there any improvement when using
> qemu-upstream?

Yes, great improvements. The old qemu-xen uses threads to simulate async IO, so it is very slow; upstream QEMU uses Linux AIO and is much faster.

I wouldn't expect it to hang completely though, that might be a bug.
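To make the threads-vs-AIO point concrete, below is a hedged sketch of trying the upstream device model. On Xen 4.1 this generally means building upstream QEMU with Xen support yourself; the configure flags are standard QEMU options, but the config key is an assumption here, since device_model_version only appears in the xl toolstack from Xen 4.2 onwards:

    # build upstream QEMU with Xen support (illustrative flags)
    ./configure --enable-xen --target-list=i386-softmmu
    make

    # xl guest config key selecting the upstream device model
    # (available from Xen 4.2; shown as an assumption for this setup)
    device_model_version = "qemu-xen"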
2012/2/10 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> Yes, great improvements. The old qemu-xen uses threads to simulate
> async IO, so it is very slow; upstream QEMU uses Linux AIO and is
> much faster.

That's great news; so qdisk performance in qemu-upstream should be similar to blktap?

> I wouldn't expect it to hang completely though, that might be a bug.

No, it doesn't hang completely, it is just very slow.
On Mon, 13 Feb 2012, Roger Pau Monné wrote:
> That's great news; so qdisk performance in qemu-upstream should be
> similar to blktap?

Slightly better actually, from my very quick and dirty tests.
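A quick way to reproduce this kind of "quick and dirty" comparison is a crude sequential write test from inside the guest; the file path and sizes below are placeholders, not values from this thread:

    # crude sequential write throughput test on the qdisk-backed disk;
    # conv=fdatasync flushes the data to storage before dd reports a rate
    dd if=/dev/zero of=/root/ddtest bs=1M count=256 conv=fdatasync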
2012/2/13 Stefano Stabellini <stefano.stabellini@eu.citrix.com>:
> Slightly better actually, from my very quick and dirty tests.

I'm sure blktap has some context switches (from kernel to userspace) that qdisk doesn't have, since qdisk is a pure userland implementation, so it's plausible for qdisk to be faster.
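One way to observe the syscall-pattern difference behind this reasoning is a syscall summary of the device model while the guest is writing; the PID variable is a placeholder:

    # summarise ~30s of the device model's syscalls during guest IO;
    # with Linux AIO the work should appear as io_submit/io_getevents
    # instead of the per-request lseek+write pairs seen with qemu-dm
    timeout 30 strace -c -p "$QEMU_PID"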