I can't get tapdisk to run after upgrading to 3.11 (wheezy-backports).
strace tells me it is failing on the call to io_setup with EINVAL.

The man page for io_setup tells me that the ctx_idp parameter should be
initialised to 0, and that EINVAL means ctx_idp is not initialised.
Looking at the code for tapdisk, it appears to initialise ctx_idp to 1,
with a comment:

/*
 * We used a kernel patch to return an fd associated with the AIO context
 * so that we can concurrently poll on synchronous and async descriptors.
 * This is signalled by passing 1 as the io context to io_setup.
 */
#define REQUEST_ASYNC_FD ((io_context_t)1)

And if I create a basic test program, I can see that calling io_setup
with the context initialised to 1 does indeed return EINVAL.

What is this patch the comment refers to? It was working under Debian
3.2.x.

Thanks

James
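P.S. For anyone who wants to reproduce this, something like the
following is enough (a minimal sketch using the raw syscall rather than
libaio; the nr_events value of 128 is arbitrary):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>

int main(void)
{
	/* Zero-initialised, as io_setup(2) requires: succeeds. */
	aio_context_t ctx0 = 0;
	long rc = syscall(SYS_io_setup, 128, &ctx0);
	printf("ctx=0: rc=%ld (%s)\n", rc, rc ? strerror(errno) : "ok");

	/* Initialised to 1, as tapdisk's REQUEST_ASYNC_FD does:
	 * fails with EINVAL here on 3.11. */
	aio_context_t ctx1 = 1;
	rc = syscall(SYS_io_setup, 128, &ctx1);
	printf("ctx=1: rc=%ld (%s)\n", rc, rc ? strerror(errno) : "ok");

	return 0;
}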
I advise you to try qdisk instead of blktap; the performance increase is
very big. With systems such as Windows, which use the disk even when
plenty of RAM is still free, the difference is clearly visible.

Here is one screenshot of a benchmark from when I switched from the
blktap2 of the Debian package to the qdisk of upstream qemu some months
ago:
http://lists.xen.org/archives/html/xen-devel/2013-07/jpgV4ajjhOQ11.jpg

I did the benchmark on Windows 7 with your gplpv.

2013/12/7 James Harper <james.harper@bendigoit.com.au>
> I can't get tapdisk to run after upgrading to 3.11 (wheezy-backports).
> strace tells me it is failing on the call to io_setup with EINVAL.
> [...]
> I advise you to try qdisk instead of blktap; the performance increase
> is very big. With systems such as Windows, which use the disk even
> when plenty of RAM is still free, the difference is clearly visible.
>
> Here is one screenshot of a benchmark from when I switched from the
> blktap2 of the Debian package to the qdisk of upstream qemu some
> months ago:
> http://lists.xen.org/archives/html/xen-devel/2013-07/jpgV4ajjhOQ11.jpg
>
> I did the benchmark on Windows 7 with your gplpv.

I'm actually using ceph as the backend, and also using it on PV DomUs.
Is qdisk only for HVM?

James
> I can't get tapdisk to run after upgrading to 3.11 (wheezy-backports).
> strace tells me it is failing on the call to io_setup with EINVAL.
> [...]
> What is this patch the comment refers to? It was working under Debian
> 3.2.x.

I got it working by initialising ctx_idp to 0. I believe it uses eventfd
now anyway, so the patch is no longer required.

James
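P.S. For concreteness, the change amounts to something like this (a
sketch only; the variable names here are assumed, not tapdisk's actual
ones):

	/* Before: pass 1 to signal the old kernel patch. */
	/* io_context_t aio_ctx = REQUEST_ASYNC_FD; */

	/* After: zero-initialise the context, as io_setup(2) requires. */
	io_context_t aio_ctx = 0;
	int err = io_setup(max_aio_events, &aio_ctx);
	if (err < 0)
		return err;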
2013/12/7 James Harper <james.harper@bendigoit.com.au>
> > I advise you to try qdisk instead of blktap; the performance
> > increase is very big. With systems such as Windows, which use the
> > disk even when plenty of RAM is still free, the difference is
> > clearly visible.
> > [...]
>
> I'm actually using ceph as the backend, and also using it on PV DomUs.
> Is qdisk only for HVM?

It is also for PV domUs. I have been using qdisk for both PV and HVM for
months and have found no problems, only increased performance.
> > I'm actually using ceph as the backend, and also using it on PV
> > DomUs. Is qdisk only for HVM?
>
> It is also for PV domUs. I have been using qdisk for both PV and HVM
> for months and have found no problems, only increased performance.

Any pointers on where I might get started using qdisk? I imagine that
porting ceph rbd to qdisk might not be particularly hard, if it hasn't
been done already.

James
2013/12/7 James Harper <james.harper@bendigoit.com.au>
> Any pointers on where I might get started using qdisk? I imagine that
> porting ceph rbd to qdisk might not be particularly hard, if it hasn't
> been done already.

It seems that upstream qemu has supported ceph since version 0.14:
http://wiki.qemu.org/ChangeLog/0.14#ceph.2Frbd

I have never tried ceph, so I can't say anything about it.
> I got it working by initialising ctx_idp to 0. I believe it uses
> eventfd now anyway, so the patch is no longer required.

Okay, that was stupid. The result was that tapdisk spins at 100% CPU.

The problem is that the Debian uname is '3.11-0.bpo.2-amd64', and
tapdisk detects the kernel version (to decide whether eventfd is
available) by looking for 'x.y.z', which fails on that string. This
works for me:

diff --git a/drivers/tapdisk-utils.c b/drivers/tapdisk-utils.c
index 4c45c83..7825e13 100644
--- a/drivers/tapdisk-utils.c
+++ b/drivers/tapdisk-utils.c
@@ -256,8 +256,12 @@ int tapdisk_linux_version(void)
 		return -errno;
 
 	n = sscanf(uts.release, "%u.%u.%u", &version, &patchlevel, &sublevel);
-	if (n != 3)
-		return -ENOSYS;
+	if (n != 3) {
+		sublevel = 0;
+		n = sscanf(uts.release, "%u.%u", &version, &patchlevel);
+		if (n != 2)
+			return -ENOSYS;
+	}
 
 	return KERNEL_VERSION(version, patchlevel, sublevel);
 }

James
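P.S. To illustrate the failure mode, a trivial standalone test
(hypothetical; not part of the patch) shows why the original sscanf
returns 2 rather than 3 on the backport release string:

#include <stdio.h>

int main(void)
{
	unsigned version, patchlevel, sublevel;

	/* "3.11-0.bpo.2-amd64" has no '.' after the second number, so
	 * the third %u is never reached and only two fields match. */
	int n = sscanf("3.11-0.bpo.2-amd64", "%u.%u.%u",
		       &version, &patchlevel, &sublevel);
	printf("n = %d\n", n);	/* prints: n = 2 */

	/* The fallback added by the patch matches both fields. */
	n = sscanf("3.11-0.bpo.2-amd64", "%u.%u", &version, &patchlevel);
	printf("n = %d\n", n);	/* prints: n = 2 */

	return 0;
}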
On Sat, 7 Dec 2013, James Harper wrote:
> > > I'm actually using ceph as the backend, and also using it on PV
> > > DomUs. Is qdisk only for HVM?
> >
> > It is also for PV domUs. I have been using qdisk for both PV and HVM
> > for months and have found no problems, only increased performance.

I would definitely recommend qdisk over tapdisk: you can simply use
upstream QEMU for development, and it works. Enabling Ceph should just
be a matter of passing the right command line arguments to QEMU.

> Any pointers on where I might get started using qdisk? I imagine that
> porting ceph rbd to qdisk might not be particularly hard, if it hasn't
> been done already.

You simply need to specify qdisk as the disk backend, for example:

  /path/image,qcow2,hda,rw,backendtype=qdisk

Actually, if you use qcow2 as the image format you don't even need to
specify backendtype=qdisk, because it should default to it anyway.
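For instance, a complete disk line in an xl guest config would look
something like this (the image path here is made up):

  # hypothetical xl domU config fragment
  disk = [ '/var/lib/xen/images/guest.qcow2,qcow2,hda,rw,backendtype=qdisk' ]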
On Sun, 2013-12-08 at 15:07 +0000, Stefano Stabellini wrote:
> On Sat, 7 Dec 2013, James Harper wrote:
> > > > I'm actually using ceph as the backend, and also using it on PV
> > > > DomUs. Is qdisk only for HVM?
> > >
> > > It is also for PV domUs. I have been using qdisk for both PV and
> > > HVM for months and have found no problems, only increased
> > > performance.
>
> I would definitely recommend qdisk over tapdisk: you can simply use
> upstream QEMU for development, and it works. Enabling Ceph should just
> be a matter of passing the right command line arguments to QEMU.
>
> > Any pointers on where I might get started using qdisk? I imagine
> > that porting ceph rbd to qdisk might not be particularly hard, if it
> > hasn't been done already.
>
> You simply need to specify qdisk as the disk backend, for example:
>
>   /path/image,qcow2,hda,rw,backendtype=qdisk
>
> Actually, if you use qcow2 as the image format you don't even need to
> specify backendtype=qdisk, because it should default to it anyway.

*if* you are using xl. xend doesn't support upstream qemu at all, and
I'm not sure when/if it would use the qemu-trad qdisk.

Ian.