Hi developers,

I'm currently running a PVHVM FreeNAS domU to serve storage (with a SATA controller passed through) to all other domUs. However, I'm seeing some issues with the network / NFS performance of the NAS domU.

I noticed that there is a blkback driver in FreeBSD which is documented as capable of exporting disks to other domains, which looks very promising. However, the wiki says that disk driver domains are not supported (at least for now).

I wonder if there is any plan to support such disk driver domains? It would be a great feature for my use case.

Thanks,
Timothy
On 04/04/13 12:26, G.R. wrote:
> Hi developers,
>
> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
> SATA controller passed through) to all other domUs. However, I'm
> seeing some issues with the network / NFS performance of the NAS domU.
>
> I noticed that there is a blkback driver in FreeBSD which is
> documented as capable of exporting disks to other domains, which looks
> very promising. However, the wiki says that disk driver domains are
> not supported (at least for now).
>
> I wonder if there is any plan to support such disk driver domains? It
> would be a great feature for my use case.
>
> Thanks,
> Timothy

Hello,

Driver domain support is indeed planned for libxl/xl. Most kernels out there already have driver domain capabilities; it's just a matter of creating a protocol to communicate between Dom0 and the driver domain, so the driver domain knows which devices it has to attach and to which guest.
On Thu, Apr 4, 2013 at 7:54 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On 04/04/13 12:26, G.R. wrote:
> > Hi developers,
> >
> > I'm currently running a PVHVM FreeNAS domU to serve storage (with a
> > SATA controller passed through) to all other domUs. However, I'm
> > seeing some issues with the network / NFS performance of the NAS domU.
> >
> > I noticed that there is a blkback driver in FreeBSD which is
> > documented as capable of exporting disks to other domains, which looks
> > very promising. However, the wiki says that disk driver domains are
> > not supported (at least for now).
> >
> > I wonder if there is any plan to support such disk driver domains? It
> > would be a great feature for my use case.
> >
> > Thanks,
> > Timothy
>
> Hello,
>
> Driver domain support is indeed planned for libxl/xl. Most kernels out
> there already have driver domain capabilities; it's just a matter of
> creating a protocol to communicate between Dom0 and the driver domain,
> so the driver domain knows which devices it has to attach and to which
> guest.
>

Hi Roger,

According to the wiki, such a protocol is already available for the case of NIC emulation. Is the disk driver domain actively in development now, or is it just planned? Any estimate of the schedule (e.g. a targeted release) would be great.

Thanks,
Timothy
On 04/04/2013 06:26 AM, G.R. wrote:
> Hi developers,
>
> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
> SATA controller passed through) to all other domUs. However, I'm
> seeing some issues with the network / NFS performance of the NAS domU.
>
> I noticed that there is a blkback driver in FreeBSD which is
> documented as capable of exporting disks to other domains, which looks
> very promising. However, the wiki says that disk driver domains are
> not supported (at least for now).
>
> I wonder if there is any plan to support such disk driver domains? It
> would be a great feature for my use case.
>
> Thanks,
> Timothy

I have submitted a patch adding libxl support for block backend domains - the latest version is:

http://lists.xen.org/archives/html/xen-devel/2013-03/msg01172.html

This has been tested with Linux in-kernel blkback; you just have to disable libxl's own hotplug execution option and the default (Linux) hotplug scripts will connect the device. I assume FreeBSD support can be done similarly, although perhaps it requires a helper in the domU?

--
Daniel De Graaf
National Security Agency
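P.S. A minimal sketch of the combination I have in mind (untested, and the exact option spellings may vary between releases; 'storage' is just a hypothetical driver-domain name): first tell libxl not to run its own hotplug scripts, in /etc/xen/xl.conf:

    # /etc/xen/xl.conf -- let the backend's own hotplug machinery connect devices
    run_hotplug_scripts=0

and then name the backend domain in the guest's disk entry:

    # guest config -- backend= designates the domain that serves the disk,
    # target= is the path as seen from inside that backend domain (placeholder here)
    disk = [ 'backend=storage, vdev=xvda, format=raw, target=/dev/sdb1' ]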
> I have submitted a patch adding libxl support for block backend
> domains - the latest version is:
>
> http://lists.xen.org/archives/html/xen-devel/2013-03/msg01172.html
>
> This has been tested with Linux in-kernel blkback; you just have to
> disable libxl's own hotplug execution option and the default (Linux)
> hotplug scripts will connect the device. I assume FreeBSD support can
> be done similarly, although perhaps it requires a helper in the domU?
>
> --
> Daniel De Graaf
> National Security Agency
>

Wow! That's amazing, Daniel. This looks like an exact match for my intention. I will definitely give it a try some day.

Some questions before I kick off my trial:
1. Your link points to the first patch of a series. Can I assume that it is the only relevant one?
2. I guess the hotplug option you mentioned is "run_hotplug_scripts", and you suggest setting it to false. Could you share some insight about this option? I have no idea what it does or why it should be changed.
3. You mentioned that some kind of helper may be required in the driver domU to help set things up. What kind of setup is expected to be handled by this helper? I really have no idea how the whole system hooks up.

Thanks,
Timothy
On 04/04/13 12:26, G.R. wrote:
> Hi developers,
>
> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
> SATA controller passed through) to all other domUs. However, I'm
> seeing some issues with the network / NFS performance of the NAS domU.
>
> I noticed that there is a blkback driver in FreeBSD which is
> documented as capable of exporting disks to other domains, which looks
> very promising. However, the wiki says that disk driver domains are
> not supported (at least for now).
>
> I wonder if there is any plan to support such disk driver domains? It
> would be a great feature for my use case.

I've added a tutorial that explains how to use storage driver domains with Xen 4.3, it contains a FreeBSD section that explains how to use ZFS ZVOLs as disk backends for other domains, see:

http://wiki.xen.org/wiki/Storage_driver_domains
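As a quick illustration of what the guest disk entry ends up looking like in the FreeBSD/ZVOL case (the names below are only placeholders; the wiki page has the full example):

    # guest config -- 'freenas' is the name of the FreeBSD driver domain,
    # target= is the ZVOL path as seen from inside that domain
    disk = [ 'backend=freenas, vdev=xvda, format=raw, target=/dev/zvol/tank/guest-disk' ]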
On Tue, May 28, 2013 at 6:16 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 04/04/13 12:26, G.R. wrote:
>> Hi developers,
>>
>> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
>> SATA controller passed through) to all other domUs. However, I'm
>> seeing some issues with the network / NFS performance of the NAS domU.
>>
>> I noticed that there is a blkback driver in FreeBSD which is
>> documented as capable of exporting disks to other domains, which looks
>> very promising. However, the wiki says that disk driver domains are
>> not supported (at least for now).
>>
>> I wonder if there is any plan to support such disk driver domains? It
>> would be a great feature for my use case.
>
> I've added a tutorial that explains how to use storage driver domains
> with Xen 4.3, it contains a FreeBSD section that explains how to use
> ZFS ZVOLs as disk backends for other domains, see:
>
> http://wiki.xen.org/wiki/Storage_driver_domains
>

Hi Roger,

I just checked out your wiki page and it seems to be a fairly simple setup. It looks like a miracle that the FreeBSD version does not require those helper utils.

As I understand from what Daniel stated in his mail, a Linux driver domain would rely on some hotplug scripts delivered with Xen to hook up the connection. I was imagining that similar scripts would need to be hooked into FreeBSD devd in a similar manner. How does it work without these? I'm not familiar with the FreeBSD kernel. Also, does it have any assumptions about the FreeBSD version? I'm on 8.3.1, which is kind of old but still PVHVM capable.

In your example, the disk is set up using a ZVOL. I wonder if there are any constraints prohibiting the use of a file-based backend?

Finally, I saw this limitation in the wiki:
>> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
While I can imagine why pygrub does not work, I don't understand the reason HVM is affected here. Could you explain a little bit? And what about an HVM with PV drivers? (e.g. those Windows guests)

Thanks,
Timothy
On Fri, Jun 21, 2013 at 11:26:43AM +0800, G.R. wrote:
> On Tue, May 28, 2013 at 6:16 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > On 04/04/13 12:26, G.R. wrote:
> >> Hi developers,
> >>
> >> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
> >> SATA controller passed through) to all other domUs. However, I'm
> >> seeing some issues with the network / NFS performance of the NAS domU.
> >>
> >> I noticed that there is a blkback driver in FreeBSD which is
> >> documented as capable of exporting disks to other domains, which looks
> >> very promising. However, the wiki says that disk driver domains are
> >> not supported (at least for now).
> >>
> >> I wonder if there is any plan to support such disk driver domains? It
> >> would be a great feature for my use case.
> >
> > I've added a tutorial that explains how to use storage driver domains
> > with Xen 4.3, it contains a FreeBSD section that explains how to use
> > ZFS ZVOLs as disk backends for other domains, see:
> >
> > http://wiki.xen.org/wiki/Storage_driver_domains
> >
>
> Hi Roger,
>
> I just checked out your wiki page and it seems to be a fairly simple
> setup. It looks like a miracle that the FreeBSD version does not
> require those helper utils.
>
> As I understand from what Daniel stated in his mail, a Linux driver
> domain would rely on some hotplug scripts delivered with Xen to hook
> up the connection. I was imagining that similar scripts would need to
> be hooked into FreeBSD devd in a similar manner. How does it work
> without these? I'm not familiar with the FreeBSD kernel. Also, does it
> have any assumptions about the FreeBSD version? I'm on 8.3.1, which is
> kind of old but still PVHVM capable.
>
> In your example, the disk is set up using a ZVOL. I wonder if there
> are any constraints prohibiting the use of a file-based backend?
>
> Finally, I saw this limitation in the wiki:
> >> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
> While I can imagine why pygrub does not work, I don't understand the
> reason HVM is affected here. Could you explain a little bit?
> And what about an HVM with PV drivers? (e.g. those Windows guests)

It needs the backends. That means you can do it with PVHVM or with PV guests.

> Thanks,
> Timothy
On 21/06/13 05:26, G.R. wrote:
> On Tue, May 28, 2013 at 6:16 PM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>> On 04/04/13 12:26, G.R. wrote:
>>> Hi developers,
>>>
>>> I'm currently running a PVHVM FreeNAS domU to serve storage (with a
>>> SATA controller passed through) to all other domUs. However, I'm
>>> seeing some issues with the network / NFS performance of the NAS domU.
>>>
>>> I noticed that there is a blkback driver in FreeBSD which is
>>> documented as capable of exporting disks to other domains, which looks
>>> very promising. However, the wiki says that disk driver domains are
>>> not supported (at least for now).
>>>
>>> I wonder if there is any plan to support such disk driver domains? It
>>> would be a great feature for my use case.
>>
>> I've added a tutorial that explains how to use storage driver domains
>> with Xen 4.3, it contains a FreeBSD section that explains how to use
>> ZFS ZVOLs as disk backends for other domains, see:
>>
>> http://wiki.xen.org/wiki/Storage_driver_domains
>>
>
> Hi Roger,
>
> I just checked out your wiki page and it seems to be a fairly simple
> setup. It looks like a miracle that the FreeBSD version does not
> require those helper utils.
>
> As I understand from what Daniel stated in his mail, a Linux driver
> domain would rely on some hotplug scripts delivered with Xen to hook
> up the connection. I was imagining that similar scripts would need to
> be hooked into FreeBSD devd in a similar manner. How does it work
> without these?

Linux uses these hotplug scripts to write the physical-device node, which FreeBSD doesn't need.

> I'm not familiar with the FreeBSD kernel. Also, does it have any
> assumptions about the FreeBSD version? I'm on 8.3.1, which is kind of
> old but still PVHVM capable.

If it contains blkback you should be fine.

> In your example, the disk is set up using a ZVOL. I wonder if there
> are any constraints prohibiting the use of a file-based backend?

I have not tried it, but FreeBSD blkback should be able to handle raw files also.

> Finally, I saw this limitation in the wiki:
>>> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
> While I can imagine why pygrub does not work, I don't understand the
> reason HVM is affected here. Could you explain a little bit?
> And what about an HVM with PV drivers? (e.g. those Windows guests)

If you use HVM, Qemu needs to access the block/file used as a disk, so if the disk is on another domain Qemu has no way to access it (unless you plug the disk into Dom0 and then pass the created block device /dev/xvd* to Qemu).
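To expand a bit on the hotplug script point above, the work the Linux block hotplug script does (and which FreeBSD's blkback, as I understand it, handles internally by opening the path from the "params" node itself) is roughly the following sketch; exact paths and variable names depend on the Xen version:

    # roughly what /etc/xen/scripts/block does for a phy backend on Linux;
    # XENBUS_PATH is the backend/vbd/<domid>/<devid> xenstore directory of the new vbd
    device=$(xenstore-read "$XENBUS_PATH/params")   # e.g. /dev/zvol/tank/guest-disk
    majmin=$(stat -L -c '%t:%T' "$device")          # major:minor of the block device, in hex
    xenstore-write "$XENBUS_PATH/physical-device" "$majmin"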
Thanks so much, Roger. You clarified all my confusion. I think mounting the disk in Dom0 first could serve as an acceptable workaround for HVM. I'm anxious to see the new release.

>> While I can imagine why pygrub does not work, I don't understand the
>> reason HVM is affected here. Could you explain a little bit?
>> And what about an HVM with PV drivers? (e.g. those Windows guests)
>
> If you use HVM, Qemu needs to access the block/file used as a disk, so
> if the disk is on another domain Qemu has no way to access it (unless
> you plug the disk into Dom0 and then pass the created block device
> /dev/xvd* to Qemu).
>
>> In your example, the disk is set up using a ZVOL. I wonder if there
>> are any constraints prohibiting the use of a file-based backend?
>
> I have not tried it, but FreeBSD blkback should be able to handle raw
> files also.
>
>> Finally, I saw this limitation in the wiki:
>>>> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
>> While I can imagine why pygrub does not work, I don't understand the
>> reason HVM is affected here. Could you explain a little bit?
>> And what about an HVM with PV drivers? (e.g. those Windows guests)
>
> If you use HVM, Qemu needs to access the block/file used as a disk, so
> if the disk is on another domain Qemu has no way to access it (unless
> you plug the disk into Dom0 and then pass the created block device
> /dev/xvd* to Qemu).
>

I just upgraded to Xen 4.3 and here is my quick report.

With some preliminary testing, I can confirm that FreeBSD 8.3 is able to serve as a disk backend, exporting a file as a disk. I was able to block-attach it and mount it in Dom0. However, there are some issues with the xl block-list / block-detach commands. The attached disk cannot be listed, and block-detach reports a failure when I try to remove it. However, it appears to have removed the data in xenstored in spite of the error report, and I was able to re-attach the same file again later.

There is also an issue with block-attach -- it does not prevent me from attaching the same file twice accidentally. This appears to crash the whole system later when I tried some further operations.

I believe I should be able to serve an HVM domU in this way (using Dom0 as a proxy), but this appears to introduce some overhead (the Dom0 proxy). I wonder if it will be hard to provide HVM support for disk driver domains? Will it be able to provide a dual interface like hda/xvda so the overhead is eliminated after the PV driver is loaded?

It would be great if you could share your plan / schedule for further enhancement of this feature.

Thanks,
Timothy
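P.S. For the record, the sequence I tested was along these lines (the driver-domain name and file path are specific to my setup):

    # attach a raw file served by the FreeBSD driver domain 'freenas' to Dom0 (domid 0)
    xl block-attach 0 'format=raw, vdev=xvdb, backend=freenas, target=/tank/images/test.img'
    mount /dev/xvdb /mnt

    # the problematic operations:
    xl block-list 0          # does not show the attached disk
    xl block-detach 0 xvdb   # reports a failure, though the xenstore entries do get removed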
On 24/07/13 18:17, G.R. wrote:
>>> In your example, the disk is set up using a ZVOL. I wonder if there
>>> are any constraints prohibiting the use of a file-based backend?
>>
>> I have not tried it, but FreeBSD blkback should be able to handle raw
>> files also.
>>
>>> Finally, I saw this limitation in the wiki:
>>>>> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
>>> While I can imagine why pygrub does not work, I don't understand the
>>> reason HVM is affected here. Could you explain a little bit?
>>> And what about an HVM with PV drivers? (e.g. those Windows guests)
>>
>> If you use HVM, Qemu needs to access the block/file used as a disk, so
>> if the disk is on another domain Qemu has no way to access it (unless
>> you plug the disk into Dom0 and then pass the created block device
>> /dev/xvd* to Qemu).
>>
>
> I just upgraded to Xen 4.3 and here is my quick report.
>
> With some preliminary testing, I can confirm that FreeBSD 8.3 is able
> to serve as a disk backend, exporting a file as a disk. I was able to
> block-attach it and mount it in Dom0. However, there are some issues
> with the xl block-list / block-detach commands. The attached disk
> cannot be listed, and block-detach reports a failure when I try to
> remove it.

This is probably due to the block-list/attach commands making assumptions about the backend domain always being Dom0. I will take a look, thanks for the report.

> However, it appears to have removed the data in xenstored in spite of
> the error report, and I was able to re-attach the same file again
> later.
>
> There is also an issue with block-attach -- it does not prevent me
> from attaching the same file twice accidentally.

This kind of check should be implemented in FreeBSD itself rather than the toolstack. I will take a look into FreeBSD blkback in order to prevent it from attaching the same disk twice.

> This appears to crash the whole system later when I tried some further
> operations.
>
> I believe I should be able to serve an HVM domU in this way (using
> Dom0 as a proxy), but this appears to introduce some overhead (the
> Dom0 proxy). I wonder if it will be hard to provide HVM support for
> disk driver domains?

This is not possible due to the fact that when using Qemu, the Qemu process in Dom0 needs access to the disk you are attaching to the guest. For PVHVM guests this is not a big deal, because the emulated device will only be used for the bootloader, and then the OS switches to PV, which doesn't use Dom0 as a proxy.

The only improvement here would be to auto-attach the disk to Dom0 in order to launch Qemu, which currently has to be done manually. The same happens with PV domains that use pygrub.

> Will it be able to provide a dual interface like hda/xvda so the
> overhead is eliminated after the PV driver is loaded?

Exactly.

> It would be great if you could share your plan / schedule for further
> enhancement of this feature.

Next steps are fixing the bugs you describe, and allowing driver domains to use userspace backends (Qdisk for instance).

Roger.
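P.S. To make the manual workaround concrete, the current flow for an HVM guest would look roughly like this (an untested sketch; device names and paths are placeholders):

    # 1. attach the driver domain's disk to Dom0 so the guest's Qemu can open it
    xl block-attach 0 'format=raw, vdev=xvdb, backend=freenas, target=/tank/images/winguest.img'

    # 2. then hand the resulting Dom0 block device to the HVM guest as an emulated disk
    #    (guest config, positional disk syntax: target, format, vdev, access)
    disk = [ '/dev/xvdb,raw,hda,rw' ]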
>> I believe I should be able to serve an HVM domU in this way (using
>> Dom0 as a proxy), but this appears to introduce some overhead (the
>> Dom0 proxy). I wonder if it will be hard to provide HVM support for
>> disk driver domains?
>
> This is not possible due to the fact that when using Qemu, the Qemu
> process in Dom0 needs access to the disk you are attaching to the
> guest. For PVHVM guests this is not a big deal, because the emulated
> device will only be used for the bootloader, and then the OS switches
> to PV, which doesn't use Dom0 as a proxy.
>
> The only improvement here would be to auto-attach the disk to Dom0 in
> order to launch Qemu, which currently has to be done manually. The
> same happens with PV domains that use pygrub.

Thanks for your comment, Roger. But I'm not sure how to specify the domain config in this case. Previously the dual interface was handled by the toolchain, generated from the same 'disk' config entry. In this case, Qemu will expect a config in the form of vdev=hda,target=/dev/xvda, while the PV driver should use a config in the form of vdev=xvda,domain=<driver>,target=/path/in/driver/domain. How does the toolchain know that these two are actually the same disk?

Also, will this be another "attach twice" case? Are there any potential consistency issues? (Since this bypasses NFS, which deals with consistency.)
On Thu, Jul 25, 2013 at 9:31 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 24/07/13 18:17, G.R. wrote:
>>>> In your example, the disk is set up using a ZVOL. I wonder if there
>>>> are any constraints prohibiting the use of a file-based backend?
>>>
>>> I have not tried it, but FreeBSD blkback should be able to handle raw
>>> files also.
>>>
>>>> Finally, I saw this limitation in the wiki:
>>>>>> It is not possible to use driver domains with pygrub or HVM guests yet, so it will only work with PV guests that have the kernel in Dom0.
>>>> While I can imagine why pygrub does not work, I don't understand the
>>>> reason HVM is affected here. Could you explain a little bit?
>>>> And what about an HVM with PV drivers? (e.g. those Windows guests)
>>>
>>> If you use HVM, Qemu needs to access the block/file used as a disk, so
>>> if the disk is on another domain Qemu has no way to access it (unless
>>> you plug the disk into Dom0 and then pass the created block device
>>> /dev/xvd* to Qemu).
>>>
>>
>> I just upgraded to Xen 4.3 and here is my quick report.
>>
>> With some preliminary testing, I can confirm that FreeBSD 8.3 is able
>> to serve as a disk backend, exporting a file as a disk. I was able to
>> block-attach it and mount it in Dom0. However, there are some issues
>> with the xl block-list / block-detach commands. The attached disk
>> cannot be listed, and block-detach reports a failure when I try to
>> remove it.
>
> This is probably due to the block-list/attach commands making
> assumptions about the backend domain always being Dom0. I will take a
> look, thanks for the report.

Yes, I also discovered this recently when using a network driver domain -- the *-list commands only look in dom0 for stuff. That will have to be sorted out for 4.4.

 -George
On Thu, Jul 25, 2013 at 11:35 PM, G.R. <firemeteor@users.sourceforge.net> wrote:
>>> I believe I should be able to serve an HVM domU in this way (using
>>> Dom0 as a proxy), but this appears to introduce some overhead (the
>>> Dom0 proxy). I wonder if it will be hard to provide HVM support for
>>> disk driver domains?
>>
>> This is not possible due to the fact that when using Qemu, the Qemu
>> process in Dom0 needs access to the disk you are attaching to the
>> guest. For PVHVM guests this is not a big deal, because the emulated
>> device will only be used for the bootloader, and then the OS switches
>> to PV, which doesn't use Dom0 as a proxy.
>>
>> The only improvement here would be to auto-attach the disk to Dom0 in
>> order to launch Qemu, which currently has to be done manually. The
>> same happens with PV domains that use pygrub.
>
> Thanks for your comment, Roger. But I'm not sure how to specify the
> domain config in this case. Previously the dual interface was handled
> by the toolchain, generated from the same 'disk' config entry. In this
> case, Qemu will expect a config in the form of
> vdev=hda,target=/dev/xvda, while the PV driver should use a config in
> the form of vdev=xvda,domain=<driver>,target=/path/in/driver/domain.
> How does the toolchain know that these two are actually the same disk?
>
> Also, will this be another "attach twice" case? Are there any
> potential consistency issues? (Since this bypasses NFS, which deals
> with consistency.)

Hi Roger,

Could you comment on my questions above? It would be nice if you could provide an example config to make the dual interface work.

For the consistency issue, I also care about interoperability with NFS -- can I share the same disk image through NFS while having it mounted through the blk backend?

Thanks,
Timothy