Hello everyone,

I am trying to boot a FreeBSD HVM guest on an RHEL5 Dom0. I have around
2GB of memory installed in the system, and below is my xen config file:

name = "FreeBSD7"
builder = "hvm"
memory = "256"
disk = [ 'file:/home/freebsd.img,ioemu:hda,w' ]
vif = [ 'type=ioemu,bridge=eth1' ]
device_model = "/usr/lib64/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc = 0
sdl = 1
boot = "c"
vcpus = 2
acpi = "0"
on_reboot = 'restart'
on_crash = 'restart'

I am able to boot the FreeBSD guest OS with the above configuration, and
things work well.

But as soon as I add a new line to the above config to hide one PCI
device,

pci = [ "0000:xx:00.0" ]

the FreeBSD booting becomes almost 10 times slower. Any idea what the
reason could be?

Going through a couple of other responses here, I gathered that disk IO
reads will be pretty slow from an HVM guest without PV drivers. But I am
not able to understand why this happens only when I try to hide a
device.

I was wondering if anyone had any information on this. Please post your
responses if you can think of any possible cause for this, or if you
have some suggestion to make my FreeBSD guest faster.

Regards,
mjm

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Hi John,

I'd like to clarify a few things to make sure we're on the same page
here...

On Thursday 21 August 2008, John Mathews wrote:
> Hello everyone,
>
> I am trying to boot a FreeBSD HVM guest on an RHEL5 Dom0.
> I have around 2GB of memory installed in the system, and below is my
> xen config file:

<snip>

Looks fine.

> I am able to boot the FreeBSD guest OS with the above configuration,
> and things work well.
>
> But as soon as I add a new line to the above config to hide one PCI
> device,
>
> pci = [ "0000:xx:00.0" ]
>
> the FreeBSD booting becomes almost 10 times slower. Any idea what the
> reason could be?

Just to be clear, that line doesn't "hide" anything on its own. It's a
directive to pass through a PCI device from your *host* system to the
guest. It assumes that there is no driver in dom0 holding access to that
PCI device - you arrange for that to be true by "hiding" the device from
the driver in dom0.

You can hide a device on the dom0 kernel command line, but this doesn't
work if the Xen pciback driver is a module on your system. In that case
you need to manually rebind the driver, or fiddle around with your dom0
configuration files a bit.

So, putting that line down ought to pass through a host device. I don't
*actually* know what happens if you try to pass through a device which
you haven't "hidden" / rebound in dom0. I doubt it'd be what you
intended, though - it's conceivable (to me) that if the device is in use
in dom0 then you might be getting some timeouts as it tries (and fails)
to talk to the device. I'm assuming (hopefully?) that the code won't let
two domains actually *fight* over a device! ;-)

> Going through a couple of other responses here, I gathered that disk
> IO reads will be pretty slow from an HVM guest without PV drivers. But
> I am not able to understand why this happens only when I try to hide a
> device.

I think, as you say, it's unlikely to be this, since it only manifests
with the PCI passthrough line in the config file.
Could you please clarify what the PCI config line was supposed to do,
and whether anything I've said sounds odd or new to you?

> I was wondering if anyone had any information on this. Please post
> your responses if you can think of any possible cause for this, or if
> you have some suggestion to make my FreeBSD guest faster.

I guess the ideal way to make the guest faster would be to get someone
to port the PV drivers to run under FreeBSD. There was an existing
paravirt FreeBSD port which could be drawn upon here, but - ironically -
one of the things that kept it out of the FreeBSD mainline was the need
to modify the drivers to support FreeBSD's Newbus architecture. I guess
this would still be a problem now with respect to mainlining - however,
the PV drivers definitely *worked* in the PV port at one stage. We (as a
community) would still need to find someone who'd take on this work,
though :-/

Other than that, I'm afraid all I can suggest is that you apply any
FreeBSD / Xen / virtualisation tuning tips you can find and see what
effect they have. HVM usually hurts most in networking performance. HVM
also has fairly limited GUI performance, so you may find (despite the
network limitations) that a networked GUI like X11-over-SSH or NoMachine
NX (if FreeBSD can run it) would work best.

Hope that helps,
Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
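The two hiding routes Mark describes - the kernel command line for a
built-in pciback, sysfs for the module case - can be sketched as below.
This is a sketch, not from the thread: the grub paths are placeholder
assumptions typical of a 2.6.18 Xen dom0, `0000:xx:00.0` is John's
placeholder device address, and the function only mirrors the sysfs
sequence (it needs root and real hardware, so it is defined here, not
run).

```shell
# Case 1: pciback built into the dom0 kernel - hide the device at boot
# via the dom0 kernel line in grub (paths/BDF are placeholders):
#   kernel /xen.gz dom0_mem=512M
#   module /vmlinuz-2.6.18-xen ro root=/dev/sda1 pciback.hide=(0000:xx:00.0)

# Case 2: pciback loaded as a module - hand it the device through sysfs
# after boot instead (requires root on a real dom0):
hide_from_dom0() {
    bdf=$1    # PCI address, e.g. 0000:xx:00.0
    modprobe pciback
    echo -n "$bdf" > /sys/bus/pci/drivers/pciback/new_slot
    echo -n "$bdf" > /sys/bus/pci/drivers/pciback/bind
}
```

Note that case 2 does nothing about a dom0 driver that already claimed
the device - that point comes up later in the thread.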
Thanks, Mark... Please see my comments inline.

On Thu, Aug 21, 2008 at 11:44 AM, Mark Williamson
<mark.williamson@cl.cam.ac.uk> wrote:

<snip>

> Just to be clear, that line doesn't "hide" anything on its own. It's a
> directive to pass through a PCI device from your *host* system to the
> guest. It assumes that there is no driver in dom0 holding access to
> that PCI device - you arrange for that to be true by "hiding" the
> device from the driver in dom0.
>
> You can hide a device on the dom0 kernel command line, but this
> doesn't work if the Xen pciback driver is a module on your system. In
> that case you need to manually rebind the driver, or fiddle around
> with your dom0 configuration files a bit.

Sorry, I forgot to mention the pciback module part. I run pciback as a
module in Dom0, and I do run the commands below before I start the guest
OS:

modprobe pciback
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind

So I guess this ensures that the device is hidden from dom0. How can I
verify that it is really hidden from dom0? If I do an lspci from dom0 to
dump the PCI config space of this device after it's hidden, should that
work?

> So, putting that line down ought to pass through a host device.
> I don't *actually* know what happens if you try to pass through a
> device which you haven't "hidden" / rebound in dom0. I doubt it'd be
> what you intended, though - it's conceivable (to me) that if the
> device is in use in dom0 then you might be getting some timeouts as it
> tries (and fails) to talk to the device. I'm assuming (hopefully?)
> that the code won't let two domains actually *fight* over a device!
> ;-)
>
> I think, as you say, it's unlikely to be this, since it only manifests
> with the PCI passthrough line in the config file.
>
> Could you please clarify what the PCI config line was supposed to do,
> and whether anything I've said sounds odd or new to you?

My intention is just to make sure that my PCI device is hidden from DOM0
and is visible in my FreeBSD guest. So if I hide my device in DOM0 and
direct xen to enable PCI passthrough with the pci line in the config
file, the FreeBSD guest boots very slowly. And if I comment out the pci
line in the config file, it boots superfast.

Do you think the PCI passthrough logic in Xen can by any chance degrade
disk IO performance for HVM guests? (Just a wild guess. I am not that
good with the xen code.)
Hi,

> Sorry, I forgot to mention the pciback module part. I run pciback as a
> module in Dom0, and I do run the commands below before I start the
> guest OS:
>
> modprobe pciback
> echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
> echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind
>
> So I guess this ensures that the device is hidden from dom0.

You also need to unbind it from any existing device driver that may have
grabbed it on behalf of dom0. The disadvantage of having pciback as a
module is that any "real" device driver in dom0 that can handle that
device is likely to grab it and not let go...

> How can I verify that it is really hidden from dom0? If I do an lspci
> from dom0 to dump the PCI config space of this device after it's
> hidden, should that work?

Yes, that would work regardless, I think.

The way to tell would be to do an:

ls -l /sys/bus/pci/devices/<the id>/

and take a look at where the "driver" symlink is pointing. If it's
pointing to the pciback driver's directory, you're all set. If it's
pointing to some other driver's directory, you need to go there and
unbind the device, then rebind it to pciback. If there's no "driver"
symlink, you should just need to bind it to pciback.

Also, take a look in dmesg to see if pciback is spitting out any useful
debug output.

> My intention is just to make sure that my PCI device is hidden from
> DOM0 and is visible in my FreeBSD guest. So if I hide my device in
> DOM0 and direct xen to enable PCI passthrough with the pci line in the
> config file, the FreeBSD guest boots very slowly. And if I comment out
> the pci line in the config file, it boots superfast.
>
> Do you think the PCI passthrough logic in Xen can by any chance
> degrade disk IO performance for HVM guests? (Just a wild guess. I am
> not that good with the xen code.)

Well, I'd be slightly surprised if the disk IO performance was directly
affected, but it's certainly possible that attempting PCI passthrough is
*somehow* slowing down *something* (I can think of a few general ways
this might happen, but nothing very directed). I.e. it definitely sounds
like the cause, but exactly why is hard for me to suggest at this
point... So yes, I think your guess is justified, although the exact
causality is hard to say.

Just another quick check for me: is your hardware VT-d capable? (That's
Intel's IOMMU technology.) It's needed for successful passthrough to HVM
guests. Have you tried it with other guest OSes, e.g. Linux? It's
possible it's some kind of FreeBSD-specific bug in the PCI
passthrough...

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
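Mark's "driver" symlink check lends itself to a small shell helper. A
sketch, not from the thread: the sysfs root is a parameter purely so the
logic can be exercised against a fake directory tree without root or
real hardware; on a real dom0 it defaults to /sys.

```shell
# check_pciback BDF [SYSFS_ROOT] - report which driver (if any) holds a
# PCI device, by reading the "driver" symlink Mark mentions.
check_pciback() {
    bdf=$1
    sysfs=${2:-/sys}
    dev="$sysfs/bus/pci/devices/$bdf"
    if [ ! -L "$dev/driver" ]; then
        echo "unbound"              # no driver: just bind it to pciback
    else
        drv=$(basename "$(readlink "$dev/driver")")
        if [ "$drv" = pciback ]; then
            echo "hidden"           # held by pciback: all set
        else
            echo "bound to $drv"    # unbind here, then rebind to pciback
        fi
    fi
}
```

If this reports, say, "bound to e1000", the fix on a real dom0 would be
the unbind-then-rebind sequence Mark describes:

echo -n 0000:xx:00.0 > /sys/bus/pci/devices/0000:xx:00.0/driver/unbind
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind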
Hi,

On Thu, Aug 21, 2008 at 3:02 PM, Mark Williamson
<mark.williamson@cl.cam.ac.uk> wrote:

<snip>

> Just another quick check for me: is your hardware VT-d capable?
> (That's Intel's IOMMU technology.) It's needed for successful
> passthrough to HVM guests. Have you tried it with other guest OSes,
> e.g. Linux? It's possible it's some kind of FreeBSD-specific bug in
> the PCI passthrough...

Yes, my hardware is VT-d enabled. I have a Linux HVM guest and I don't
see this problem there. Moreover, FreeBSD boots fine with 1 vcpu, even
after enabling passthrough. But if I set vcpus to 2, then FreeBSD gets
slow with passthrough.
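For anyone else landing on this thread before confirming VT-d the way
John did: one crude dom0-side sign is whether the boot log mentions the
ACPI DMAR table (the table Intel uses to describe VT-d) or an IOMMU.
This is a heuristic sketch, not from the thread, written as a filter so
it can be exercised on canned log text; on a live system you would feed
it dmesg output (or the hypervisor log from "xm dmesg"). Absence of such
lines is not conclusive - check the BIOS too.

```shell
# has_vtd: read a kernel/hypervisor boot log on stdin and report whether
# it mentions DMAR or IOMMU - a rough sign that VT-d is present and
# enabled.
has_vtd() {
    if grep -q -i -e 'DMAR' -e 'IOMMU'; then
        echo "VT-d signs found"
    else
        echo "no VT-d signs"
    fi
}

# e.g.: dmesg | has_vtd
```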