I've got it booting. It sees a network interface on eth0, but it doesn't have connectivity. DHCP doesn't get an address, and manually assigning one doesn't help. I can access the console with VNC, so I know it's up and running. I'm just not sure how to debug the network. It's almost as if dom0 (OpenSolaris) isn't routing the packets.

OpenSolaris b123 dom0
Ubuntu 9.10 installed via HVM

I modified the config file and re-imported it with virsh to set up PV mode. The config follows.

<name>ubuntu-pv</name>
<uuid>c12e6738-9a17-330c-dba2-354436f56acf</uuid>
<os>
  <type>linux</type>
  <kernel>/xen/guests/ubuntu/vmlinuz-2.6.28-11-server</kernel>
  <initrd>/xen/guests/ubuntu/initrd.img-2.6.28-11-server</initrd>
  <cmdline>root=/dev/xvda1 rw console=hvc0</cmdline>
</os>
<memory>2097152</memory>
<vcpu>2</vcpu>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
  <interface type='bridge'>
    <source bridge='e1000g0'/>
    <target dev='vif5.0'/>
    <mac address='00:16:3e:2e:11:ed'/>
    <script path='vif-vnic'/>
  </interface>
  <disk type='block' device='disk'>
    <driver name='phy'/>
    <source dev='/dev/zvol/dsk/rpool/xen/ubuntu'/>
    <target dev='xvda'/>
  </disk>
  <input type='mouse' bus='ps2'/>
  <console tty='/dev/pts/7'/>
  <graphics type='vnc' port='-1'/>
</devices>
</domain>

-- This message posted from opensolaris.org
I don't know if anyone else can see the config data, but I can't now. Here's some output from xm list. I tried removing the network interface from the XML, using virsh define to load the XML, then using virsh attach-interface to connect it. The virsh call outputs a message saying it attached successfully, but I still don't get any IP connectivity. The same config in HVM mode works; I'm not sure what's up here. Running dmesg in the VM shows Linux connecting to the Xen network interface properly. No errors on the Linux side, just no packets.

xm list -l ubuntu-pv
(domain
  (on_crash destroy)
  (uuid c12e6738-9a17-330c-dba2-354436f56acf)
  (bootloader_args )
  (vcpus 2)
  (name ubuntu-pv)
  (on_poweroff destroy)
  (on_reboot restart)
  (bootloader )
  (maxmem 2048)
  (memory 2048)
  (shadow_memory 0)
  (features )
  (on_xend_start ignore)
  (on_xend_stop shutdown)
  (start_time 1253997268.64)
  (cpu_time 4.963976574)
  (online_vcpus 2)
  (image
    (linux
      (kernel /xen/guests/ubuntu/vmlinuz-2.6.28-11-server)
      (ramdisk /xen/guests/ubuntu/initrd.img-2.6.28-11-server)
      (args 'root=/dev/xvda1 rw console=hvc0')
      (rtc_timeoffset 0)
      (localtime 0)
      (device_model /usr/lib/xen/bin/qemu-dm)
      (keymap en-us)
      (notes
        (HV_START_LOW 18446603336221196288)
        (FEATURES '!writable_page_tables|pae_pgdir_above_4gb')
        (VIRT_BASE 18446744071562067968)
        (GUEST_VERSION 2.6)
        (PADDR_OFFSET 0)
        (GUEST_OS linux)
        (HYPERCALL_PAGE 18446744071564201984)
        (LOADER generic)
        (SUSPEND_CANCEL 1)
        (PAE_MODE yes)
        (ENTRY 18446744071572460032)
        (XEN_VERSION xen-3.0)
      )
    )
  )
  (status 0)
  (store_mfn 1206754)
  (console_mfn 1206753)
  (device
    (vif
      (bridge e1000g0)
      (uuid 4b72fe24-5675-e079-4b22-677b369640c9)
      (script /usr/lib/xen/scripts/vif-vnic)
      (devid 0)
      (mac 00:16:3e:42:1d:71)
      (backend 0)
    )
  )
  (device
    (vbd
      (protocol x86_64-abi)
      (uuid f97f440f-a140-0190-983b-0b742d8da919)
      (bootable 1)
      (devid 51712)
      (driver paravirtualised)
      (dev xvda:disk)
      (uname phy:/dev/zvol/dsk/rpool/xen/ubuntu)
      (mode w)
      (backend 0)
    )
  )
  (device (vkbd (devid 0) (uuid 99421333-6238-798a-5426-bc4c2becbf6a) (backend 0)))
  (device
    (vfb
      (vncunused 1)
      (uuid e24d6808-bdd9-2d7b-e1ea-2ce2041b6f61)
      (devid 0)
      (location localhost:5901)
      (type vnc)
    )
  )
  (device
    (console
      (devid 0)
      (protocol vt100)
      (location 2)
      (uuid 2260a303-8ce2-7751-6d46-6dd8cd9e892f)
    )
  )
)
> [domain XML config quoted from the first message snipped]

Does disabling checksum offloading in the Ubuntu 9.04 DomU help?
Just another question. Can you load the DomU via pygrub (without a network)?
No, disabling checksum offloading doesn't help. I'm not sure how to load the domU with pygrub. Any info? I'll google as well. It is 9.04; not sure why I thought it was 9.10.
name = "PVM"
memory = 2048
disk = [ 'phy:/dev/sdc7,xvda,w' ]
vif = [ ' ' ]
bootloader = "/usr/local/bin/pygrub"
vcpus = 2
on_reboot = 'restart'
on_crash = 'restart'

_______________________________________________
xen-discuss mailing list
xen-discuss@opensolaris.org
I can get it to start up, but I can't actually do anything with it. virsh console shows me kernel boot messages, but it won't display a login prompt. VNC usually works, but in this case it's doing the same thing: I can see some kernel boot messages, but no login prompt. I've tried hitting Enter a few times, but nothing ever comes up.

For the config, it's the same as the PV mode one using the local copy of the kernel. I just removed the kernel, initrd, and network interface from the XML and added the bootloader pointing to pygrub. pygrub is in /usr/lib/xen/bin on my system.
What I am asking is: does the serial console work with the pygrub loader, i.e. does "xm create -c UbuntuPV.py" run? Note that vfb is not in the profile. This is not about virsh & VNC, or working via virt-manager.
After tweaking the config a little, yes. I can boot with pygrub using xm create, and the serial console does work. No network, but it's also not defined with one, so I expected that. At least I can get to the console now.

What's different about loading it this way? Is there some way to convert this setup to run under virsh?
Travis Tabbal wrote:
> After tweaking the config a little, yes. I can boot with pygrub using xm create and the serial console does work. No network, but it's also not defined with one so I expected that. At least I can get to the console now though.
>
> What's different about loading it this way? Is there some way to convert this setup to run under virsh?

For Debian Lenny it's necessary to add "console=hvc0 xencons=tty" to the kernel line in /boot/grub/menu.lst. I think the Linux people changed it again, but IIRC the console is always on hvc0 (worst comes to the worst, you can just add a tty on hvc0 from /etc/inittab).

I found it easier to install a CentOS 5 and then debootstrap Debian from that (attach the disks with 'virsh attach-disk' or 'xm block-attach'). You can do that while it's running PV. It's a lot less painful than installing HVM and trying to convert.

Sam
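For reference, a sketch of what those two changes might look like inside the Lenny domU. The kernel version and paths here are examples, not taken from this thread; adjust to whatever is actually installed.

```
# /boot/grub/menu.lst -- append the console options to the domU kernel line
title  Debian GNU/Linux (Xen domU)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.26-2-xen-amd64 root=/dev/xvda1 ro console=hvc0 xencons=tty
initrd /boot/initrd.img-2.6.26-2-xen-amd64

# /etc/inittab -- fallback: spawn a getty directly on the Xen console
hvc:2345:respawn:/sbin/getty 38400 hvc0
```

With the inittab line in place you get a login prompt on hvc0 even if the kernel command line is not passing the console options through.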
Could you post "xm list -l UbuntuPV" (with the serial console setup)?
While the VM started from xm create was running, I dumped the XML with "virsh dumpxml" and was able to use that config to "virsh define" a working copy. The console now works properly. However, I still don't get networking. I tried "virsh attach-interface ubuntu-pv bridge e1000g0". It tells me the interface was attached, and Ubuntu sees it, but there is still no connectivity. I tried with checksum offload on and off in Ubuntu via ethtool; no change.
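For anyone following along, the round-trip described above looks roughly like this. The file and domain names are illustrative (this thread uses ubuntu-pv); only the dumpxml/define sequence itself is the point.

```shell
# Boot once with the xm toolstack so pygrub and the serial console are known-good
xm create -c ubuntu-pv.py

# From another dom0 shell, capture the running domain as libvirt XML
virsh dumpxml ubuntu-pv > ubuntu-pv.xml

# Shut the domain down, then register the captured config with libvirt
virsh define ubuntu-pv.xml
virsh start ubuntu-pv
virsh console ubuntu-pv
```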
I am kind of out of ideas. With the usual (Linux) Xen bridging this trick works up to 9.10. The concept of VNICs on Nevada I never understood, so I have nothing to forget.
Thanks for the ideas. I decided to try building my own kernel, as I couldn't think of anything else. I can boot it, but I get the same result: no network. It never receives any packets according to ifconfig. Perhaps I'll try a CentOS install just to see if that works as it's supposed to.
OK, I don't know if I should be irritated or what. I tried installing CentOS 5.3; the installer starts up, but it is unable to get an address either. DHCP is running on the network and works fine. I can get addresses in HVM mode, but not PV mode. Any ideas? It seems like a general problem with PV networking now.

Here's the command line I tried from the OpenSolaris dom0. I'm on b123 right now.

virt-install --paravirt --name centos --ram 2048 --vnc \
  --os-type=linux --os-variant=fedora8 \
  --network bridge \
  --disk path=/rpool/xen/centos/diskimg,size=10,driver=phy,subdriver=zvol \
  --location http://mirror.rackspace.com/CentOS/5.3/os/x86_64/
On Mon, Sep 28, 2009 at 01:26:55PM -0700, Travis Tabbal wrote:
> I can get addresses in HVM mode, but not PV mode. Any ideas? Seems like a general problem with PV networking now.

I don't have any ideas, but CentOS 5.3 PV definitely works. Perhaps there's a problem with e1000g0 on your NIC. Are you positive there's no other client with that MAC address? Do you see any traffic on the auto-created VNIC in dom0 for that guest at all (either incoming or outgoing)? In particular, do you see any outgoing DHCPDISCOVER packets?

regards
john
I suppose it could be something going on with the network card or driver. I have checked the MAC addresses; nothing shows up on the network with the same address. I used the exact same command line, changing --paravirt to --hvm, and it works fine.

I tried using tcpdump to capture DHCP traffic. I can see traffic with --hvm, but not with --paravirt. "tcpdump -D" only shows the e1000g0 interface. Is there a way to see traffic on the interfaces used by Xen?

The NIC is an Intel EXPI9301CT. It's been working fine. I suppose the driver could have an issue with PV VMs; I don't know enough about how Xen handles networking to really debug that. I do have an onboard NVidia ethernet I could try to connect up.
On Mon, Sep 28, 2009 at 02:44:45PM -0700, Travis Tabbal wrote:
> Is there a way to see traffic on the interfaces used by Xen?

Of course:

dom0# snoop -d xvm`virsh domid domu-224`_0 ether 0:16:3e:1b:e8:18
Using device xvm32_0 (promiscuous mode)
xenbld.SFBay.Sun.COM -> domu-224.SFBay.Sun.COM ICMP Echo request (ID: 197 Sequence number: 0)
domu-224.SFBay.Sun.COM -> xenbld.SFBay.Sun.COM ICMP Echo reply (ID: 197 Sequence number: 0)

regards
john
When I use the snoop command and filter by MAC address, I get no traffic. I also see no traffic when filtering by port 67, which I expected, but figured I could check anyway. If I drop the filter, I can see some traffic from my LAN, but it looks to be the same traffic I would see watching the e1000g0 interface, so that part of the bridge appears to be working. I still don't seem to be getting any traffic in or out of the VM in PV mode.
On 29 Sep 2009, at 3:50pm, Travis Tabbal wrote:
> When I use the snoop command and filter by mac address, I get no traffic. I also see no traffic when filtering by port 67, which I expected, but figured I could check anyway. If I lose the filter, I can see some traffic from my LAN, but it looks to be the same traffic I would see watching the e1000g0 interface, so it looks like that part of the bridge is working anyway. I still don't seem to be getting any traffic in or out of the VM when in PV mode.

Please provide the output of '/usr/lib/xen/bin/xenstore-ls' from dom0.
Attached.
On 29 Sep 2009, at 6:09pm, Travis Tabbal wrote:
> Attached.

It all looks fine. It's not obvious how to proceed other than to instrument one of the backend or frontend drivers. Are you able to do that?
I'm not familiar enough with OpenSolaris or Xen to really know where to begin with that. If there's a doc somewhere, I'm willing to try looking at it.
http://www.xen.org/products/xen_roadmap.html
On 29 Sep 2009, at 8:04pm, Travis Tabbal wrote:
> I'm not familiar enough with OpenSolaris or Xen to really know where to begin with that. If there's a doc somewhere, I'm willing to try looking at it.

Mark pointed out privately that 'kstat xnbo' in dom0 might provide some useful information. After that you would need to learn about the Solaris kernel debugger (kmdb) and DTrace, which can be used together with a copy of the driver source code. It's not something you could do in an evening, but a few days would have you making some progress.
Disregard the entry above; it was posted by mistake, I had lost track of the forum thread.
# kstat xnbo
module: xnbo                            instance: 0
name:   aux_statistics                  class:    net
        allocation_failure              0
        allocation_success              0
        crtime                          20263.650918243
        csum_hardware                   0
        csum_software                   0
        mac_full                        0
        other_allocation_failure        0
        rx_allocb_failed                0
        rx_cksum_deferred               0
        rx_cpoparea_grown               0
        rx_foreign_page                 0
        rx_notify_deferred              0
        rx_notify_sent                  0
        rx_pageboundary_crossed         0
        rx_rsp_notok                    0
        rx_too_early                    0
        small_allocation_failure        0
        small_allocation_success        0
        snaptime                        200481.81704789
        spurious_intr                   0
        tx_allocb_failed                0
        tx_cksum_no_need                0
        tx_notify_deferred              1
        tx_notify_sent                  33
        tx_too_early                    34
OK. I updated to b124 to see if that helped; no change. I installed the nfo driver for my onboard ethernet (nge installed, but didn't talk to the network for some reason). I then configured CentOS to use nfo0 as the bridge; it works and is installing now. Strange that e1000g doesn't work properly with Xen but the onboard port does. The Intel card has been working fine for everything else I've thrown at it.

The Ubuntu system still doesn't want to work. It hangs at "configuring network interfaces" now. I'll just let it run overnight and see if it will get booted at least. I tried switching it to nfo0 as well, but it does the same thing, just hangs there. Oh well. Maybe I'll end up running CentOS instead for Linux stuff. Or there's HVM, I suppose. How much of a difference does it really make?
Take a look at:

http://blog.adventuresinopensolaris.com/2008/07/xvm-pv-of-debian-distros.html

I guess it might work the same way for Ubuntu 9.04 HVM with debootstrap installed.
I've succeeded in creating a Jaunty Server HVM (via virt-install) and a Jaunty Server PV DomU sharing the same image, with the "rge" driver in an OSOL 2010-02-124 Dom0.

virt-install --hvm --name JauntyHVM --ram 1024 --vnc \
  --os-type=linux --os-variant=ubuntu \
  --network bridge \
  --disk path=/tunk01/disk1,size=15,driver=phy,subdriver=zvol \
  --cdrom /export/home/boris/jaunty.iso

Jaunty PV DomU profile for the very first load:

# cat jaunty.py
name="JuantyPV"
memory=2048
vcpus=1
bootloader="/usr/lib/xen/bin/pygrub"
disk=['phy:/dev/zvol/dsk/tunk01/disk1,xvda,w!']
vif = [ 'mac=00:16:3e:00:00:00' ]
vfb= ['type=vnc,vncunused=1']

# xm create jaunty.py
# vncviewer localhost:0
# virsh dumpxml JuantyPV > JuantyPV.xml

Shut down the DomU, then:

# virsh define JuantyPV.xml
A lightweight X Window System is installed on top of the Ubuntu 9.04 Server PV DomU. Network is OK. Snapshot attached.
Thanks for the info. I think the biggest problem right now is that my network card or driver doesn't like PV guests. It works fine with HVM, but PV guests can't seem to get any connectivity. I've tried CentOS, Ubuntu, and Debian; 32-bit/64-bit doesn't matter.

I did get networking to work with the onboard ethernet using the nfo driver (the nge driver doesn't work properly for my motherboard as of b124), and it does work with the bridge in PV mode. However, it seems to break the ability of the Solaris box to talk to itself when I do that. It's like the routing gets confused. I ended up disabling the onboard interface to fix it. They are both connected to the same gigabit switch; not sure why that didn't work right. I wouldn't mind using the onboard network port for guest networking if I could get that issue fixed. I run a zone on 10.1.0.22 and the dom0 is on 10.1.0.2, both static IPs. They can't talk to each other when I enable the other interface.
On 22 Oct 2009, at 5:25pm, Travis Tabbal wrote:
> Thanks for the info. I think the biggest problem right now is that my network card or driver doesn't like PV guests. It works fine with HVM, but PV guests can't seem to get any connectivity. I've tried CentOS, Ubuntu, and Debian. 32bit/64bit doesn't matter.

What network card(s) do you have? (Sorry if you already answered this.)
> What network card(s) do you have? (Sorry if you already answered this.)

I replied via email but don't see the reply here on the website, so I'll repost just in case.

Intel EXPI9301CT
http://www.buy.com/prod/intel-gigabit-ct-desktop-adapter-pci-express-1-x-rj-45-10-100-1000base/q/loc/101/209389487.html

There is also the onboard card, which seems to have some support under OpenSolaris. I'm not using it at the moment though.
I updated to b125 with no change. I still can't get networking in a PV domU with the e1000g0 interface.
Hmmm, odd. I figured it out. I had forgotten that I enabled jumbo frames on the e1000g0 interface. I disabled them and networking started working in PV domUs. I don't know why that matters, but it does seem to have fixed the problem.
On 23 Oct 2009, at 4:11am, Travis Tabbal wrote:
> Hmmm.. odd. I figured it out. I forgot that I had enabled jumbo frames on the e1000g0 interface. I disabled them and networking started working in PV domUs. I don't know why that matters, but it does seem to have fixed the problem.

Our frontend/backend drivers don't currently handle jumbo frames. If you look at the kernel log (dmesg), you should see some complaints if the MTU is not 1500. Are they missing?
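For anyone hitting the same symptom, checking and resetting the MTU on the bridge NIC in dom0 looks roughly like this. The link name e1000g0 is from this thread; the exact dladm property syntax may vary by build, so treat this as a sketch.

```shell
# Check the current MTU of the physical link backing the bridge
dladm show-linkprop -p mtu e1000g0

# If it reports a jumbo-frame MTU (e.g. 9000), set it back to the default
dladm set-linkprop -p mtu=1500 e1000g0

# Inside the domU, confirm the frontend interface is also at 1500
ifconfig eth0 | grep -i mtu
```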
> Our frontend/backend drivers don't currently handle jumbo frames. If
> you look at the kernel log (dmesg) you should see some complaints if
> the MTU is not 1500. Are they missing?

I don't recall seeing anything like that in the logs, but it's possible I missed it or didn't realize it was important. Perhaps the docs could be updated to highlight this? Or perhaps it's just me that missed it. I had forgotten I had even enabled that option at all. I just happened to notice the MTU size in the ifconfig -a output and figured it would only take a minute to change it.