Dear Xen Community,

I have been attempting to get a Xen domU installed using an NFS root
for several days now, and could use a little help.

The NFS server is a RHEL4u4 box.

The dom0/domU is stock SuSE 10 (vmlinux-2.6.16.21-0.8-xen.gz, Xen
3.0.2_09749 it appears).

My objective at this point is simply to set up a couple of dom0's
between which to test live migrations (on NFS).

---------------------------------------
xm create fails as follows:
---------------------------------------
...
TCP reno registered
NET: Registered protocol family 1
XENBUS: Timeout connecting to devices!
IP-Config: Device `eth0' not found.
Starting udevd
Creating devices
Loading xennet
netfront: Initialising virtual ethernet driver.
Loading xenblk
Loading reiserfs
Mounting root 10.35.24.60:/RAID/data/xen/suse10vm2
mount server reported tcp not available, falling back to udp
mount: RPC: Remote system error - Network is unreachable
umount: /dev: device is busy
umount: /dev: device is busy
Kernel panic - not syncing: Attempted to kill init!
---------------------------------------

"eth0 not found" and "tcp not available" do look to be problems; I am
not sure how to resolve them.

Because of the network difficulty, here are my ifconfig and brctl
outputs:

---------------------------------------
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              vif0.0
                                                        peth0
                                                        vif28.0
---------------------------------------
eth0      Link encap:Ethernet  HWaddr 00:14:22:FF:3A:06
          inet addr:10.35.39.37  Bcast:10.35.39.255  Mask:255.255.252.0
          inet6 addr: fe80::214:22ff:feff:3a06/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9217426 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9592150 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7788514530 (7427.7 Mb)  TX bytes:9787642385 (9334.2 Mb)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:970608 errors:0 dropped:0 overruns:0 frame:0
          TX packets:970608 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2303759371 (2197.0 Mb)  TX bytes:2303759371 (2197.0 Mb)

peth0     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:9220147 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9592113 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7826963256 (7464.3 Mb)  TX bytes:9826658952 (9371.4 Mb)
          Base address:0xecc0 Memory:f3ee0000-f3f00000

vif0.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:9592152 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9217426 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:9787643953 (9334.2 Mb)  TX bytes:7788514530 (7427.7 Mb)

vif28.0   Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3127 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

xenbr0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:634624 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:39410700 (37.5 Mb)  TX bytes:0 (0.0 b)
---------------------------------------

The dom0 itself can reach the network. I'm not quite sure about the
vif28.0 entry; it is a transient entry that exists only for the lifetime
of the xm create, until I later issue xm destroy. I'm a little confused
by the Cambridge notes on this, because I thought the nomenclature was
supposed to be vif0.28, but I digress.

The config file for the domain I am attempting to create follows:

---------------------------------------
# -*- mode: python; -*-
#----------------------------------------------------------------------------
kernel  = "/RAID/data/xen/bootsles10/vmlinuz-xen"
ramdisk = "/RAID/data/xen/bootsles10/initrd-xen"

#----------------------------------------------------------------------------
# Or use domUloader instead of kernel/ramdisk to get kernel from domU FS
#bootloader = "/usr/lib/xen/boot/domUloader.py"
#bootentry = "hda2:/vmlinuz-xen,/initrd-xen"
#bootentry = "/boot/vmlinuz-xen,/boot/initrd-xen"

#----------------------------------------------------------------------------
# The domain build function. Default is 'linux'.
#builder='linux'

#----------------------------------------------------------------------------
memory = 2048
name   = "vm2"
vcpus  = 1

#----------------------------------------------------------------------------
# Define network interfaces.
# vif = [ '' ]
# vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]
# vif = [ '', 'bridge=xenbr1' ]
# vif = [ '' ]
vif = [ 'mac=00:16:3e:01:00:11, bridge=xenbr0' ]

#----------------------------------------------------------------------------
#disk = [ 'phy:hda1,hda1,w' ]

#----------------------------------------------------------------------------
#vtpm = [ 'instance=1,backend=0' ]

#----------------------------------------------------------------------------
#dhcp="dhcp"
ip="10.35.37.31"
netmask="255.255.252.0"
gateway="10.35.36.1"
hostname="vm2"

# Set root device.
#root = "/dev/hda1 ro"
root = "/dev/nfs"
nfs_server = '10.35.24.60'
nfs_root   = '/RAID/data/xen/suse10vm2'

# Sets runlevel 4.
extra = "4"

#----------------------------------------------------------------------------
# "destroy",        meaning that the domain is cleaned up as normal;
# "restart",        meaning that a new domain is started in place of the old one;
# "preserve",       meaning that no clean-up is done until the domain is
#                   manually destroyed (using xm destroy, for example); or
# "rename-restart", meaning that the old domain is not cleaned up, but is
#                   renamed and a new domain started in its place.
on_poweroff = 'preserve'
on_reboot   = 'restart'
on_crash    = 'restart'
---------------------------------------
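For reference, the ip/netmask/gateway/hostname and nfs_server/nfs_root
settings above are turned by xm into kernel command-line parameters,
which the domU's initrd (or the kernel's own NFS-root code) then uses to
bring up eth0 and mount the root. A sketch, using the standard
Documentation/nfsroot.txt format, of roughly what that command line
should look like (the exact string xm builds may differ, and the
trailing "4" is the runlevel from extra; wrapped here for readability):

    root=/dev/nfs nfsroot=10.35.24.60:/RAID/data/xen/suse10vm2 \
        ip=10.35.37.31::10.35.36.1:255.255.252.0:vm2:eth0:off 4

Note that in the boot log above, "IP-Config: Device `eth0' not found" is
printed before "Loading xennet", i.e. the interface configuration is
attempted before the netfront driver has created eth0, which may be part
of why the NFS mount then finds the network unreachable.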
On Sat, 2007-02-10 at 09:37 -0800, Kraska, Joe A (US SSA) wrote:
> Dear Xen Community,
>
> I have been attempting to get a Xen domU installed using an NFS root
> for several days now, and could use a little help.
>
> The NFS server is a RHEL4u4 box.
>
> The dom0/domU is stock SuSE 10 (vmlinux-2.6.16.21-0.8-xen.gz, Xen
> 3.0.2_09749 it appears).
>
> My objective at this point is simply to set up a couple of dom0's
> between which to test live migrations (on NFS).
>

Check the output of rpcinfo on the NFS server box, and be sure NFS's
locking port isn't firewalled. This port changes (by kernel default)
every time you restart NFS, so NFS can be a pain to firewall if the port
is left random.

Be sure each server's IP -> hostname mapping is in the other NFS
servers' /etc/hosts, and if you are using TCP wrappers, be sure to
adjust them accordingly.

This looks to be more of an NFS misconfiguration; see below:

> ---------------------------------------
> xm create fails as follows:
> ---------------------------------------
> ...
> TCP reno registered
> NET: Registered protocol family 1
> XENBUS: Timeout connecting to devices!
> IP-Config: Device `eth0' not found.
> Starting udevd
> Creating devices
> Loading xennet
> netfront: Initialising virtual ethernet driver.
> Loading xenblk
> Loading reiserfs
> Mounting root 10.35.24.60:/RAID/data/xen/suse10vm2
> mount server reported tcp not available, falling back to udp
> mount: RPC: Remote system error - Network is unreachable

^^^^ This tells me you have the locking port firewalled on the NFS
server, or egress to it (from dom0) blocked.

> umount: /dev: device is busy
> umount: /dev: device is busy
> Kernel panic - not syncing: Attempted to kill init!
> ---------------------------------------
>
> "eth0 not found" and "tcp not available" do look to be problems; I am
> not sure how to resolve them.
>
> Because of the network difficulty, here are my ifconfig and brctl
> outputs:

Double check /etc/hosts on the file server; it should look like this:

    10.x.x.x   nfs1.mydomain.com    nfs1
    10.x.x.x   dom01.mydomain.com   dom01
    10.x.x.x   dom02.mydomain.com   dom02

Be sure /etc/exports refers to the connecting servers by the names you
gave them in /etc/hosts.

On dom0, be sure /etc/hosts 'knows' about your file server.

I really think (if NFS is set up right) it's your locking port. If this
does turn out to be a Xen issue, please post back :)

Best,
--Tim
> Check the output of rpcinfo on the NFS server box, and be sure NFS's
> locking port isn't firewalled.

No firewall is running on dom0, domU, or the NFS server.

> Be sure each server's IP -> hostname mapping is in the other NFS
> servers' /etc/hosts, and if you are using TCP wrappers, be sure to
> adjust them accordingly.

I don't understand why either the client or the server needs to know the
other's name. My NFS config is as follows:

/RAID 10.35.24.0/24(rw,insecure,sync,no_wdelay,no_root_squash) 10.35.36.0/22(rw,insecure,sync,no_wdelay,no_root_squash) titan(rw,insecure,sync,no_wdelay,no_root_squash) 134.120.102.14(rw,insecure,sync,no_wdelay,no_root_squash)

So, if you look at the 10.35.36.0/22 entry, I'm opening the share to
every address on that 22-bit network without restriction. dom0
successfully has this NFS mount point mounted, as do a plethora of other
hosts, all without any agreement on names.

> ^^^^ This tells me you have the locking port firewalled on the NFS
> server, or egress to it (from dom0) blocked.

No firewall, no SELinux or similar on either end.

> Be sure /etc/exports refers to the connecting servers by the names you
> gave them in /etc/hosts.
>
> On dom0, be sure /etc/hosts 'knows' about your file server.

Evil brute force should obviate the need for this, yes?

BTW, the root on the NFS share is a carbon copy of the dom0, not yet
configured with even so much as an IP or hostname, which I expect would
become a problem eventually, but we can agree we're not getting that
far, yes?

Joe.

(Note: duplicate email sent to sender and list; I suggest CCing the list
to salt future searches.)
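Given that dom0 can already mount the export, one way to narrow down the
domU's "mount server reported tcp not available, falling back to udp"
message is to force the protocol on a test mount from dom0 (a sketch;
/mnt/nfstest is just an arbitrary mount point chosen for illustration):

    mkdir -p /mnt/nfstest

    # Force a TCP mount of the domU root export from dom0
    mount -t nfs -o tcp 10.35.24.60:/RAID/data/xen/suse10vm2 /mnt/nfstest
    umount /mnt/nfstest

    # Then force UDP for comparison
    mount -t nfs -o udp 10.35.24.60:/RAID/data/xen/suse10vm2 /mnt/nfstest
    umount /mnt/nfstest

If the forced TCP mount also fails from dom0, the server side is likely
not registering NFS/mountd over TCP (rpcinfo -p on the server should
confirm); if it succeeds, the failure looks specific to the domU's
early-boot networking.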