Hi,

Now that Keir has fixed xend's event channel device file, I'm getting a
little further creating a second domain in Xen. I'm trying to use an NFS
root file system from DOM0 to avoid the pain of re-partitioning my disk.
I've filled /tmp/root with enough packages to chroot into it, and a
writable NFS server is running.

Things seem to start OK:

# xc_dom_create.py
Parsing config file '/etc/xc/defaults'
VM image           : "/boot/xenolinux.gz"
VM ramdisk         : ""
VM memory (MB)     : "64"
VM IP address(es)  : "169.254.1.1"
VM block device(s) : ""
VM cmdline         : "ip=169.254.1.1:169.254.1.0:192.168.1.1:255.255.255.0::eth0:off root=/dev/nfs nfsroot=/tmp/root"
VM started in domain 7.
Console I/O available on TCP port 9600.

For a bit "xc_dom_control.py list" includes the new domain, but soon it
disappears. xen_dmesg.py reports only the killing of the domain and the
release of its task. I get no output at all from "console_client.py
169.254.1.0 9600", nor with DOM1's IP address.

Am I doing something obviously wrong, or are there any more diagnostics
I can try, please?

Thanks,

Sean.
--
Sean Atkinson <sean@netproject.com>
Netproject
> For a bit "xc_dom_control.py list" includes the new domain, but soon
> it disappears. xen_dmesg.py reports only the killing of the domain
> and the release of its task. I get no output at all from
> "console_client.py 169.254.1.0 9600", nor with DOM1's IP address.
>
> Am I doing something obviously wrong, or are there any more
> diagnostics I can try, please?

We need to keep console output after a domain dies, but currently we
don't.

You can cause xc_dom_create.py to automatically turn into the console
client by specifying '-c' on the command line, or by adding
'auto_console = True' to your defaults file.

-- Keir
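Concretely, the two options Keir describes look like this (a sketch;
the defaults path is the one from the log earlier in the thread):

    # one-off: attach to the console automatically after creating the domain
    xc_dom_create.py -c

    # or make it the default by adding this line to /etc/xc/defaults:
    auto_console = True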
Hi,

> We need to keep console output after a domain dies, but currently we
> don't.
>
> You can cause xc_dom_create.py to automatically turn into the console
> client by specifying '-c' on the command line, or by adding
> 'auto_console = True' to your defaults file.

Thanks - that does the trick. Now I can see the kernel boot and read
the errors it hits while mounting the root NFS filesystem:

Looking up port of RPC 100003/2 on 169.254.1.0
RPC: sendmsg returned error 13
portmap: RPC call returned error 13
Root-NFS: Unable to get nfsd port number from server, using default
Looking up port of RPC 100005/1 on 169.254.1.0
RPC: sendmsg returned error 13
portmap: RPC call returned error 13
Root-NFS: Unable to get mountd port number from server, using default
RPC: sendmsg returned error 13
mount: RPC call returned error 13
Root-NFS: Server returned error -13 while mounting /tmp/root
VFS: Unable to mount root fs via NFS, trying floppy.
root_device_name = nfs
kmod: failed to exec /sbin/modprobe -s -k block-major-2, errno = 2
VFS: Cannot open root device "nfs" or 02:00
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on 02:00

There are no reports of any connection attempts to the NFS server, as
there are when I test the mount locally. The portmap service is
running, I've disabled any iptables, and run xen_nat_enable.

Has anybody seen this one before?

Thanks,

Sean.
--
Sean Atkinson <sean@netproject.com>
Netproject
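Error 13 from sendmsg/RPC is EACCES. Before suspecting the network, one
server-side sanity check on DOM0 is worth a look (the export line below
is hypothetical; adjust the path and client address to your setup):

    # /etc/exports on DOM0 -- an NFS root wants rw and no_root_squash
    /tmp/root  169.254.1.1(rw,no_root_squash)

    exportfs -ra    # re-read /etc/exports
    rpcinfo -p      # confirm portmap, mountd and nfsd are registered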
> There are no reports of any connection attempts to the NFS server, as
> there are when I test the mount locally. The portmap service is
> running, I've disabled any iptables, and run xen_nat_enable.

I'd run tcpdump in domain 0 and see if you can see any packets at all.

You shouldn't need to run all of xen_nat_enable -- the only bit you
should need is the 'ifconfig eth0:xen 169.254.1.0' alias.

What IP address are you giving the new domain? (I guess 169.254.1.1.)
What does the kernel command line look like?

You might want to try configuring dom 0 and the new domain with some
other subnet, e.g. 10.10.10.1/2, just to check that it's not something
screwy with 169.254.x.x, which is treated slightly differently to
ensure packets cannot escape the VMM.

Cheers,
Ian
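A minimal version of those diagnostics, assuming dom0's physical
interface is eth0 (substitute yours):

    # in domain 0: watch for any traffic to or from the guest
    tcpdump -n -i eth0 host 169.254.1.1

    # the only piece of xen_nat_enable that should be needed:
    ifconfig eth0:xen 169.254.1.0 up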
Hi,

> I'd run tcpdump in domain 0 and see if you can see any packets at
> all.

Didn't see anything from tcpdump.

> What IP address are you giving the new domain? (I guess 169.254.1.1.)
> What does the kernel command line look like?
>
> You might want to try configuring dom 0 and the new domain with some
> other subnet, e.g. 10.10.10.1/2, just to check that it's not
> something screwy with 169.254.x.x, which is treated slightly
> differently to ensure packets cannot escape the VMM.

I was indeed using 169.254.1.0/1, and moving to your suggested subnet
fixed the networking, so DOM1 boots fine off an NFS root.

Unfortunately that root had a broken distribution, so I've
repartitioned after all to add a clean installation to a new partition,
and perhaps I'll try the VBD/VD stuff for better performance too.

Thanks,

Sean.
--
Sean Atkinson <sean@netproject.com>
Netproject
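For anyone following along, the change presumably amounts to
re-aliasing dom0 and updating the guest's ip= string, roughly like this
(the gateway and netmask are kept from the original cmdline, so treat
them as guesses):

    # dom0 side:
    ifconfig eth0:xen 10.10.10.1 up

    # guest kernel cmdline:
    ip=10.10.10.2:10.10.10.1:192.168.1.1:255.255.255.0::eth0:off root=/dev/nfs nfsroot=/tmp/root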
> > You might want to try configuring dom 0 and the new domain with
> > some other subnet, e.g. 10.10.10.1/2, just to check that it's not
> > something screwy with 169.254.x.x, which is treated slightly
> > differently to ensure packets cannot escape the VMM.
>
> I was indeed using 169.254.1.0/1, and moving to your suggested subnet
> fixed the networking, so DOM1 boots fine off an NFS root.
>
> Unfortunately that root had a broken distribution, so I've
> repartitioned after all to add a clean installation to a new
> partition, and perhaps I'll try the VBD/VD stuff for better
> performance too.

Please can you try backing out the following change and see if that
fixes it for the 169.254.x.x case:

http://xen.bkbits.net:8080/xeno-unstable.bk/diffs/xen/net/dev.c@1.82?nav=index.html|src/.|src/xen|src/xen/net|hist/xen/net/dev.c

I suspect we're somehow killing 169.254.x.x packets even if they're
within the VMM rather than destined for the wire.

Thanks,
Ian
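If BitKeeper isn't handy, one way to back the change out is to save the
diff from that page and reverse-apply it (the file name here is made
up):

    cd xeno-unstable.bk
    patch -R -p1 < dev.c-1.82.diff   # reverse-apply the 1.82 change
    # then rebuild xen and reboot before retesting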
Hi,

> Please can you try backing out the following change and see if that
> fixes it for the 169.254.x.x case:
>
> http://xen.bkbits.net:8080/xeno-unstable.bk/diffs/xen/net/dev.c@1.82?nav=index.html|src/.|src/xen|src/xen/net|hist/xen/net/dev.c
>
> I suspect we're somehow killing 169.254.x.x packets even if they're
> within the VMM rather than destined for the wire.

Only just got around to testing this, but I'm afraid I see exactly the
same RPC errors as without the patch. I note that this is different
behaviour from when no NFS server is running at all; in that case
things simply get stuck looking up the RPC ports.

Also, if I boot from my new root disk partition with the default subnet
details, DOM0 receives no ping replies from 169.254.1.1, and DOM1
complains that 169.254.1.0 is a broadcast address when trying to ping
it; it still receives nothing if I use "-b" to force a broadcast ping.

It seems to be something to do with DOM0's 169.254.1.0 address, but
then how does that work for others?

Cheers,

Sean.
--
Sean Atkinson <sean@netproject.com>
Netproject
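The ping symptoms above, spelled out as commands (run from DOM1;
behaviour is as reported in the thread, not re-verified):

    ping 169.254.1.0      # ping objects that .0 looks like a broadcast address
    ping -b 169.254.1.0   # forcing a broadcast ping still gets no replies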