Hi,

I am trying out Xen for the first time and I am having a few problems with getting it working. The computer is a quad core Intel Xeon with VT enabled, 8GB of RAM and 2 x 15,000rpm SAS drives in RAID1.

I have installed CentOS 5 64bit and installed Xen 3.3.0 via yum. I have successfully booted into dom0. Here is my grub.conf on dom0:

> # grub.conf generated by anaconda
> #
> # Note that you do not have to rerun grub after making changes to this file
> # NOTICE: You have a /boot partition. This means that
> #         all kernel and initrd paths are relative to /boot/, eg.
> #         root (hd0,0)
> #         kernel /vmlinuz-version ro root=/dev/sda2
> #         initrd /initrd-version.img
> #boot=/dev/sda
> default=0
> timeout=5
> splashimage=(hd0,0)/grub/splash.xpm.gz
> hiddenmenu
> title CentOS (2.6.18-92.1.22.el5xen)
>         root (hd0,0)
>         kernel /xen.gz-3.3.0
>         module /vmlinuz-2.6.18-92.1.22.el5xen ro root=LABEL=/
>         module /initrd-2.6.18-92.1.22.el5xen.img
> title CentOS (2.6.18-92.el5)
>         root (hd0,0)
>         kernel /vmlinuz-2.6.18-92.el5 ro root=LABEL=/
>         initrd /initrd-2.6.18-92.el5.img

However, I can't for the life of me install a guest domain. I have been Googling for the last 3 days and I am extremely confused - it seems like there are multiple ways to do it but none are working for me.

I want to use file-based HDDs for the guests, and I want to install off an ISO of CentOS 5 on my HDD for now.

First I tried using the "virt-install" script. Firstly, should I be able to install a fully virtualized guest or will only para-virtualized work? I will show what happens when I try both.

If I try to install it as a paravirtualized guest, I am running this command:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5 \
>     --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso

This creates the domain fine and starts what I assume is the CentOS installation - it asks me to first select a language, and once I have done that it says "What type of media contains the packages to be installed?"
and gives me a list of Local CDROM, Hard drive, NFS, FTP and HTTP. What is this asking me for? If it has already started the installation then surely it knows where to get the packages from? Anyway, if I select Local CDROM it says "Unable to find any devices of the type needed for this installation type. Would you like to manually select your driver or use a driver disk?" I have got no idea what to do from here.

If I try to install a fully virtualized guest using virt-install, here is the command I am running:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5 \
>     --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm

This comes up and it just hangs here:

> Starting install...
> Creating storage file... 100% |=========================| 5.0 GB 00:00
> Creating domain...                                          0 B 00:00
> ▒

My xend-debug.log file says this:

> XendInvalidDomain: <Fault 3: 'b5e19b10-7540-902c-b585-f8783447521f'>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 140, in process
>     resource = self.getResource()
>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 172, in getResource
>     return self.getServer().getResource(self)
>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 351, in getResource
>     return self.root.getRequestResource(req)
>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 39, in getRequestResource
>     return findResource(self, req)
>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 26, in findResource
>     next = resource.getPathResource(pathElement, request)
>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 49, in getPathResource
>     val = self.getChild(path, request)
>   File "/usr/lib64/python2.4/site-packages/xen/web/SrvDir.py", line 71, in getChild
>     val = self.get(x)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 52, in get
>     return self.domain(x)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 44, in domain
>     dom = self.xd.domain_lookup(x)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 529, in domain_lookup
>     raise XendInvalidDomain(str(domid))
> XendInvalidDomain: <Fault 3: 'test1'>

So, now I try doing what looks like the manual way - creating a config file in /etc/xen and using xm create. First I create a file for the HDD:

> dd if=/dev/zero of=test1.img bs=1M count=1 seek=1023

Then I created this config file and placed it at /etc/xen/test:

> # -*- mode: python; -*-
> #============================================================================
> # Python configuration setup for 'xm create'.
> # This script sets the parameters used when a domain is created using 'xm create'.
> # You use a separate script for each domain you want to create, or
> # you can set the parameters for the domain on the xm command line.
> #============================================================================
>
> #----------------------------------------------------------------------------
> # Kernel image file.
> kernel = "/boot/vmlinuz-2.6.18-92.1.22.el5xen"
>
> # Optional ramdisk.
> #ramdisk = "/boot/initrd.gz"
> ramdisk = "/boot/initrd-2.6.18-92.1.22.el5xen.img"
> #ramdisk = "/boot/initrd-centos5-xen.img"
>
> # The domain build function. Default is 'linux'.
> #builder='linux'
>
> # Initial memory allocation (in megabytes) for the new domain.
> #
> # WARNING: Creating a domain with insufficient memory may cause out of
> #          memory errors. The domain needs enough memory to boot kernel
> #          and modules. Allocating less than 32MBs is not recommended.
> memory = 512
>
> # A name for your domain. All domains must have different names.
> name = "Test1"
>
> # 128-bit UUID for the domain. The default behavior is to generate a new UUID
> # on each call to 'xm create'.
> #uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9"
>
> # List of which CPUS this domain is allowed to use, default Xen picks
> #cpus = ""         # leave to Xen to pick
> #cpus = "0"        # all vcpus run on CPU0
> #cpus = "0-3,5,^1" # all vcpus run on cpus 0,2,3,5
> #cpus = ["2", "3"] # VCPU0 runs on CPU2, VCPU1 runs on CPU3
>
> # Number of Virtual CPUS to use, default is 1
> #vcpus = 1
>
> #----------------------------------------------------------------------------
> # Define network interfaces.
>
> # By default, no network interfaces are configured. You may have one created
> # with sensible defaults using an empty vif clause:
> #
> # vif = [ '' ]
> #
> # or optionally override backend, bridge, ip, mac, script, type, or vifname:
> #
> # vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
> #
> # or more than one interface may be configured:
> #
> # vif = [ '', 'bridge=xenbr1' ]
>
> vif = [ '' ]
>
> #----------------------------------------------------------------------------
> # Define the disk devices you want the domain to have access to, and
> # what you want them accessible as.
> # Each disk entry is of the form phy:UNAME,DEV,MODE
> # where UNAME is the device, DEV is the device name the domain will see,
> # and MODE is r for read-only, w for read-write.
>
> #disk = [ 'phy:hda1,hda1,w' ]
> #disk = [ 'file:/vm/test1.img,ioemu:sda1,w', 'phy:/dev/cdrom,hdc:cdrom,r' ]
> disk = [ 'file:/vm/test1.img,ioemu:sda1,w' ]
>
> #----------------------------------------------------------------------------
> # Define frame buffer device.
> #
> # By default, no frame buffer device is configured.
> #
> # To create one using the SDL backend and sensible defaults:
> #
> # vfb = [ 'type=sdl' ]
> #
> # This uses environment variables XAUTHORITY and DISPLAY. You
> # can override that:
> #
> # vfb = [ 'type=sdl,xauthority=/home/bozo/.Xauthority,display=:1' ]
> #
> # To create one using the VNC backend and sensible defaults:
> #
> # vfb = [ 'type=vnc' ]
> #
> # The backend listens on 127.0.0.1 port 5900+N by default, where N is
> # the domain ID. You can override both address and N:
> #
> # vfb = [ 'type=vnc,vnclisten=127.0.0.1,vncdisplay=1' ]
> #
> # Or you can bind the first unused port above 5900:
> #
> # vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncunused=1' ]
> #
> # You can override the password:
> #
> # vfb = [ 'type=vnc,vncpasswd=MYPASSWD' ]
> #
> # Empty password disables authentication. Defaults to the vncpasswd
> # configured in xend-config.sxp.
>
> #----------------------------------------------------------------------------
> # Define to which TPM instance the user domain should communicate.
> # The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
> # where INSTANCE indicates the instance number of the TPM the VM
> # should be talking to and DOM provides the domain where the backend
> # is located.
> # Note that no two virtual machines should try to connect to the same
> # TPM instance. The handling of all TPM instances does require
> # some management effort in so far that VM configuration files (and thus
> # a VM) should be associated with a TPM instance throughout the lifetime
> # of the VM / VM configuration file. The instance number must be
> # greater or equal to 1.
> #vtpm = [ 'instance=1,backend=0' ]
>
> #----------------------------------------------------------------------------
> # Set the kernel command line for the new domain.
> # You only need to define the IP parameters and hostname if the domain's
> # IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
> # You can use 'extra' to set the runlevel and custom environment
> # variables used by custom rc scripts (e.g. VMID=, usr= ).
>
> # Set if you want dhcp to allocate the IP address.
> #dhcp="dhcp"
> # Set netmask.
> #netmask=
> # Set default gateway.
> #gateway=
> # Set the hostname.
> #hostname= "vm%d" % vmid
>
> # Set root device.
> root = "/dev/sda1 ro"
>
> # Root device for nfs.
> #root = "/dev/nfs"
> # The nfs server.
> #nfs_server = '192.0.2.1'
> # Root directory on the nfs server.
> #nfs_root   = '/full/path/to/root/directory'
>
> # Sets runlevel 4.
> extra = "4"
>
> #----------------------------------------------------------------------------
> # Configure the behaviour when a domain exits. There are three 'reasons'
> # for a domain to stop: poweroff, reboot, and crash. For each of these you
> # may specify:
> #
> # "destroy",        meaning that the domain is cleaned up as normal;
> # "restart",        meaning that a new domain is started in place of the old one;
> # "preserve",       meaning that no clean-up is done until the domain is
> #                   manually destroyed (using xm destroy, for example); or
> # "rename-restart", meaning that the old domain is not cleaned up, but is
> #                   renamed and a new domain started in its place.
> #
> # In the event a domain stops due to a crash, you have the additional options:
> #
> # "coredump-destroy", meaning dump the crashed domain's core and then destroy;
> # "coredump-restart", meaning dump the crashed domain's core and then restart.
> #
> # The default is
> #
> # on_poweroff = 'destroy'
> # on_reboot   = 'restart'
> # on_crash    = 'restart'
> #
> # For backwards compatibility we also support the deprecated option restart
> #
> # restart = 'onreboot' means on_poweroff = 'destroy'
> #                            on_reboot   = 'restart'
> #                            on_crash    = 'destroy'
> #
> # restart = 'always'   means on_poweroff = 'restart'
> #                            on_reboot   = 'restart'
> #                            on_crash    = 'restart'
> #
> # restart = 'never'    means on_poweroff = 'destroy'
> #                            on_reboot   = 'destroy'
> #                            on_crash    = 'destroy'
>
> #on_poweroff = 'destroy'
> #on_reboot   = 'restart'
> #on_crash    = 'restart'
>
> #-----------------------------------------------------------------------------
> # Configure PVSCSI devices:
> #
> #vscsi = [ 'PDEV, VDEV' ]
> #
> # PDEV gives the physical SCSI device to be attached to the specified guest
> # domain, in one of the following identifier formats:
> #   - XX:XX:XX:XX (4-tuple in decimal notation, "host:channel:target:lun")
> #   - /dev/sdxx or sdx
> #   - /dev/stxx or stx
> #   - /dev/sgxx or sgx
> #   - result of 'scsi_id -gu -s'.
> #     ex. # scsi_id -gu -s /block/sdb
> #         36000b5d0006a0000006a0257004c0000
> #
> # VDEV gives the virtual SCSI device (a 4-tuple XX:XX:XX:XX) as which
> # the specified guest domain will recognize it.
>
> #vscsi = [ '/dev/sdx, 0:0:0:0' ]
>
> #============================================================================

I then ran this command:

> xm create -c test1

And these are the last few lines of the output before it stops:

> Scanning and configuring dmraid supported devices
> Creating root device.
> Mounting root filesystem.
> mount: could not find filesystem '/dev/root'
> Setting up other filesystems.
> Setting up new root fs
> setuproot: moving /dev failed: No such file or directory
> no fstab.sys, mounting internal defaults
> setuproot: error mounting /proc: No such file or directory
> setuproot: error mounting /sys: No such file or directory
> Switching to new root and running init.
> unmounting old /dev
> unmounting old /proc
> unmounting old /sys
> switchroot: mount failed: No such file or directory
> Kernel panic - not syncing: Attempted to kill init!

So I am utterly stumped, and extremely frustrated by now that I cannot get something seemingly simple to work! Any advice and help would be very greatly appreciated! Thanks in advance :)

Andrew

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
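[Editorial note: two details in the hand-written config above commonly produce exactly this '/dev/root' panic on CentOS 5 PV guests. The 'ioemu:' prefix in the disk line is a qemu/HVM device name, not something a paravirtual guest understands, and reusing dom0's initrd means the guest may never load the Xen block frontend. A hedged sketch of a more PV-typical pairing - the xvda1 device name and the rebuilt-initrd path are assumptions, not from the thread:

```
# PV guest disk: plain virtual device name, no ioemu: prefix
disk = [ 'file:/vm/test1.img,xvda1,w' ]
# root= must name the same device the guest will see
root = "/dev/xvda1 ro"
# a guest initrd built with the Xen block driver, e.g.
#   mkinitrd --with=xenblk /boot/initrd-xenguest.img 2.6.18-92.1.22.el5xen
ramdisk = "/boot/initrd-xenguest.img"
```

Whatever device name is chosen, the disk entry, the root= line, and the modules in the guest initrd have to agree with each other.]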
I don't think installation from ISO is supported or generally done. Typically, you'd use xen-tools or 'rinse' directly to do the install for a Centos guest, just like debootstrap is used for debian based guests.

Best Regards
Nathan Eisenberg
Sr. Systems Administrator
Atlas Networks, LLC
support@atlasnetworks.us
http://support.atlasnetworks.us/portal

-----Original Message-----
From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Andrew Kilham
Sent: Saturday, March 28, 2009 8:17 PM
To: xen-users@lists.xensource.com
Subject: [Xen-users] Problems installing guest domains

> [original message quoted in full - snipped]
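[Editorial note: on the ISO point - a PV guest has no emulated CD-ROM, which is why anaconda asked where the packages were; it simply cannot see the ISO handed to --location as a device. If an ISO-based install is still wanted, one sometimes-used route is to loop-mount the ISO in dom0 and point --location at the resulting tree over NFS or HTTP. A hedged sketch; the mount point, export, and hostname are examples, not from the thread:

```
# dom0: expose the DVD contents as an install tree
mkdir -p /mnt/centos
mount -o loop,ro /vm/CentOS-5.2-x86_64-bin-DVD.iso /mnt/centos
# export /mnt/centos via NFS (or serve it over HTTP), then:
virt-install -n test1 -r 512 -f /vm/test1.img -s 5 --paravirt \
    --location=nfs:dom0-hostname:/mnt/centos
```

For an HVM guest, attaching the ISO as a virtual CD with --cdrom instead of --location is the more usual route.]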
Hi Nathan,

Thanks, I tried installing xen-tools and rinse and using that but it is still not working. I installed rinse etc, then ran this command:

> xen-create-image --hostname=test1 --memory=512 --passwd --dir=/vm \
>     --install-method=rinse --ip=10.0.0.81 --gateway=10.0.0.1 --dist=centos-5

which did everything fine - created the HDD files, downloaded all the packages, etc. I then tried running "xm create -c test1" and it first said:

> File /vm/domains/test1/disk.img is loopback-mounted through /dev/loop0,
> which is mounted in the privileged domain,
> and so cannot be mounted by a guest.

I ran "umount /vm/domains/test1/disk.img", which unmounted it, then ran the xm create command again and got the same error I had been getting when trying to create the domain manually:

> [root@localhost xen]# xm create -c test1.cfg
> Using config file "./test1.cfg".
> Started domain test1
> ing.
> selinux_register_security: Registering secondary module capability
> Capability LSM initialized as secondary
> Mount-cache hash table entries: 256
> , L1 D cache: 32K
> CPU: Physical Processor ID: 0
> CPU: Processor Core ID: 1
> (SMP-)alternatives turned off
> Brought up 1 CPUs
> checking if image is initramfs... it is
> Grant table initialized
> NET: Registered protocol family 16
> ACPI Exception (utmutex-0262): AE_BAD_PARAMETER, Thread 3C7A0 could not acquire Mutex [2] [20060707]
> No dock devices found.
> ACPI Exception (utmutex-0262): AE_BAD_PARAMETER, Thread 3C7A0 could not acquire Mutex [2] [20060707]
> Brought up 1 CPUs
> PCI: Fatal: No PCI config space access function found
> PCI: setting up Xen PCI frontend stub
> ACPI: Interpreter disabled.
> Linux Plug and Play Support v0.97 (c) Adam Belay
> pnp: PnP ACPI: disabled
> xen_mem: Initialising balloon driver.
> usbcore: registered new driver usbfs
> usbcore: registered new driver hub
> PCI: System does not support PCI
> PCI: System does not support PCI
> NetLabel: Initializing
> NetLabel: domain hash size = 128
> NetLabel: protocols = UNLABELED CIPSOv4
> NetLabel: unlabeled traffic allowed by default
> NET: Registered protocol family 2
> IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
> TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
> TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
> TCP: Hash tables configured (established 131072 bind 65536)
> TCP reno registered
> audit: initializing netlink socket (disabled)
> audit(1238411873.141:1): initialized
> VFS: Disk quotas dquot_6.5.1
> Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> Initializing Cryptographic API
> ksign: Installing public key data
> Loading keyring
> - Added public key 6D65AF37871D9CBE
> - User ID: CentOS (Kernel Module GPG key)
> io scheduler noop registered
> io scheduler anticipatory registered
> io scheduler deadline registered
> io scheduler cfq registered (default)
> pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> rtc: IRQ 8 is not free.
> Non-volatile memory driver v1.2
> Linux agpgart interface v0.101 (c) Dave Jones
> RAMDISK driver initialized: 16 RAM disks of 16384K size 4096 blocksize
> Xen virtual console successfully installed as xvc0
> Event-channel device installed.
> Uniform Multi-Platform E-IDE driver Revision: 7.00alpha2
> ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx
> ide-floppy driver 0.99.newide
> usbcore: registered new driver hiddev
> usbcore: registered new driver usbhid
> drivers/usb/input/hid-core.c: v2.6:USB HID core driver
> PNP: No PS/2 controller found. Probing ports directly.
> i8042.c: No controller found.
> mice: PS/2 mouse device common for all mice
> md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
> md: bitmap version 4.39
> TCP bic registered
> Initializing IPsec netlink socket
> NET: Registered protocol family 1
> NET: Registered protocol family 17
> XENBUS: Device with no driver: device/vbd/2050
> XENBUS: Device with no driver: device/vbd/2049
> XENBUS: Device with no driver: device/vif/0
> XENBUS: Device with no driver: device/console/0
> Write protecting the kernel read-only data: 461k
> Red Hat nash version 5.1.19.6 starting
> Mounting proc filesystem
> Mounting sysfs filesystem
> Creating /dev
> Creating initial device nodes
> Setting up hotplug.
> Creating block device nodes.
> Loading ehci-hcd.ko module
> Loading ohci-hcd.ko module
> Loading uhci-hcd.ko module
> USB Universal Host Controller Interface driver v3.0
> Loading jbd.ko module
> Loading ext3.ko module
> Loading scsi_mod.ko module
> SCSI subsystem initialized
> Loading sd_mod.ko module
> Loading scsi_transport_sas.ko module
> Loading mptbase.ko module
> Fusion MPT base driver 3.04.05
> Copyright (c) 1999-2007 LSI Corporation
> Loading mptscsih.ko module
> Loading mptsas.ko module
> Fusion MPT SAS Host driver 3.04.05
> Loading libata.ko module
> Loading ata_piix.ko module
> Waiting for driver initialization.
> Scanning and configuring dmraid supported devices
> Creating root device.
> Mounting root filesystem.
> mount: could not find filesystem '/dev/root'
> Setting up other filesystems.
> Setting up new root fs
> setuproot: moving /dev failed: No such file or directory
> no fstab.sys, mounting internal defaults
> setuproot: error mounting /proc: No such file or directory
> setuproot: error mounting /sys: No such file or directory
> Switching to new root and running init.
> unmounting old /dev
> unmounting old /proc
> unmounting old /sys
> switchroot: mount failed: No such file or directory
> Kernel panic - not syncing: Attempted to kill init!

So I'm back to square one!
Cheers,
Andrew

Nathan Eisenberg wrote:
> I don't think installation from ISO is supported or generally done.
> Typically, you'd use xen-tools or 'rinse' directly to do the install
> for a CentOS guest, just like debootstrap is used for Debian-based guests.
>
> Best Regards
> Nathan Eisenberg
> Sr. Systems Administrator
> Atlas Networks, LLC
> support@atlasnetworks.us
> http://support.atlasnetworks.us/portal
>
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Andrew Kilham
> Sent: Saturday, March 28, 2009 8:17 PM
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Problems installing guest domains
>
> [start of the original message trimmed; it is quoted in full above]
>
> My xend-debug.log file says this:
>
>> XendInvalidDomain: <Fault 3: 'b5e19b10-7540-902c-b585-f8783447521f'>
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 140, in process
>>     resource = self.getResource()
>>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 172, in getResource
>>     return self.getServer().getResource(self)
>>   File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 351, in getResource
>>     return self.root.getRequestResource(req)
>>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 39, in getRequestResource
>>     return findResource(self, req)
>>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 26, in findResource
>>     next = resource.getPathResource(pathElement, request)
>>   File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 49, in getPathResource
>>     val = self.getChild(path, request)
>>   File "/usr/lib64/python2.4/site-packages/xen/web/SrvDir.py", line 71, in getChild
>>     val = self.get(x)
>>   File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 52, in get
>>     return self.domain(x)
>>   File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 44, in domain
>>     dom = self.xd.domain_lookup(x)
>>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 529, in domain_lookup
>>     raise XendInvalidDomain(str(domid))
>> XendInvalidDomain: <Fault 3: 'test1'>
>
> So, now I try doing what looks like the manual way - creating a config
> file in /etc/xen and using xm create.
>
> First I create a file for the HDD:
>
>> dd if=/dev/zero of=test1.img bs=1M count=1 seek=1023
>
> Then I created this config file and placed it at /etc/xen/test:
>
>> # -*- mode: python; -*-
>> #===========================================================================
>> # Python configuration setup for 'xm create'.
>> # This script sets the parameters used when a domain is created using
>> 'xm create'.
>> # This script sets the parameters used when a domain is created using >> ''xm create''. >> # You use a separate script for each domain you want to create, or >> # you can set the parameters for the domain on the xm command line. >> #===========================================================================>> >> #---------------------------------------------------------------------------- >> # Kernel image file. >> kernel = "/boot/vmlinuz-2.6.18-92.1.22.el5xen" >> >> # Optional ramdisk. >> #ramdisk = "/boot/initrd.gz" >> ramdisk = "/boot/initrd-2.6.18-92.1.22.el5xen.img" >> #ramdisk = "/boot/initrd-centos5-xen.img" >> >> # The domain build function. Default is ''linux''. >> #builder=''linux'' >> >> # Initial memory allocation (in megabytes) for the new domain. >> # >> # WARNING: Creating a domain with insufficient memory may cause out of >> # memory errors. The domain needs enough memory to boot kernel >> # and modules. Allocating less than 32MBs is not recommended. >> memory = 512 >> >> # A name for your domain. All domains must have different names. >> name = "Test1" >> >> # 128-bit UUID for the domain. The default behavior is to generate a >> new UUID >> # on each call to ''xm create''. >> #uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9" >> >> # List of which CPUS this domain is allowed to use, default Xen picks >> #cpus = "" # leave to Xen to pick >> #cpus = "0" # all vcpus run on CPU0 >> #cpus = "0-3,5,^1" # all vcpus run on cpus 0,2,3,5 >> #cpus = ["2", "3"] # VCPU0 runs on CPU2, VCPU1 runs on CPU3 >> >> # Number of Virtual CPUS to use, default is 1 >> #vcpus = 1 >> >> #---------------------------------------------------------------------------- >> # Define network interfaces. >> >> # By default, no network interfaces are configured. 
You may have one >> created >> # with sensible defaults using an empty vif clause: >> # >> # vif = [ '''' ] >> # >> # or optionally override backend, bridge, ip, mac, script, type, or >> vifname: >> # >> # vif = [ ''mac=00:16:3e:00:00:11, bridge=xenbr0'' ] >> # >> # or more than one interface may be configured: >> # >> # vif = [ '''', ''bridge=xenbr1'' ] >> >> vif = [ '''' ] >> >> #---------------------------------------------------------------------------- >> # Define the disk devices you want the domain to have access to, and >> # what you want them accessible as. >> # Each disk entry is of the form phy:UNAME,DEV,MODE >> # where UNAME is the device, DEV is the device name the domain will see, >> # and MODE is r for read-only, w for read-write. >> >> #disk = [ ''phy:hda1,hda1,w'' ] >> #disk = [ ''file:/vm/test1.img,ioemu:sda1,w'', >> ''phy:/dev/cdrom,hdc:cdrom,r'' ] >> disk = [ ''file:/vm/test1.img,ioemu:sda1,w'' ] >> >> #---------------------------------------------------------------------------- >> # Define frame buffer device. >> # >> # By default, no frame buffer device is configured. >> # >> # To create one using the SDL backend and sensible defaults: >> # >> # vfb = [ ''type=sdl'' ] >> # >> # This uses environment variables XAUTHORITY and DISPLAY. You >> # can override that: >> # >> # vfb = [ ''type=sdl,xauthority=/home/bozo/.Xauthority,display=:1'' ] >> # >> # To create one using the VNC backend and sensible defaults: >> # >> # vfb = [ ''type=vnc'' ] >> # >> # The backend listens on 127.0.0.1 port 5900+N by default, where N is >> # the domain ID. You can override both address and N: >> # >> # vfb = [ ''type=vnc,vnclisten=127.0.0.1,vncdisplay=1'' ] >> # >> # Or you can bind the first unused port above 5900: >> # >> # vfb = [ ''type=vnc,vnclisten=0.0.0.0,vncunused=1'' ] >> # >> # You can override the password: >> # >> # vfb = [ ''type=vnc,vncpasswd=MYPASSWD'' ] >> # >> # Empty password disables authentication. 
Defaults to the vncpasswd >> # configured in xend-config.sxp. >> >> #---------------------------------------------------------------------------- >> # Define to which TPM instance the user domain should communicate. >> # The vtpm entry is of the form ''instance=INSTANCE,backend=DOM'' >> # where INSTANCE indicates the instance number of the TPM the VM >> # should be talking to and DOM provides the domain where the backend >> # is located. >> # Note that no two virtual machines should try to connect to the same >> # TPM instance. The handling of all TPM instances does require >> # some management effort in so far that VM configration files (and thus >> # a VM) should be associated with a TPM instance throughout the lifetime >> # of the VM / VM configuration file. The instance number must be >> # greater or equal to 1. >> #vtpm = [ ''instance=1,backend=0'' ] >> >> #---------------------------------------------------------------------------- >> # Set the kernel command line for the new domain. >> # You only need to define the IP parameters and hostname if the domain''s >> # IP config doesn''t, e.g. in ifcfg-eth0 or via DHCP. >> # You can use ''extra'' to set the runlevel and custom environment >> # variables used by custom rc scripts (e.g. VMID=, usr= ). >> >> # Set if you want dhcp to allocate the IP address. >> #dhcp="dhcp" >> # Set netmask. >> #netmask>> # Set default gateway. >> #gateway>> # Set the hostname. >> #hostname= "vm%d" % vmid >> >> # Set root device. >> root = "/dev/sda1 ro" >> >> # Root device for nfs. >> #root = "/dev/nfs" >> # The nfs server. >> #nfs_server = ''192.0.2.1'' >> # Root directory on the nfs server. >> #nfs_root = ''/full/path/to/root/directory'' >> >> # Sets runlevel 4. >> extra = "4" >> >> #---------------------------------------------------------------------------- >> # Configure the behaviour when a domain exits. There are three ''reasons'' >> # for a domain to stop: poweroff, reboot, and crash. 
For each of these you >> # may specify: >> # >> # "destroy", meaning that the domain is cleaned up as normal; >> # "restart", meaning that a new domain is started in place of the old >> # one; >> # "preserve", meaning that no clean-up is done until the domain is >> # manually destroyed (using xm destroy, for example); or >> # "rename-restart", meaning that the old domain is not cleaned up, but is >> # renamed and a new domain started in its place. >> # >> # In the event a domain stops due to a crash, you have the additional >> options: >> # >> # "coredump-destroy", meaning dump the crashed domain''s core and then >> destroy; >> # "coredump-restart'', meaning dump the crashed domain''s core and the >> restart. >> # >> # The default is >> # >> # on_poweroff = ''destroy'' >> # on_reboot = ''restart'' >> # on_crash = ''restart'' >> # >> # For backwards compatibility we also support the deprecated option >> restart >> # >> # restart = ''onreboot'' means on_poweroff = ''destroy'' >> # on_reboot = ''restart'' >> # on_crash = ''destroy'' >> # >> # restart = ''always'' means on_poweroff = ''restart'' >> # on_reboot = ''restart'' >> # on_crash = ''restart'' >> # >> # restart = ''never'' means on_poweroff = ''destroy'' >> # on_reboot = ''destroy'' >> # on_crash = ''destroy'' >> >> #on_poweroff = ''destroy'' >> #on_reboot = ''restart'' >> #on_crash = ''restart'' >> >> #----------------------------------------------------------------------------- >> # Configure PVSCSI devices: >> # >> #vscsi=[ ''PDEV, VDEV'' ] >> # >> # PDEV gives physical SCSI device to be attached to specified guest >> # domain by one of the following identifier format. >> # - XX:XX:XX:XX (4-tuples with decimal notation which shows >> # "host:channel:target:lun") >> # - /dev/sdxx or sdx >> # - /dev/stxx or stx >> # - /dev/sgxx or sgx >> # - result of ''scsi_id -gu -s''. >> # ex. 
>> #  scsi_id -gu -s /block/sdb
>> #  36000b5d0006a0000006a0257004c0000
>> #
>> # VDEV gives virtual SCSI device by 4-tuples (XX:XX:XX:XX) as
>> # which the specified guest domain recognize.
>> #
>>
>> #vscsi = [ '/dev/sdx, 0:0:0:0' ]
>>
>> #===========================================================================
>
> I then ran this command:
>
>> xm create -c test1
>
> And this is the last few lines of the output before it stops:
>
>> Scanning and configuring dmraid supported devices
>> Creating root device.
>> Mounting root filesystem.
>> mount: could not find filesystem '/dev/root'
>> Setting up other filesystems.
>> Setting up new root fs
>> setuproot: moving /dev failed: No such file or directory
>> no fstab.sys, mounting internal defaults
>> setuproot: error mounting /proc: No such file or directory
>> setuproot: error mounting /sys: No such file or directory
>> Switching to new root and running init.
>> unmounting old /dev
>> unmounting old /proc
>> unmounting old /sys
>> switchroot: mount failed: No such file or directory
>> Kernel panic - not syncing: Attempted to kill init!
>
> So I am utterly stumped and extremely frustrated by now that I cannot
> get something that is seemingly simple to work!
>
> Any advice and help would be very greatly appreciated!
>
> Thanks in advance :)
>
> Andrew
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xensource.com
> http://lists.xensource.com/xen-users
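[As an aside, the dd invocation quoted above builds the guest image as a sparse file: it writes a single 1 MiB block at an offset of 1023 MiB, so the file's apparent size is 1 GiB while almost no disk blocks are actually allocated. A runnable sketch of the same trick, using /tmp instead of /vm so it is safe to try:

```shell
# Sparse-file creation, as in the post's "dd if=/dev/zero of=test1.img
# bs=1M count=1 seek=1023": seek past 1023 MiB, then write one 1 MiB block.
img=/tmp/test1.img
dd if=/dev/zero of="$img" bs=1M count=1 seek=1023 2>/dev/null

stat -c '%s' "$img"   # apparent size: 1073741824 bytes (1 GiB)
du -k "$img"          # allocated size: on the order of 1 MiB only
```

Delete /tmp/test1.img afterwards if you try this; on a real dom0 you would of course create the file where the domain config expects it.]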
On Sun, Mar 29, 2009 at 10:17 AM, Andrew Kilham <andrew@estyles.com.au> wrote:
> Firstly, if I try to install it as a paravirtualized guest I am running
> this command:
>
>> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
>> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso

First of all, paravirtualized guests cannot install off CD-ROM media (at
least not when installing CentOS using virt-install). "location" is
supposed to be the location of an install server, like
http://mirror.centos.org/centos/5/os/x86_64/

> If I try to install a fully virtualized guest using virt-install, here
> is the command I am running:
>
>> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
>> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm
>
> This comes up and it just hangs here:
>
>> Starting install...
>> Creating storage file... 100% |=========================| 5.0 GB 00:00
>> Creating domain...                                         0 B 00:00
>> ▒

That should be

virt-install -n test1 -r 512 -f /vm/test1.img -s 5 --cdrom=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm

and you need to have virt-viewer installed. If you're working on a remote
host, things might get a little complicated, as you need to enable X
forwarding and have a fast-enough link. If everything works out, you
should get a GUI of the installation.

See http://wiki.centos.org/HowTos/Xen/InstallingCentOSDomU for a quick
alternative way of installing, or
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Virtualization_Guide/index.html
for the complete manual.

Regards,
Fajar

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
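[Fajar's two fixes can be written out as complete command lines. This is a hedged sketch: the guest name, image path, and ISO path are the ones from the original post, the mirror URL is the one Fajar cites, and the commands are only printed here rather than executed — adjust them for your own host before running:

```shell
# PV guest: --location must point at an installable tree (HTTP/FTP/NFS),
# not at an ISO file.
pv_cmd="virt-install -n test1 -r 512 -f /vm/test1.img -s 5 \
  --paravirt --location=http://mirror.centos.org/centos/5/os/x86_64/"

# HVM guest: boot the ISO as a virtual CD with --cdrom instead of --location.
hvm_cmd="virt-install -n test1 -r 512 -f /vm/test1.img -s 5 \
  --hvm --cdrom=/vm/CentOS-5.2-x86_64-bin-DVD.iso"

printf '%s\n' "$pv_cmd"
printf '%s\n' "$hvm_cmd"
```

For the HVM case you then watch the graphical installer through virt-viewer (or VNC), as Fajar notes.]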
> I don't think installation from ISO is supported or generally done.
> Typically, you'd use xen-tools or 'rinse' directly to do the install
> for a CentOS guest, just like debootstrap is used for Debian-based guests.

Debootstrap has nothing to do with the issues described below. The
standard virt-install procedure on Xen RHEL 5.x servers requires a local
(or remote) NFS share or HTTP mirror, created by loop-mounting the
corresponding ISO image onto the NFS-shared directory or a folder such
as /var/www/rhel, with the local httpd daemon (Apache HTTP Server on
RHEL) up and running.

Boris.

--- On Sun, 3/29/09, Nathan Eisenberg <nathan@atlasnetworks.us> wrote:

From: Nathan Eisenberg <nathan@atlasnetworks.us>
Subject: RE: [Xen-users] Problems installing guest domains
To: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>
Date: Sunday, March 29, 2009, 3:16 PM

[Nathan's message and Andrew's original post, quoted in full above, are
trimmed here down to the passages Boris comments on inline.]

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso

************************************************************
View for virt-install :-
http://lxer.com/module/newswire/view/95262/index.html
In general, --location should point to an NFS share or a local simulated
HTTP mirror served via an Apache server.
************************************************************

> This creates the domain fine and starts what I assume is the CentOS
> installation - it asks me to first select a language, once I have done
> that it says "What type of media contains the packages to be installed?"
> and gives me a list of Local CDROM, Hard drive, NFS, FTP and HTTP. What
> is this asking me for?

********************************************************
Create a local NFS share via a loop mount and point the installer to it;
you'll be done. Or create a local HTTP mirror:
# mkdir -p /var/www/rhel
# mount -o loop /etc/xen/isos/rhel.iso /var/www/rhel
and point the installer to http://localhost/var/www/rhel
All of the above is standard technique described in Red Hat's online
manuals.
*******************************************************

> If I try to install a fully virtualized guest using virt-install, here
> is the command I am running:
>
> virt-install -n test1 -r 512 -f /vm/test1.img -s 5
> --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm

****************************************
Error in the virt-install command line: for an HVM install the ISO
should be passed as the CD device, i.e.
-c /vm/CentOS-5.2-x86_64-bin-DVD.iso
*****************************************

> This comes up and it just hangs here:
>
> Starting install...
> Creating storage file... 100% |=========================| 5.0 GB 00:00
> Creating domain...
> [remainder of the quoted message trimmed]

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
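[Boris's loop-mount recipe can be collected into a short script. This is a hedged sketch, not his literal commands: it assumes a CentOS 5 dom0 where Apache's default DocumentRoot is /var/www/html (so the mount point differs from his /var/www/rhel example), and since the steps need root they are wrapped in a function rather than run directly:

```shell
# Expose a CentOS DVD ISO as a local HTTP install tree (sketch; run the
# function as root on a real dom0). The ISO path is the one from the thread.
as_root() {
    mkdir -p /var/www/html/centos
    # Loop-mount the ISO read-only so Apache can serve its contents.
    mount -o loop,ro /vm/CentOS-5.2-x86_64-bin-DVD.iso /var/www/html/centos
    service httpd start
}

# The URL to hand to anaconda's HTTP prompt, or to virt-install --location:
echo "http://localhost/centos/"
```

An NFS share exported from the same mount point works equally well; the point is that the PV installer needs a browsable install tree, not a raw ISO.]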
Moreover, if you build and install the most recent version of Xen from xensource.org, you are obviously missing the libvirtd daemon and nice stuff like the python-virtinst package, which provides the virt-install command-line utility. Below is another approach for installing PV DomUs, via two pygrub profiles (an installation one and a runtime one): http://lxer.com/module/newswire/view/112300/index.html Boris.

--- On Mon, 3/30/09, Boris Derzhavets <bderzhavets@yahoo.com> wrote: From: Boris Derzhavets <bderzhavets@yahoo.com> Subject: RE: [Xen-users] Problems installing guest domains To: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>, "Nathan Eisenberg" <nathan@atlasnetworks.us> Date: Monday, March 30, 2009, 4:00 AM

> I don't think installation from ISO is supported or generally done. > Typically, you'd use xen-tools or 'rinse' directly to do the install > for a Centos guest, just like debootstrap is used for debian based guests.

Debootstrap has nothing to do with the issues described below. The standard virt-install procedure on Xen RHEL 5.x servers requires a local (or remote) NFS share or HTTP mirror, created by loop-mounting the corresponding ISO image onto the NFS-shared directory, or onto a folder such as /var/www/rhel with the local httpd daemon up and running (Apache HTTP Server on RHEL). Boris.

--- On Sun, 3/29/09, Nathan Eisenberg <nathan@atlasnetworks.us> wrote: From: Nathan Eisenberg <nathan@atlasnetworks.us> Subject: RE: [Xen-users] Problems installing guest domains To: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com> Date: Sunday, March 29, 2009, 3:16 PM

I don't think installation from ISO is supported or generally done. Typically, you'd use xen-tools or 'rinse' directly to do the install for a Centos guest, just like debootstrap is used for debian based guests. Best Regards Nathan Eisenberg Sr.
Systems Administrator Atlas Networks, LLC support@atlasnetworks.us http://support.atlasnetworks.us/portal

-----Original Message----- From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Andrew Kilham Sent: Saturday, March 28, 2009 8:17 PM To: xen-users@lists.xensource.com Subject: [Xen-users] Problems installing guest domains

Hi, I am trying out Xen for the first time and I am having a few problems getting it working. The computer is a quad-core Intel Xeon with VT enabled, 8 GB of RAM and 2 x 15,000 rpm SAS drives in RAID1. I have installed CentOS 5 64-bit and installed Xen 3.3.0 via yum. I have successfully booted into dom0. Here is my grub.conf on dom0:

> # grub.conf generated by anaconda > # > # Note that you do not have to rerun grub after making changes to this > # file > # NOTICE: You have a /boot partition. This means that > # all kernel and initrd paths are relative to /boot/, eg. > # root (hd0,0) > # kernel /vmlinuz-version ro root=/dev/sda2 > # initrd /initrd-version.img > #boot=/dev/sda > default=0 > timeout=5 > splashimage=(hd0,0)/grub/splash.xpm.gz > hiddenmenu > title CentOS (2.6.18-92.1.22.el5xen) > root (hd0,0) > kernel /xen.gz-3.3.0 > module /vmlinuz-2.6.18-92.1.22.el5xen ro root=LABEL=/ > module /initrd-2.6.18-92.1.22.el5xen.img > title CentOS (2.6.18-92.el5) > root (hd0,0) > kernel /vmlinuz-2.6.18-92.el5 ro root=LABEL=/ > initrd /initrd-2.6.18-92.el5.img

However, I can't for the life of me install a guest domain. I have been Googling for the last 3 days and I am extremely confused - it seems like there are multiple ways to do it but none are working for me. I want to use file-based HDDs for the guests, and I want to install off an ISO of CentOS 5 on my HDD for now. First I tried using the "virt-install" script. Should I be able to install a fully virtualized guest, or will only para-virtualized work? I will show what happens when I try both.
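The replies in this thread point out that --location cannot take a raw ISO and needs an NFS share or HTTP mirror instead. A concrete sketch of the loop-mounted HTTP mirror approach, where the directory name is an assumption, the commands need root, and /var/www/html is the default Apache DocumentRoot on CentOS 5:

```shell
# Expose the install DVD over HTTP by loop-mounting it into the web root
mkdir -p /var/www/html/centos5
mount -o loop /vm/CentOS-5.2-x86_64-bin-DVD.iso /var/www/html/centos5
service httpd start

# Point virt-install at the mirror instead of at the raw ISO file
virt-install -n test1 -r 512 -f /vm/test1.img -s 5 -p \
    --location=http://localhost/centos5
```

An NFS share works the same way: export the mount point in /etc/exports and pass a `--location=nfs:host:/path` URL instead.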
Firstly, if I try to install it as a paravirtualized guest I am running this command:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5 > --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso

************************************************************ A view of virt-install: http://lxer.com/module/newswire/view/95262/index.html In general, --location should point to an NFS share or to a local simulated HTTP mirror via Apache Server. ************************************************************

This creates the domain fine and starts what I assume is the CentOS installation - it asks me to first select a language; once I have done that it says "What type of media contains the packages to be installed?" and gives me a list of Local CDROM, Hard drive, NFS, FTP and HTTP. What is this asking me for?

******************************************************** Create a local NFS share via loop-mount and point the installer to it, and you are done. Or create a local HTTP mirror: # mkdir -p /var/www/rhel # mount -o loop /etc/xen/isos/rhel.iso /var/www/rhel and point the installer to http://localhost/rhel (assuming /var/www/rhel is served by Apache). All of the above is standard technique described in Red Hat's online manuals. *******************************************************

If it has already started the installation then surely it knows where to get the packages from? Anyway, if I select Local CDROM it says "Unable to find any devices of the type needed for this installation type. Would you like to manually select your driver or use a driver disk?" I have no idea what to do from here. If I try to install a fully virtualized guest using virt-install, here is the command I am running:

> virt-install -n test1 -r 512 -f /vm/test1.img -s 5 > --location=/vm/CentOS-5.2-x86_64-bin-DVD.iso --hvm

**************************************** Use -c /vm/CentOS-5.2-x86_64-bin-DVD.iso instead - there is an error in the virt-install command line (for a fully virtualized guest the ISO is passed as a CD-ROM with -c, not via --location). *****************************************

This comes up and it just hangs here:

> Starting install... > Creating storage file...
100% |=========================| 5.0 GB 00:00 > Creating domain... 0 B 00:00

My xend-debug.log file says this:

XendInvalidDomain: <Fault 3: 'b5e19b10-7540-902c-b585-f8783447521f'> Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 140, in process resource = self.getResource() File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 172, in getResource return self.getServer().getResource(self) File "/usr/lib64/python2.4/site-packages/xen/web/httpserver.py", line 351, in getResource return self.root.getRequestResource(req) File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 39, in getRequestResource return findResource(self, req) File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 26, in findResource next = resource.getPathResource(pathElement, request) File "/usr/lib64/python2.4/site-packages/xen/web/resource.py", line 49, in getPathResource val = self.getChild(path, request) File "/usr/lib64/python2.4/site-packages/xen/web/SrvDir.py", line 71, in getChild val = self.get(x) File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 52, in get return self.domain(x) File "/usr/lib64/python2.4/site-packages/xen/xend/server/SrvDomainDir.py", line 44, in domain dom = self.xd.domain_lookup(x) File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line 529, in domain_lookup raise XendInvalidDomain(str(domid)) XendInvalidDomain: <Fault 3: 'test1'>

So now I try what looks like the manual way - creating a config file in /etc/xen and using xm create. First I create a file for the HDD:

> dd if=/dev/zero of=test1.img bs=1M count=1 seek=1023

Then I created this config file and placed it at /etc/xen/test:

> # -*- mode: python; -*- > #=========================================================================== > # Python configuration setup for 'xm create'. > # This script sets the parameters used when a domain is created using > # 'xm create'.
> # You use a separate script for each domain you want to create, or > # you can set the parameters for the domain on the xm command line. > #=========================================================================== > > #---------------------------------------------------------------------------- > # Kernel image file. > kernel = "/boot/vmlinuz-2.6.18-92.1.22.el5xen" > > # Optional ramdisk. > #ramdisk = "/boot/initrd.gz" > ramdisk = "/boot/initrd-2.6.18-92.1.22.el5xen.img" > #ramdisk = "/boot/initrd-centos5-xen.img" > > # The domain build function. Default is 'linux'. > #builder='linux' > > # Initial memory allocation (in megabytes) for the new domain. > # > # WARNING: Creating a domain with insufficient memory may cause out of > # memory errors. The domain needs enough memory to boot the kernel > # and modules. Allocating less than 32 MB is not recommended. > memory = 512 > > # A name for your domain. All domains must have different names. > name = "Test1" > > # 128-bit UUID for the domain. The default behavior is to generate a new UUID > # on each call to 'xm create'. > #uuid = "06ed00fe-1162-4fc4-b5d8-11993ee4a8b9" > > # List of which CPUs this domain is allowed to use; by default Xen picks. > #cpus = "" # leave to Xen to pick > #cpus = "0" # all vcpus run on CPU0 > #cpus = "0-3,5,^1" # all vcpus run on cpus 0,2,3,5 > #cpus = ["2", "3"] # VCPU0 runs on CPU2, VCPU1 runs on CPU3 > > # Number of Virtual CPUs to use, default is 1 > #vcpus = 1 > > #---------------------------------------------------------------------------- > # Define network interfaces. > > # By default, no network interfaces are configured.
You may have one created > # with sensible defaults using an empty vif clause: > # > # vif = [ '' ] > # > # or optionally override backend, bridge, ip, mac, script, type, or > # vifname: > # > # vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ] > # > # or more than one interface may be configured: > # > # vif = [ '', 'bridge=xenbr1' ] > > vif = [ '' ] > > #---------------------------------------------------------------------------- > # Define the disk devices you want the domain to have access to, and > # what you want them accessible as. > # Each disk entry is of the form phy:UNAME,DEV,MODE > # where UNAME is the device, DEV is the device name the domain will see, > # and MODE is r for read-only, w for read-write. > > #disk = [ 'phy:hda1,hda1,w' ] > #disk = [ 'file:/vm/test1.img,ioemu:sda1,w', > # 'phy:/dev/cdrom,hdc:cdrom,r' ] > disk = [ 'file:/vm/test1.img,ioemu:sda1,w' ] > > #---------------------------------------------------------------------------- > # Define frame buffer device. > # > # By default, no frame buffer device is configured. > # > # To create one using the SDL backend and sensible defaults: > # > # vfb = [ 'type=sdl' ] > # > # This uses environment variables XAUTHORITY and DISPLAY. You > # can override that: > # > # vfb = [ 'type=sdl,xauthority=/home/bozo/.Xauthority,display=:1' ] > # > # To create one using the VNC backend and sensible defaults: > # > # vfb = [ 'type=vnc' ] > # > # The backend listens on 127.0.0.1 port 5900+N by default, where N is > # the domain ID. You can override both address and N: > # > # vfb = [ 'type=vnc,vnclisten=127.0.0.1,vncdisplay=1' ] > # > # Or you can bind the first unused port above 5900: > # > # vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncunused=1' ] > # > # You can override the password: > # > # vfb = [ 'type=vnc,vncpasswd=MYPASSWD' ] > # > # Empty password disables authentication.
Defaults to the vncpasswd > # configured in xend-config.sxp. > > #---------------------------------------------------------------------------- > # Define to which TPM instance the user domain should communicate. > # The vtpm entry is of the form 'instance=INSTANCE,backend=DOM' > # where INSTANCE indicates the instance number of the TPM the VM > # should be talking to and DOM provides the domain where the backend > # is located. > # Note that no two virtual machines should try to connect to the same > # TPM instance. The handling of all TPM instances does require > # some management effort, in so far as VM configuration files (and thus > # a VM) should be associated with a TPM instance throughout the lifetime > # of the VM / VM configuration file. The instance number must be > # greater than or equal to 1. > #vtpm = [ 'instance=1,backend=0' ] > > #---------------------------------------------------------------------------- > # Set the kernel command line for the new domain. > # You only need to define the IP parameters and hostname if the domain's > # IP config doesn't, e.g. in ifcfg-eth0 or via DHCP. > # You can use 'extra' to set the runlevel and custom environment > # variables used by custom rc scripts (e.g. VMID=, usr= ). > > # Set if you want dhcp to allocate the IP address. > #dhcp="dhcp" > # Set netmask. > #netmask > # Set default gateway. > #gateway > # Set the hostname. > #hostname= "vm%d" % vmid > > # Set root device. > root = "/dev/sda1 ro" > > # Root device for nfs. > #root = "/dev/nfs" > # The nfs server. > #nfs_server = '192.0.2.1' > # Root directory on the nfs server. > #nfs_root = '/full/path/to/root/directory' > > # Sets runlevel 4. > extra = "4" > > #---------------------------------------------------------------------------- > # Configure the behaviour when a domain exits. There are three 'reasons' > # for a domain to stop: poweroff, reboot, and crash.
For each of these you > # may specify: > # > # "destroy", meaning that the domain is cleaned up as normal; > # "restart", meaning that a new domain is started in place of the old > # one; > # "preserve", meaning that no clean-up is done until the domain is > # manually destroyed (using xm destroy, for example); or > # "rename-restart", meaning that the old domain is not cleaned up, but is > # renamed and a new domain started in its place. > # > # In the event a domain stops due to a crash, you have the additional > # options: > # > # "coredump-destroy", meaning dump the crashed domain's core and then > # destroy; > # "coredump-restart", meaning dump the crashed domain's core and then > # restart. > # > # The default is > # > # on_poweroff = 'destroy' > # on_reboot = 'restart' > # on_crash = 'restart' > # > # For backwards compatibility we also support the deprecated option restart > # > # restart = 'onreboot' means on_poweroff = 'destroy' > # on_reboot = 'restart' > # on_crash = 'destroy' > # > # restart = 'always' means on_poweroff = 'restart' > # on_reboot = 'restart' > # on_crash = 'restart' > # > # restart = 'never' means on_poweroff = 'destroy' > # on_reboot = 'destroy' > # on_crash = 'destroy' > > #on_poweroff = 'destroy' > #on_reboot = 'restart' > #on_crash = 'restart' > > #----------------------------------------------------------------------------- > # Configure PVSCSI devices: > # > #vscsi = [ 'PDEV, VDEV' ] > # > # PDEV gives the physical SCSI device to be attached to the specified guest > # domain, in one of the following identifier formats: > # - XX:XX:XX:XX (a 4-tuple in decimal notation giving > # "host:channel:target:lun") > # - /dev/sdxx or sdx > # - /dev/stxx or stx > # - /dev/sgxx or sgx > # - the result of 'scsi_id -gu -s', > # e.g. # scsi_id -gu -s /block/sdb > # 36000b5d0006a0000006a0257004c0000 > # > # VDEV gives the virtual SCSI device, as a 4-tuple (XX:XX:XX:XX), under > # which the specified guest domain recognizes it.
> # > > #vscsi = [ '/dev/sdx, 0:0:0:0' ] > > #===========================================================================

I then ran this command:

> xm create -c test1

And this is the last few lines of the output before it stops:

> Scanning and configuring dmraid supported devices > Creating root device. > Mounting root filesystem. > mount: could not find filesystem '/dev/root' > Setting up other filesystems. > Setting up new root fs > setuproot: moving /dev failed: No such file or directory > no fstab.sys, mounting internal defaults > setuproot: error mounting /proc: No such file or directory > setuproot: error mounting /sys: No such file or directory > Switching to new root and running init. > unmounting old /dev > unmounting old /proc > unmounting old /sys > switchroot: mount failed: No such file or directory > Kernel panic - not syncing: Attempted to kill init!

So I am utterly stumped and extremely frustrated by now that I cannot get something seemingly simple to work! Any advice and help would be very greatly appreciated! Thanks in advance :) Andrew

_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
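As an aside on the dd command used above to create the disk image: writing a single 1 MiB block at a 1023 MiB offset produces a sparse file whose apparent size is 1 GiB but which allocates almost no disk space until the guest writes to it. A quick illustration (the file name is arbitrary):

```shell
# Recreate the sparse-image trick from the thread: skip 1023 MiB, write 1 MiB
dd if=/dev/zero of=/tmp/demo.img bs=1M count=1 seek=1023

# Compare the apparent size in bytes with the 512-byte blocks actually allocated
stat -c 'bytes=%s allocated_blocks=%b' /tmp/demo.img
```

On a filesystem without sparse-file support (or after copying the image with a tool that is not sparse-aware), the file will occupy the full 1 GiB.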