nbitspoken
2006-Aug-18 14:39 UTC
[Xen-users] Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
Hello,
I am trying to boot an RHAS 4 release 3 guest kernel (the binary release
available through Red Hat Network) from a SLES 10 dom0 host (the binary
release available through SUSE's software distribution channel). Both
systems are installed as physical partitions in a standard multiboot
configuration on the single built-in 5400 RPM hard drive of a recent-vintage
HP Pavilion (zd7380) notebook PC with 2 GB of RAM (more like 1.5 GB
according to 'free').
I have been struggling with this problem for several days, following an
otherwise uneventful boot into the SLES 10 domain 0 kernel (the one
exception being the proprietary nvidia driver I had to uninstall). The
problem is that I cannot get beyond xm's catatonic retort:
Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
whenever I try to boot the guest domain mentioned above from the command
line (per the Xen 3.0 user manual):
xm create [-c] /etc/xen/vm/rhas4
where rhas4 is the name of the configuration file for the guest domain
(see below). To offset the natural suspicion that I have simply got the
path wrong, I submit the following transcript of my 'path verification'
procedure (executed from a running Dom0):
reproach:~ # mount /dev/hda7 /mnt/hda7
# Verify that the desired device has been exported to the guest domain:
reproach:~ # grep phy /etc/xen/vm/rhas4
# Each disk entry is of the form phy:UNAME,DEV,MODE
disk = [ 'phy:hda7,hda7,w' ]
# disk = [ 'phy:vg1/orabase1,/oracle/orabase1,w' ]
# disk = [ 'phy:vg1/oas1,vg1,/oracle/oas1,w' ]
# Verify that the /etc/fstab file in the guest domain agrees with the
# exported name of the desired device:
reproach:~ # grep hda7 /mnt/hda7/etc/fstab
/dev/hda7 / ext3 defaults 1 1
# Compare the kernel and ramdisk lines from the config file with the paths,
# relative to the exported device, of the files to which these lines purport
# to refer:
reproach:~ # cat /etc/xen/vm/rhas4 | grep "kernel ="
kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"
reproach:~ # ls /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
/mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
reproach:~ # cat /etc/xen/vm/rhas4 | grep "ramdisk ="
ramdisk = "/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img
reproach:~ # ls /mnt/hda7/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img
/mnt/hda7/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img
I have indented all output lines to facilitate visual comparison of the
relevant lines.
# Now attempt to boot the guest domain using the xm command-line utility:
reproach:~ # umount /dev/hda7
reproach:~ # xm create -c /etc/xen/vm/rhas4
Using config file "/etc/xen/vm/rhas4".
Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
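(Setting the two readings of the kernel path side by side may make the
comparison sharper. Neither line below is new information: the first is the
path verified above on the mounted partition, the second is the literal
string from the config file read against dom0's own root filesystem, and
which of the two readings xm actually applies is exactly the question here.)
   /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1   # verified to exist on the exported hda7 partition
   /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1            # the same string taken as a path in dom0's filesystem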
Some help would be very much appreciated. BTW, the RHAS 4.3 installation
is the original partition. It is not critical, but I would prefer not to
destroy it because I have invested considerable time installing and
configuring its contents. There is, for instance, an Oracle 10g Enterprise
database, app servers, IDEs, etc. I have little doubt that I am putting
that system at some risk, but how much risk, assuming that I don't allow
other domains write access to the guest file system? Also, what will happen
if I try to boot the guest partition outside of Xen (i.e. natively from
grub) after running it as a Xen domain (assuming I ever get beyond the
"kernel image does not exist" stage)?
Having said that, I don't want to let the focus slip to safety issues. My
first priority is just to get off the ground with booting the guest domain.
TIA,
nb
# -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using
# 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================
#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"
# Optional ramdisk.
ramdisk = "/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img"
# The domain build function. Default is 'linux'.
#builder='linux'
# Initial memory allocation (in megabytes) for the new domain.
memory = 512
# A name for your domain. All domains must have different names.
name = "rhas1"
# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = "" # leave to Xen to pick
#cpus = "0" # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5
# Number of Virtual CPUS to use, default is 1
#vcpus = 1
#----------------------------------------------------------------------------
# Define network interfaces.
# By default, no network interfaces are configured. You may have one created
# with sensible defaults using an empty vif clause:
#
# vif = [ '' ]
#
# or optionally override backend, bridge, ip, mac, script, type, or vifname:
#
# vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
#
# or more than one interface may be configured:
#
# vif = [ '', 'bridge=xenbr1' ]
# vif = [ '' ]
vif = [ 'mac=00:16:3e:17:b9:d8' ]
#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.
disk = [ 'phy:hda7,hda7,w' ]
# disk = [ 'phy:vg1/orabase1,/oracle/orabase1,w' ]
# disk = [ 'phy:vg1/oas1,vg1,/oracle/oas1,w' ]
#----------------------------------------------------------------------------
# Define to which TPM instance the user domain should communicate.
# The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
# where INSTANCE indicates the instance number of the TPM the VM
# should be talking to and DOM provides the domain where the backend
# is located.
# Note that no two virtual machines should try to connect to the same
# TPM instance. The handling of all TPM instances does require
# some management effort insofar as VM configuration files (and thus
# a VM) should be associated with a TPM instance throughout the lifetime
# of the VM / VM configuration file. The instance number must be
# greater or equal to 1.
#vtpm = [ 'instance=1,backend=0' ]
#----------------------------------------------------------------------------
# Set the kernel command line for the new domain.
# You only need to define the IP parameters and hostname if the domain's
# IP config doesn't, e.g. in ifcfg-eth0 or via DHCP.
# You can use 'extra' to set the runlevel and custom environment
# variables used by custom rc scripts (e.g. VMID=, usr= ).
# Set if you want dhcp to allocate the IP address.
dhcp="dhcp"
# Set netmask.
netmask="255.255.255.0"
# Set default gateway.
gateway="192.168.1.1"
# Set the hostname.
# hostname= "vm%d" % vmid
hostname = "absolute"
# Set root device.
root = "/dev/hda7 ro"
# Root device for nfs.
#root = "/dev/nfs"
# The nfs server.
#nfs_server = '169.254.1.0'
# Root directory on the nfs server.
#nfs_root = '/full/path/to/root/directory'
# Sets the runlevel.
# extra = "5"
extra = 'TERM=xterm'
#----------------------------------------------------------------------------
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Henning Sprang
2006-Aug-21 12:57 UTC
Re: [Xen-users] Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
Hi,

On 8/18/06, nbitspoken <nbitspoken@comcast.net> wrote:
> [...]
> reproach:~ # cat /etc/xen/vm/rhas4 | grep "kernel ="
> kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"
> reproach:~ # ls /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
> /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1

So the kernel you are trying to boot is in the domU system you want to
boot? I think you got something totally wrong - the kernel to boot the
domU has to be located in the dom0 filesystem!

BTW: your mail is very long and hard to understand, please read
http://www.catb.org/~esr/faqs/smart-questions.html which will help you
get help and others to understand you...

BTW2: what kernel are you trying to boot there? I didn't know RHEL
contains a Xen kernel - aren't they just saying Xen is not usable yet
in the media?

Henning
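A minimal way to act on that, sketched here with the device and file names
from the original post (whether the stock RHEL initrd then behaves cleanly
under the SLES 10 hypervisor is a separate question), is to copy the kernel
and initrd off the guest partition into dom0 and leave kernel= and ramdisk=
pointing at the dom0 copies:

reproach:~ # mount /dev/hda7 /mnt/hda7
reproach:~ # cp /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1 /boot/
reproach:~ # cp /mnt/hda7/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img /boot/
reproach:~ # umount /dev/hda7
# kernel = and ramdisk = in /etc/xen/vm/rhas4 already use /boot/... paths,
# so after the copy they name files that exist in dom0's own filesystem.
reproach:~ # xm create -c /etc/xen/vm/rhas4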
nbitspoken
2006-Aug-23 10:38 UTC
Re: [Xen-users] Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
Henning,

I apologize for the mistake. I am new to this board and clicked on the
wrong link. I was not trying to initiate a private exchange or obtain
"free support." Per your request, below please find the content of my
reply.

Henning Sprang wrote:
> I am not gonna read this mail.
>
> I don't give personal support for free if it's not on the mailing list,
> for the community.
>
> Post this mail to the list and I will read it eventually.
> Or ask me for a price quote for personal, commercial (as opposed to
> community) support.
>
> Henning
>
> nbits wrote:
>
>> Henning,
>>
>> Thanks for responding! :) I'll try to be brief. Here is the quote from
>> the 3.0 manual:
>>
>>   6.1 Exporting Physical Devices as VBDs
>>   One of the simplest configurations is to directly export individual
>>   partitions from domain 0 to other domains. To achieve this use the
>>   phy: specifier in your domain configuration file.
>>
>> If DomU (the guest) had to be on the same "filesystem" (as you say) as
>> the host, then it wouldn't make sense to "export individual partitions
>> from domain 0 to other domains," because a partition *is* a file system
>> (at least once it has been formatted), and the exported file system is
>> certainly not the same as the one on which Dom0 is running! As the
>> language of section 6.1 suggests, running separate partitions as guest
>> domains is a standard or "simplest" usage scenario, and building a
>> filesystem inside a file mounted as a loopback device (as most people
>> seem to be doing at this stage) is arguably a secondary, advanced usage
>> in relation to the "simplest configuration" that I am trying to achieve
>> here. Now it so happens that the physical partition to be exported is
>> not pristine, but virginity is not a requirement here. Domain 0 doesn't
>> care whether or not DomU is also natively bootable or chain loaded or
>> whatever when run outside Xen -- it just needs a kernel, a root
>> filesystem, and an initrd, and all that information is supplied in the
>> guest domain's configuration file (so no separate menu.lst or grub.conf
>> should be needed).
>>
>> I have interleaved the remainder of my reply with the text of your
>> message below.
>>
>> Henning Sprang wrote:
>>
>>> Hi,
>>>
>>> On 8/18/06, nbitspoken <nbitspoken@comcast.net> wrote:
>>>
>>>> [...]
>>>> reproach:~ # cat /etc/xen/vm/rhas4 | grep "kernel ="
>>>> kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"
>>>> reproach:~ # ls /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
>>>> /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
>>>
>>> So the kernel you are trying to boot is in the domU system you want to
>>> boot?
>>
>> *It is on the partition, visible to dom0 as /dev/hda7, also to be
>> exported to domU as "hda7" (the choice of a device name for DomU is
>> arbitrary).*
>>
>>> I think you got something totally wrong - the kernel to boot the domU
>>> has to be located in the dom0 filesystem!
>>
>> *If this were so, then there would be no reason to put device names like
>> /dev/hda in the guest domain configuration file, because as long as we
>> stay in the file system, a system path name will suffice, e.g.,
>> /home/henning/vmlinuz... Neither Xen nor Linux imposes any location
>> requirements on the kernel as long as all the required devices are
>> visible to the bootloader. The only additional requirement imposed by
>> Xen is that the kernel and the initrd are on the same filesystem.*
>>
>>> BTW: your mail is very long and hard to understand, please read
>>> http://www.catb.org/~esr/faqs/smart-questions.html which will help you
>>> get help and others to understand you...
>>
>> *I should not have included the initial part about my hardware -- that
>> was to prevent replies to the effect that I am not giving enough
>> information. In retrospect, I would agree that the first part was
>> superfluous overkill. In the second part, I grep the relevant files to
>> show that the paths are correct. If you don't do much Unix, this might
>> put you off. I also included a copy of my config file, which lengthened
>> the message considerably but provided what I take to be essential
>> information.*
>>
>>> BTW2: what kernel are you trying to boot there? I didn't know RHEL
>>> contains a Xen kernel - aren't they just saying Xen is not usable yet
>>> in the media?
>>
>> *Not in the least, as far as I can see. Numerous independent entities
>> have confirmed Red Hat's support of Xen, and if you have a subscription
>> (I paid $50 for the academic one) you can download prebuilt Xen kernels
>> from RHN.*
>>
>> nb
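For reference, the point this exchange turns on can be annotated directly
on the config lines involved (the trailing comments are glosses, not part
of the original file): with xm create and a paravirtualized guest, the
kernel= and ramdisk= paths are opened by the domain builder running in
dom0, while disk= and root= describe what the new domain sees once it is
running.

   kernel  = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"     # read by dom0 when the domain is built
   ramdisk = "/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img"  # read by dom0 when the domain is built
   disk    = [ 'phy:hda7,hda7,w' ]                        # dom0's /dev/hda7, presented to the guest as hda7
   root    = "/dev/hda7 ro"                               # the root device as the guest kernel sees it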