I realize this might be one of those religious topics, but is there any good rule of thumb as to how to set up storage for a Xen domU? Without really knowing any better, I'm basically going to make a disk image of the OS and then create a 'data' partition on my RAID 5 volume for each virtual machine.

This is for use at home, so in reality it probably doesn't matter -- I just don't want to make any *really* stupid moves. If you need it, general specs on the machine are: quad core, 8GB RAM, a 160GB boot volume and then 4x320GB RAID 5. The 160GB will contain the disk images while the RAID 5 will be allocated for "data" (database, file server, myth, etc).

I guess I at least have a plan :-) ... but I'd be interested in some experienced feedback. Most of the stuff I've come across is how to do these tasks, not about planning.

Thanks!
-Rob

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
> This is for use at home, so in reality it probably doesn't matter -- I just don't want to make any *really* stupid moves. If you need it, general specs on the machine are: quad core, 8GB RAM, a 160GB boot volume and then 4x320GB RAID 5. The 160GB will contain the disk images while the RAID 5 will be allocated for "data" (database, file server, myth, etc).

I have this exact config at home :) The only difference would be the RAID type and controller, I'm sure -- I used a pretty high-end LSI SAS card with SATA drives hung off it. I would seriously recommend LVM, it's so flexible. I would also take that RAID 5 and replace it with mirrors, personally, but you are using it at home, I guess. Most people forget to factor in the downside to RAID 5: slow regens that kill performance while the array is non-redundant during the one failure it can tolerate.

Good luck!
jlc
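For anyone following along, the basic LVM setup on a RAID array goes roughly like this. This is only a sketch: the device name /dev/md0 and all volume and guest names below are example values, not from the thread.

```shell
# Turn the RAID array into an LVM physical volume and volume group
# (/dev/md0 and all names below are examples)
pvcreate /dev/md0
vgcreate vg_raid /dev/md0

# Carve out one logical volume per guest's data
lvcreate -L 40G -n fileserver-data vg_raid
lvcreate -L 100G -n myth-data vg_raid

# The flexibility jlc means: grow a volume later without repartitioning
lvextend -L +20G /dev/vg_raid/fileserver-data
```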
Joseph L. Casale wrote:
>> This is for use at home, so in reality it probably doesn't matter -- I just don't want to make any *really* stupid moves. If you need it, general specs on the machine are: quad core, 8GB RAM, a 160GB boot volume and then 4x320GB RAID 5. The 160GB will contain the disk images while the RAID 5 will be allocated for "data" (database, file server, myth, etc).
>
> I have this exact config at home :) Only difference would be the RAID type and controller I am sure, I used a pretty high end LSI SAS card with SATAs hung off it. I would seriously recommend LVM, it's so flexible, and I would take that RAID 5 and replace it with mirrors personally, but you are using it at home I guess. Most people forget to factor in the downside to RAID 5: slow regens that kill performance while being non-redundant during the only failure it can tolerate.
>
> Good luck!
> jlc

But is there really a performance difference between LVM and file-based VMs?

--
Kind Regards
Rudi Ahlers
CEO, SoftDux
Web: http://www.SoftDux.com
Check out my technical blog, http://blog.softdux.com for Linux or other technical stuff, or visit http://www.WebHostingTalk.co.za for Web Hosting stuff
Rudi Ahlers wrote:
> Joseph L. Casale wrote:
>>> This is for use at home, so in reality it probably doesn't matter --
>>> I just don't want to make any *really* stupid moves.
>>
>> I would seriously recommend LVM, its so flexible
>
> But is there really a performance difference between LVM & file based
> VMs?

Yes, but depending on how you use it, the effect might not be what you expect.

Example 1:
- you have 16G mem: 4G for domU, 12G for dom0
- 1 domU running, with a 4G root filesystem
In this scenario, you'd most likely find the file-based VM performs better, since dom0 will cache domU's root fs.

Example 2:
- you have 8G mem: 7G for 7 domUs (1G each), 1G for dom0
- each domU comes with a 4G root filesystem
In this scenario, the performance of file-based VMs will be lower compared to LVM-based VMs, due to dom0's filesystem operation overhead (including journaling).

There are other pros and cons for file vs LVM. Personally, I prefer LVM with each LV on dom0 mapped to a partition on domU (hda1, sda1, etc) because:
- I can easily convert a domU into a physical machine if necessary
- I can easily mount domU's fs directly on dom0 (e.g. for troubleshooting purposes)
- I can get per-LV I/O statistics from dom0 using tools like iostat
- I can use LVM snapshots to back up domU's fs without shutting it down

This setup also means that I can't create a VM using frontends like virt-manager (which map a file/LV/partition on dom0 to a disk on domU), but that doesn't matter since I prefer to create VMs manually from a prebuilt template anyway :D

Regards,

Fajar
>> But is there really a performance difference between LVM & file-based
>> VMs?
>
> Yes, but depending on how you use it, the effect might not be what you
> expect.
>
> Example 1:
> - you have 16G mem: 4G for domU, 12G for dom0
> - 1 domU running, with 4G root filesystem
> In this scenario, you'd most likely find file-based VM perform better
> since dom0 will cache domU's root fs.

No, this should not happen! Under HVM it did, and maybe still does, but it's bad! It means that writes that domU _thinks_ have been committed to disk may not have been, and really bad things can happen. Do you know for sure that file: based devices use dom0's page cache?

James
Fajar A. Nugraha wrote:
> Rudi Ahlers wrote:
>> But is there really a performance difference between LVM & file based
>> VMs?
>
> Yes, but depending on how you use it, the effect might not be what you
> expect.
> [examples snipped]
> There are other pros and cons for file vs LVM. Personally, I prefer
> LVM with each LV on dom0 maps to a partition on domU (hda1, sda1, etc)
> because:
> - I can easily convert a domU into a physical machine if necessary
> - I can easily mount domU's fs directly on dom0 (e.g. for
> troubleshooting purposes)
> - I can get per-LV I/O statistic from dom0 using tools like iostat
> - I can use LVM snapshot to backup domU's fs without shutting it down

OK, what you're saying does make sense, so....

- If you say you can easily convert a VPS to a physical machine, do you just move the LV to a new machine?
- I can mount an image-based domU as well :)
- But doesn't Xen have I/O stats as well?
- I prefer not to use snapshot backups, since it makes it more difficult to restore just one or two files. We mainly use cPanel & Plesk for our hosting, so I use the control panel's backup over FTP instead.

--
Kind Regards
Rudi Ahlers
CEO, SoftDux
Web: http://www.SoftDux.com
Check out my technical blog, http://blog.softdux.com for Linux or other technical stuff, or visit http://www.WebHostingTalk.co.za for Web Hosting stuff
Rudi Ahlers wrote:
> Fajar A. Nugraha wrote:
>> There are other pros and cons for file vs LVM. Personally, I prefer
>> LVM with each LV on dom0 maps to a partition on domU (hda1, sda1, etc)
>> because:
>> - I can easily convert a domU into a physical machine if necessary
>> - I can easily mount domU's fs directly on dom0 (e.g. for
>> troubleshooting purposes)
>> - I can get per-LV I/O statistic from dom0 using tools like iostat
>> - I can use LVM snapshot to backup domU's fs without shutting it down
>
> OK, what you're saying does make sense, so....
>
> - If you say you can easily convert a VPS to a physical machine, do
> you just move the LV to a new machine?

Depends. The power of this method becomes apparent when each domU has its own VG located on a SAN (or iSCSI). I can simply reassign that VG's disks to a new machine, and set up the appropriate fstab/grub.conf/initrd (I'm only talking about PV Linux here). Note that installing to a partition (each domU on a different disk) has this benefit as well, but LVM is easier to manage when you need to resize.

> - I can mount image based domU as well :)

Yes you can :) This part is actually more a question of assign-to-domU-disk vs assign-to-domU-partition rather than file-based vs LVM-based. If an LV/file is assigned as a domU disk, you need extra steps (usually involving kpartx and/or vgscan) to mount domU's fs.

> - But doesn't Xen have I/O stats as well?

Yes, but AFAIK it shows only the total number of requests. iostat can show each LV's I/O rates (per second) in sectors or in requests. /proc/diskstats can show each LV's total I/O stats.

> - I prefer not to use snapshot backups, since it makes it more
> difficult to restore just one or two files. We mainly use cPanel &
> Plesk for our hosting, so I use the control panel's backup over FTP
> instead

You don't need snapshot backups then.

Regards,

Fajar
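For reference, the LVM snapshot backup discussed above goes roughly like this. A sketch only: the LV, mount point, and archive paths are made up, and the snapshot size just needs to hold the writes that occur during the backup.

```shell
# Create a copy-on-write snapshot of the running domU's root LV
# (all names here are example values)
lvcreate -s -L 1G -n domu1-snap /dev/vg/domu1-root

# Mount it read-only and archive it while the guest keeps running
mount -o ro /dev/vg/domu1-snap /mnt/snap
tar czf /backup/domu1-root.tar.gz -C /mnt/snap .

# Drop the snapshot when done (it fills up as the origin changes)
umount /mnt/snap
lvremove -f /dev/vg/domu1-snap
```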
Ok, so the summary seems to be that images are OK for small stuff (and tinkering) but I really want to partition it out (I assume this becomes stronger as the partition grows). RAID 5 may or may not be a good idea.

My assumption (correct me here) is that the partition/logical volume I create for the Xen domU looks like a disk? Then, Xen will partition that disk out -- and I will create swap, /boot, /, etc. partitions.

Now, to continue on the beginner topics. Using my existing RAID volume + LVM, I created a logical volume for a file server. I've specified it as:

disk = [ "phy:lvm-raid/FileVolGroup,sda1,w" ]

But, apparently, the installer doesn't see it? What am I missing? I've tried it as /dev/lvm-raid/FileVolGroup. I can mount it in dom0 (and then umount it, of course). I saw a description where they formatted it, so I tried that and it still isn't seen. Do I need to go at it via /dev/mapper/lvm--raid-FileVolGroup? Something else?

This has gotta be basic, so basic I'm not finding anything. :-p

Thanks!
-Rob
Rob Greene wrote:
> Ok, so the summary seems to be that images are ok for small stuff (and
> tinkering) but I really want to partition it out (I assume this becomes
> stronger as the partition grows). RAID 5 may or may not be a good idea.

Right.

> My assumption (correct me here) is that the partition/logical volume I
> create for the Xen domU looks like a disk? Then, Xen will partition
> that disk out -- and I will create swap, /boot, /, etc partitions.

Xen doesn't do any partitioning. I suppose it would be possible to create an LV and then partition that block device up further (like you would a hard drive), but I think it is more common to create an LV for each partition you need and just attach it to the Xen VM in the Xen config.

> Now, to continue on the beginner topics. Using my existing RAID volume
> + LVM, I created a logical volume for a file server. I've specified it as:
>
> disk = [ "phy:lvm-raid/FileVolGroup,sda1,w" ]
>
> But, apparently the installer doesn't see it? What am I missing?

If you are using a CD-based install, I think you will have to do HVM for the installer to work right. So you might need a line more like

'phy:/dev/xenhost-lvm/winxp,ioemu:hda,w'

That's from my WinXP guest, but you can change it up for what suits your needs. Of course, I could be wrong on that assumption.

--
Nick Anderson <nick@anders0n.net>
http://www.cmdln.org
http://www.anders0n.net
Nick Anderson wrote:
> Rob Greene wrote:
>> I've specified it as:
>>
>> disk = [ "phy:lvm-raid/FileVolGroup,sda1,w" ]
>>
>> But, apparently the installer doesn't see it?
>
> If you are using a cd based install I think you will have to do HVM
> for the installer to work right.

Not necessarily. Some distros (e.g. RHEL5) support installers for PV.

> So you might need a line more like
> 'phy:/dev/xenhost-lvm/winxp,ioemu:hda,w'

In RHEL 5 at least, anaconda requires the block device/file to be mapped as a disk (e.g. hda, sda), not a partition (e.g. sda1). Try changing it to

disk = [ "phy:lvm-raid/FileVolGroup,sda,w" ]

Having said that, if you're building lots of identical domUs, using a prebuilt template (and mapping the block device as sda1 instead of sda) should be faster than the installer.

Regards,

Fajar
Fajar A. Nugraha wrote on Wed, 30 Apr 2008 10:15:05 +0700:
> In RHEL 5 at least, anaconda requires the block device/file to be mapped
> as disk (e.g hda, sda) not partition (e.g. sda1). Try changing it to
> disk = [ "phy:lvm-raid/FileVolGroup,sda,w" ]

This doesn't work either. I'm just playing around with the same stuff. I have existing Xen VMs on small LVM partitions (= LVs) seen by Xen as xvda. They just contain the OS and a bit of extra space. There's no problem creating a bigger LV for the data, formatting it and then attaching it as sda1 or so to that existing VM and mounting it under /home, for instance. But the next step, creating the Xen VM itself "directly" on an LV, fails.

I first tried a kickstart installation, which failed with missing disk sda. Then I started manually with virt-manager, to see what actually happens. If the LV is already formatted, the installer tells me the install target is currently mounted as a loop device and I need to reformat/reinitialize. If it is not formatted, it tells me xvda is unreadable and I have to initialize it. There's no way to avoid using that LV as an xvda disk. How did you manage to install to an LV without using xvda? I'm trying this on CentOS 5.1 with Xen 3.2.

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
On Tue, Apr 29, 2008 at 10:15 PM, Fajar A. Nugraha <fajar@fajar.net> wrote:
> In RHEL 5 at least, anaconda requires the block device/file to be mapped
> as disk (e.g hda, sda) not partition (e.g. sda1). Try changing it to
> disk = [ "phy:lvm-raid/FileVolGroup,sda,w" ]

Ok, so (at least in CentOS/RHEL?) the Xen domU sees the dom0 partition not as a partition but as a physical disk? sda1, sda, and hda all failed (sda1 and sda were not visible, while hda kept generating installer errors detecting the drives). hda1 shows up as a weirdly named disk (/dev/hda1) that I need to partition out in the domU installer (i.e. /dev/hda11, /dev/hda12, etc).

disk = [ "phy:lvm-raid/FileVolGroup,hda1,w", "phy:lvm-raid/io-swap,hda2,w" ]  # assumption was that these were partitions...

What concerns me is that if/when I want to grow the disk available to a domU (this one in particular is a file server), I can't just grow what shows up as hda1, right? I'd need to add a new "disk" and extend an LVM within the domU? Don't these layers of RAID+LVM (dom0) and Xen block device and LVM (domU) come at a price? I was hoping that the partition got mapped straight into the domU, so I avoided any extra stuff within the domU and would have the added bonus of being able to mount the drives and copy files between them when setting this up.

> Having said that, if you're building lots of identical domUs, using
> prebuilt template (and mapping the block device as sda1 instead of sda)
> should be faster than installer.

If I can only do partitions, how does this work then? Unless all the drives are identical in size and I 'dd' the device??

Thanks!
-Rob
I didn't find a way to directly install on the LV, but it's possible to install like normal to xvda and just use one partition (one could use more, but then it gets more complex). Then attach another LV and copy the system over:

cp -ax /{bin,boot,etc,home,lib,media,mnt,opt,root,sbin,srv,tmp,usr,var} /mnt/vm4

- create dev, proc and sys directories
- change fstab and grub.conf (to use /dev/sda1 or whatever you name it)
- disable selinux in /etc/selinux/config
- xm create the new vm with a config file that fits

This is outlined in more detail here:
http://www.virtuatopia.com/index.php/Building_a_Xen_Virtual_Guest_Filesystem_using_Logical_Volume_Management_%28LVM%29
but you don't have to do all these steps, just what I did.

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
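The steps above can be sketched as a small dom0 script. The sed patterns are assumptions (they presume the original guest used /dev/xvda1 as root); adjust them to what your fstab and grub.conf actually contain, and /mnt/vm4 is just the example mount point for the target LV.

```shell
# Copy the installed system onto the new LV mounted at /mnt/vm4
cp -ax /{bin,boot,etc,home,lib,media,mnt,opt,root,sbin,srv,tmp,usr,var} /mnt/vm4

# Recreate the pseudo-filesystem mount points
mkdir -p /mnt/vm4/{dev,proc,sys}

# Point fstab and grub.conf at the new root device (here /dev/sda1);
# the xvda1 pattern is an assumption about the original guest layout
sed -i 's|/dev/xvda1|/dev/sda1|g' /mnt/vm4/etc/fstab /mnt/vm4/boot/grub/grub.conf

# Disable SELinux in the copied guest
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /mnt/vm4/etc/selinux/config
```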
> Ok, so the summary seems to be that images are ok for small stuff (and
> tinkering) but I really want to partition it out (I assume this becomes
> stronger as the partition grows). RAID 5 may or may not be a good idea.

I think you'll find LVM is a solid choice for running VMs from.

> My assumption (correct me here) is that the partition/logical volume I
> create for the Xen domU looks like a disk? Then, Xen will partition that
> disk out -- and I will create swap, /boot, /, etc partitions.

Two choices:

1) Export the storage as a whole disk device to the guest. This is then partitioned within the guest (usually by the guest installer) and the partitions formatted, to give something like a real disk.

2) As Nick mentioned, it's also possible to export separate devices / files to a domU to appear as individual partitions to the guest. There is no "whole disk" and the partition table is faked by the virtual disk driver.

I'm not sure that 2) is available for fully virtualised (HVM) VMs; I've never tried it. It does work for paravirtualised VMs, however. 1) has the advantage of "looking like a real disk". 2) has the advantage that each partition can easily be directly mounted in dom0 (when the guest is shut down safely, or you'll corrupt things!).

Also, when installing the guest, think about whether you want to use LVM there. Some distros (e.g. RH-alikes) use LVM by default. I prefer not to do this in my domUs, so that it's easier to mount their filesystems in dom0. I don't know enough about LVM to say how you'd do this otherwise.

> Now, to continue on the beginner topics. Using my existing RAID volume +
> LVM, I created a logical volume for a file server. I've specified it as:
>
> disk = [ "phy:lvm-raid/FileVolGroup,sda1,w" ]

That syntax is exporting your logical volume as sda1 in the guest; i.e. you're exporting it as a partition within the guest.

> But, apparently the installer doesn't see it? What am I missing? I've
> tried it as /dev/lvm-raid/FileVolGroup. I can mount it in dom0 (and then
> umount it of course).

Things vary depending on whether you're using PV or HVM. Also, guest installers vary a bit. You might need to provide more information about what you're doing if the others' suggestions haven't already answered your question.

> I saw a description where they formatted it, so I
> tried that and it still isn't seen. Do I need to go at it via the
> /dev/mapper/lvm--raid-FileVolGroup? Something else?
>
> This has gotta be basic, so basic I'm not finding anything. :-p

Sounds like you're progressing OK to me!

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
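As a concrete illustration of the two choices, the disk lines might look like this (a sketch; the LV names are hypothetical):

```
# Choice 1: export a whole virtual disk; the guest installer partitions xvda itself
disk = [ 'phy:/dev/lvm-raid/filevol-disk,xvda,w' ]

# Choice 2: export separate volumes as ready-made partitions
disk = [ 'phy:/dev/lvm-raid/filevol-root,xvda1,w',
         'phy:/dev/lvm-raid/filevol-swap,xvda2,w' ]
```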
>> In RHEL 5 at least, anaconda requires the block device/file to be mapped
>> as disk (e.g hda, sda) not partition (e.g. sda1). Try changing it to
>> disk = [ "phy:lvm-raid/FileVolGroup,sda,w" ]
>
> Ok, so (at least in CentOS/RHEL?) the Xen domU sees the dom0 partition not
> as a partition but as a physical disk?

That depends on whether you put sda (whole disk) or sda1 (partition) in the config quoted above. That's a standard Xen thing, not specific to CentOS/RHEL. What I believe Fajar was saying is that the *installer* used by CentOS/RHEL guests will be unhappy if you try and provide it with separate partitions instead of a whole disk, which limits what you can do.

> sda1, sda, and hda all failed
> (sda1, sda were not visible while hda kept generating installer errors
> detecting the drives). hda1 shows up as weirdly named disk (/dev/hda1)
> that I need to partition out in the domU installer (ie, /dev/hda11,
> /dev/hda12, etc).

Wow. That's ... different to normal! I think we can safely say your installer didn't like that, then!

> disk = [ "phy:lvm-raid/FileVolGroup,hda1,w",
> "phy:lvm-raid/io-swap,hda2,w" ]  # assumption was that these
> were partitions...

They usually are, but it seems your guest's installer is expecting only a whole virtual block device exported to it, not separate partitions. I think that's probably resulting in the weird behaviour described above.

Another thing: assuming you're installing a PV guest, try using xvda instead of hda. I'm not sure if this will solve your problem with the separate partitions, but if you end up falling back to using a whole device it should avoid the installer complaining so much!

> What concerns me is that if/when I want to grow disk available to a domU
> (this one in particular is a file server), I can't just grow what shows up
> as hda1, right? I'd need to add a new "disk" and extend a LVM within the
> domU?

Alternatively, you could extend the whole drive and then resize the partitions within it.

> Don't these layers of RAID+LVM (dom0) and Xen block device and LVM (domU)
> come at a price? I was hoping that the partition got mapped straight into
> the domU so I avoided any extra stuff within the domU and I would have the
> added bonus of being able to mount the drives and copy files between when
> setting this up.

Indeed. Xen supports this; your guest OS installer may not. It's probably possible to "persuade" your guest OS to run on per-partition virtual devices if you really want that, but it would require a bit of extra fiddling. I believe you can access partitions within an arbitrary block device using the kpartx tool, which may be useful for poking into a guest's virtual disk.

>> Having said that, if you're building lots of identical domUs, using
>> prebuilt template (and mapping the block device as sda1 instead of sda)
>> should be faster than installer.
>
> If I can only do partitions, how does this work then? Unless all the
> drives are identical in size and I 'dd' the device??

I think you'd want to use a pre-installed template guest OS (possibly having made modifications to make it run as you want) and then you'd copy it to create new ones instead of doing an install+customise. You'd dd to duplicate contents to other block devices in dom0, or cp to copy a file-based VBD.

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
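Extending a whole-drive-backed guest along those lines would go roughly like this. A sketch only: the LV name is taken from the thread's example, and the in-guest partition step is deliberately left as a comment because the exact fdisk/parted dance depends on the layout; shut the guest down before editing its partition table from dom0.

```shell
# dom0: grow the backing logical volume
lvextend -L +20G /dev/lvm-raid/FileVolGroup

# In the guest (after a restart so it sees the bigger disk):
# grow the last partition with fdisk/parted (recreate it with the same
# start sector), then grow the filesystem to fill it, e.g.
resize2fs /dev/xvda1
```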
Rob Greene wrote:
> Ok, so (at least in CentOS/RHEL?) the Xen domU sees the dom0
> partition not as a partition but as a physical disk?

Mark's post put this in a well-phrased statement: "the *installer* used by CentOS/RHEL guests will be unhappy if you try and provide it with separate partitions instead of a whole disk". It's a Red Hat installer issue, not a Xen issue. In other words, if you want to use the Red Hat installer to set up a domU, you should map it as a disk, not a partition.

> sda1, sda, and hda all failed

Try xvda. That's the default device name used when you use virt-manager.

> What concerns me is that if/when I want to grow disk available to a
> domU (this one in particular is a file server), I can't just grow what
> shows up as hda1, right? I'd need to add a new "disk" and extend a
> LVM within the domU?

Yup.

> Don't these layers of RAID+LVM (dom0) and Xen block device and LVM
> (domU) come at a price?

Around a 3% performance penalty, I believe.

> I was hoping that the partition got mapped
> straight into the domU so I avoided any extra stuff within the domU
> and I would have the added bonus of being able to mount the drives and
> copy files between when setting this up.

The "recommended" way, if you want to use virt-manager and the Red Hat installer, is to have LVM on the domU side only. Meaning:
- you map partitions or disks (not LVM) on dom0 as a whole disk on domU
- the installer will set up LVM (in domU) on that disk
- to extend domU's fs, map another partition/disk to domU as a whole disk, and use LVM on domU to extend the VG and LV

>> Having said that, if you're building lots of identical domUs, using
>> prebuilt template (and mapping the block device as sda1 instead of sda)
>> should be faster than installer.
>
> If I can only do partitions, how does this work then? Unless all the
> drives are identical in size and I 'dd' the device??

Not necessarily :) You can always use tar to copy the files. I have prebuilt images of RHEL4 and RHEL5 domUs. To create a domU, what I did was:

- create two LVs on dom0, e.g. testrootlv and testswaplv
- initialize them: mkfs and mkswap
- create a Xen config file, which is something like

=========================
memory = "250"
disk = [ 'phy:/dev/vg/testrootlv,hda1,w','phy:/dev/vg/testswaplv,hda2,w' ]
vif = [ 'mac=00:16:3E:23:3A:51, bridge=br105', ]
bootloader="/usr/bin/pygrub"
=========================

Note that in the "vif" line, I specify the MAC address and bridge (I set up network bridges manually, but that's another story). You should always define a static, unique MAC address for each domU for production purposes. The partition names can be xvda1 and xvda2 if you want.

- populate the root fs:
  mount /dev/vg/testrootlv /mnt/tmp && (cd /mnt/tmp && wget -O - http://location_of_install_images | tar xvfz -) && cd / && umount /mnt/tmp
- start up the domU. In my setup it always starts with eth0 disabled and an empty root password
- edit /etc/sysconfig/network and /etc/sysconfig/network-scripts/ifcfg-eth0, and set the root password
- reboot the domU

The entire process takes about five minutes. You can convert an installed domU (created using the installer) to a template quite easily, but I won't cover it here. Kai's post shows one way to do it.

Regards,

Fajar
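Fajar's template workflow can be strung together as one dom0 sketch. The tarball URL is his placeholder, and the mount point /mnt/tmp, the VG name "vg", the sizes, and the config path /etc/xen/test.cfg are assumptions for illustration.

```shell
# Create and initialize the two LVs for the new guest
lvcreate -L 4G -n testrootlv vg
lvcreate -L 512M -n testswaplv vg
mkfs.ext3 /dev/vg/testrootlv
mkswap /dev/vg/testswaplv

# Populate the root fs from the prebuilt template tarball
mount /dev/vg/testrootlv /mnt/tmp
wget -O - http://location_of_install_images | tar xzf - -C /mnt/tmp
umount /mnt/tmp

# Boot the guest from its config file, then set the hostname, network,
# and root password inside it as described above
xm create /etc/xen/test.cfg
```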
Kai Schaetzl wrote:
> I didn't find a way to directly install on the LV, but it's possible to
> install like normal to xvda and just use one partition (one could use
> more, but then it gets more complex). Then attach another LV and copy
> the system over:
> cp -ax /{bin,boot,etc,home,lib,media,mnt,opt,root,sbin,srv,tmp,usr,var}
> /mnt/vm4
>
> create dev, proc and sys directories
> change fstab and grub.conf (to use /dev/sda1 or whatever you name it)
> disable selinux in /etc/selinux/config
> xm create the new vm with a config file that fits

Don't forget to change the network configuration as well, and set a new MAC address in your Xen configuration files, or the new image will muck with the network connections of your old image.

Unfortunately, RHEL's installation tools and CD image do not deal well with running 'grub-install' from their CD rescue images. You have to do something clever like duplicate the /boot partition and grub boot loader with tools like 'dd', and keep /boot as your tiny first partition, if you intend to use the pygrub tool normally used with RHEL for booting Xen images. Otherwise you have to keep a copy of the guest kernel on your server: this is a *NASTY* problem if you use the XenSource kernels, which traditionally had the same kernel name for RHEL 4 and RHEL 5 kernels.

> This is outlined in more detail here:
> http://www.virtuatopia.com/index.php/Building_a_Xen_Virtual_Guest_Filesystem_using_Logical_Volume_Management_%28LVM%29
> but you don't have to do all these steps, just what I did.
>
> Kai
Rob Greene wrote on Wed, 30 Apr 2008 08:40:28 -0500:
> Ok, so (at least in CentOS/RHEL?) the Xen domU sees the dom0 partition not
> as a partition but as a physical disk? sda1, sda, and hda all failed (sda1,
> sda were not visible while hda kept generating installer errors detecting
> the drives).

That fails because Xen presents the disk/partition as xvda, not as sda. You should be able to use two partitions in the same way you use two disks: I would think that they are presented to the installer as xvda and xvdb. This will nevertheless result in a Xen virtual disk image on that partition/LV and not in a "raw" file system that you could mount directly in dom0.

> disk = [ "phy:lvm-raid/FileVolGroup,hda1,w",
>          "phy:lvm-raid/io-swap,hda2,w" ]  # assumption was that these
>                                           # were partitions...

Shouldn't this be /dev/lvm-raid/FileVolGroup, assuming that this is a logical volume in your dom0? Btw, you don't need any swap partition. If you want one you can attach it later at any time.

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
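Putting Kai's correction together, the disk line would presumably read as follows - a sketch only, assuming FileVolGroup and io-swap really are logical volumes in a dom0 volume group named lvm-raid:

```
disk = [ 'phy:/dev/lvm-raid/FileVolGroup,hda1,w',
         'phy:/dev/lvm-raid/io-swap,hda2,w' ]
```

The only change from Rob's version is the /dev/ prefix on each phy: path; the guest-side device names are left as he had them.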
Nico Kadel-Garcia wrote on Fri, 02 May 2008 09:10:03 +0100:
> Don't forget to change the network configurations, as well, and set a
> new MAC address in your Xen configuration files. Or the new image will
> muck with the network connections of your old image.

That's the normal measure you have to take with any duplication; I was merely outlining the conversion process ;-)

> Unfortunately, RHEL's installation tools and CD image do not deal well
> with running 'grub-install' from their CD rescue images. You have to do
> something clever like duplicate the /boot partition and grub boot loader
> with tools like 'dd', and keep /boot as your tiny first partition, if
> you intend to use the pygrub tool normally used with RHEL for booting
> Xen images. Otherwise you have to keep a copy of the guest kernel on
> your server: this is a *NASTY* problem if you use the Xensource kernels,
> which traditionally had the same kernel name for RHEL 4 and RHEL 5 kernels.

Not sure what you mean by that. If you look at the original tutorial I followed, you see that it does *not* use a separate /boot partition on the DomU. That would imply that the DomU actually starts with a kernel on Dom0. I tested this on my test case by renaming the /boot directory in the DomU: the boot failed. So it's surely using that kernel and boot directory and not any from Dom0. I did not have to use grub-install at all - for what should I use it?

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
Fajar A. Nugraha wrote on Fri, 02 May 2008 10:01:45 +0700:
> Mark's post put this in a well-phrased statement: "the *installer* used
> by CentOS/RHEL guests will be unhappy if you try and provide it with
> separate partitions instead of a whole disk"

Which OS did you use to install a DomU on the "raw" partition/LV (and not using xvda)? And which installer tool?

Kai

--
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
Sorry about not getting back earlier, but Thursday was a late night of installation madness for me (at work, unfortunately, not at home!).

Anyway, I'm running CentOS 5.1 x86_64 + Xen 3.1, so it sounds (and my experiments agree) like whatever I do, I'm going to get a logical disk for the domU instead of a mounted file system. I did notice a new option (root =) in the config file of the Virtuatopia link Kai posted.

As far as my installer, that would be 'vi' and 'xm create'! I started with the visual one (forgot the name) and haven't used virt-manager (?) at all. I figure once I have a base config, laying out the disk isn't a problem; then I copy the config, tweak the network or whatever, and go from there. Unless you mean the domU Linux installer, and that would be Anaconda.

What I found interesting is that if I allocate something in dom0, format it, and then add it as sda3/sdb1 for my domU, 'fdisk -l' reports it as having an invalid partition table. So maybe it's not the installer after all... and what comes with CentOS only takes a disk.

Now I just need to play with the RAID 5 config and see if it's OK for me or not. :-)

Thanks!
-Rob

P.S. Kai - The only thing other than xvda that worked was sda1 (which is still a disk, so the partitions get goofy).