Hi all,

I had success installing and running Windows as a domU on Ubuntu amd64 / Xen 3.0.2-2 using an image file:

disk = [ 'file:/vserver/xen-images/winxp.img,ioemu:hda,w' ]

I have another disk where I installed Windows directly (not using Xen) and I would like to run it as a domU from its own disk and partition. Is it possible?

Here is my configuration:

BIOS boot order:
1. cdrom (/dev/hdb)
2. SATA disk (/dev/sda)
3. EIDE disk (/dev/hda)

Grub menu (at /dev/sda):
  Ubuntu 6.06 amd64 with kernel 2.6.15.23, root=/dev/sda1
  Xen 3.0.2 with kernel 2.6.16.13-xen, root=/dev/sda1
  Windows XP:
    map (hd0,0) (hd1,0)
    map (hd1,0) (hd0,0)
    rootnoverify (hd1,0)

I tried with:

disk = [ 'phy:hda1,hda1,w' ]
cdrom='/dev/hdb'
boot='c'

But it boots from the cdrom, not from the hard disk. Am I missing some configuration variable?

-- Z24
(cc to xen-list)

>> Who gives HVM support, the "ioemu:" string?
>
> Xen gives the HVM support.
> There are two kinds of installation method for a guest OS:
>
> 1. common config
>    (seems like yours)
>
> 2. HVM support config
>    /etc/xen/xmexample.hvm
>
> Try to use the second method.

I am using xmexample.hvm:

kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'
memory = 512
name = "winxp"
vif = [ 'type=ioemu, bridge=xenbr0' ]
disk = ... see below
on_poweroff = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
cdrom='/dev/hdb'
boot='c'
sdl=1
vnc=0
vncviewer=0
stdvga=0
serial='pty'
ne2000=0

The original disk variable was:

#disk = [ 'phy:hda1,hda1,r' ]
disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w' ]

(I summarize for the list.) I would like to run a natively installed Windows as a domU from its own partition (a primary partition of an EIDE disk, set as the second disk in the BIOS boot order).

If I don't use "ioemu:", as in the commented line from xmexample.hvm, I get "Could not read the boot disk".

I got something working, but hanging, with these values:

- phy:/dev/hda1,ioemu:hda
  qemu sees the hard disk and tries to boot: "ata0 master: QEMU HARDDISK ATA-2 (23838 MBytes). Booting from hard disk", but it hangs there. domU state is ------.

- phy:/dev/hda,ioemu:hda
  It reports booting from hard disk, then displays the Windows bootloader menu (Normal or Safe mode); I choose one and Windows hangs. domU state is r-----.

- phy:hda1,ioemu:hda and phy:hda,ioemu:hda
  Xen says "Started domain winxp" but the domU window is not opened. domU state is -b----.

These are all the combinations I tried:

phy:hda1      ,hda1        Opens domU but "Could not read the boot disk"
phy:/dev/hda1 ,hda1        Opens domU but "Could not read the boot disk"
phy:hda1      ,ioemu:hda1  "Error: hvm: for qemu vbd type=file&dev=hda~hdd". No domU started.
phy:/dev/hda1 ,ioemu:hda1  "Error: hvm: for qemu vbd type=file&dev=hda~hdd". No domU started.
phy:hda1      ,hda         Opens domU but "Could not read the boot disk"
phy:/dev/hda1 ,hda         Opens domU but "Could not read the boot disk"
phy:hda1      ,ioemu:hda   "Started domain winxp" but no window opened. State is -b----.
phy:/dev/hda1 ,ioemu:hda   "ata0 master: QEMU HARDDISK ATA-2 (23838 MBytes). Booting from hard disk" but hangs. State is ------.
phy:hda       ,hda         Opens domU but "Could not read the boot disk"
phy:/dev/hda  ,hda         Opens domU but "Could not read the boot disk"
phy:hda       ,ioemu:hda   "Started domain winxp" but no window opened. State is -b----.
phy:/dev/hda  ,ioemu:hda   Boots from hd, displays the Windows bootloader menu but hangs. State is r-----.
phy:hda       ,hda1        Opens domU but "Could not read the boot disk"
phy:/dev/hda  ,hda1        Opens domU but "Could not read the boot disk"
phy:hda       ,ioemu:hda1  "Error: hvm: for qemu vbd type=file&dev=hda~hdd". No domU started.
phy:/dev/hda  ,ioemu:hda1  "Error: hvm: for qemu vbd type=file&dev=hda~hdd". No domU started.

Where is the problem? In Xen, in qemu, or in Windows wanting to be the first disk?

-- Z24
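For readers following the syntax being exercised above: each entry in the Xen disk list has the form '<backend>,<device as seen by the guest>,<mode>', where the backend is 'phy:' for a dom0 block device or 'file:' for a loopback image, and the 'ioemu:' prefix on the guest device hands the disk to qemu-dm for an HVM guest. The sketch below only illustrates that format for the setup described in this thread; the config path and the spelled-out qemu-dm path are assumptions, and this is not a confirmed fix for the hang reported above.

  # Hypothetical HVM config handing the whole EIDE disk (MBR and all) to
  # qemu-dm; whether Windows then boots is exactly what this thread is about.
  cat > /etc/xen/winxp-native.cfg <<'EOF'
  kernel = "/usr/lib/xen/boot/hvmloader"
  builder = 'hvm'
  device_model = '/usr/lib/xen/bin/qemu-dm'
  memory = 512
  name = "winxp-native"
  vif = [ 'type=ioemu, bridge=xenbr0' ]
  # phy: = dom0 block device; ioemu:hda = first IDE disk as seen by qemu.
  # Passing the whole disk (not a partition) gives the guest an MBR to boot.
  disk = [ 'phy:/dev/hda,ioemu:hda,w' ]
  boot = 'c'
  sdl = 1
  EOF

  xm create /etc/xen/winxp-native.cfg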
Ligesh
2006-Aug-23 08:49 UTC
[Xen-users] Differences in performance between file and LVM based images.
Hello folks,

How much is the performance penalty if you use a file instead of an LVM volume? Have there been any benchmarks on this?

Thanks.

-- :: Ligesh :: http://ligesh.com
Petersson, Mats
2006-Aug-23 10:33 UTC
RE: [Xen-users] Differences in performance between file and LVM based images.
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Ligesh
> Sent: 23 August 2006 09:50
> To: xen-users@lists.xensource.com
> Subject: [Xen-users] Differences in performance between file
> and LVM based images.
>
> Hello folks,
>
> How much is the performance penalty if you use a file
> instead of an LVM volume? Have there been any benchmarks on this?

I haven't done any benchmarks, but I would like to point out that it would depend VERY MUCH on exactly what the application running on top of the VM is.

-- Mats
Ligesh
2006-Aug-23 15:48 UTC
[Xen-users] Re: Differences in performance between file and LVM based images.
"Petersson, Mats" <Mats.Petersson@amd.com> said.> > > > Hello folks, > > > > How much is the performance penalty if you use a file > > instead of an LVM? Has there been any benchmarks on this? > > I haven''t done any benchmarks, but I would like to point out that it > would depend VERY MUCH on exactly what the application running on top of > the VM is. >And how will that depend on the application? Disk access is pretty much about the read/write/seek times right? Do you mean that there are some applications that will be faster on file based disks, and some others that faster on LVMs? Thanks a lot for your time. -- :: Ligesh :: http://ligesh.com _______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Petersson, Mats
2006-Aug-23 15:52 UTC
RE: [Xen-users] Re: Differences in performance between file and LVM based images.
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Ligesh
> Sent: 23 August 2006 16:49
> To: Petersson, Mats
> Cc: xen-users@lists.xensource.com
> Subject: [Xen-users] Re: Differences in performance between
> file and LVM based images.
>
> "Petersson, Mats" <Mats.Petersson@amd.com> said:
>
> > > Hello folks,
> > >
> > > How much is the performance penalty if you use a file
> > > instead of an LVM volume? Have there been any benchmarks on this?
> >
> > I haven't done any benchmarks, but I would like to point out that it
> > would depend VERY MUCH on exactly what the application running on top
> > of the VM is.
>
> And how will that depend on the application? Disk access is
> pretty much about read/write/seek times, right? Do you
> mean that there are some applications that will be faster on
> file-based disks, and others that are faster on LVM volumes?

Probably yes. How much? Don't know.

But I was more referring to the fact that different applications do different things to disks in the first place, so the application behaviour may depend on "seek time" or "write time" or "read time" in different proportions [1], so just using "hdparm" or something like that wouldn't really be a useful measure of how some particular application will perform on any given setup.

If you do make some measurements to compare different setups (preferably with several different benchmarks that all depend on disk performance), I'd be very interested to see the results.

Note also that there is a new interface for file-based IO called blktap, which I believe is a bit better than the old-style block device backend driver...

-- Mats
Ligesh
2006-Aug-23 17:01 UTC
[Xen-users] Re: Differences in performance between file and LVM based images.
On Wed, Aug 23, 2006 at 05:52:13PM +0200, Petersson, Mats wrote:
>
> Probably yes. How much? Don't know.
>
> But I was more referring to the fact that different applications do
> different things to disks in the first place, so the application
> behaviour may depend on "seek time" or "write time" or "read time" in
> different proportions [1], so just using "hdparm" or something like that
> wouldn't really be a useful measure of how some particular application
> will perform on any given setup.

Now that I have given it some thought, it seems to me that there are going to be some performance issues with files, and they might even be severe. For each seek, control has to go through the ext3 driver, which is the only one that knows how the file is structured. So if you are doing a seek on a loop device, the OS needs the help of the ext3 driver to translate this into a position inside the file.

The steps involved in doing an operation on a loop device would be:

1) Linux will have to first locate the file on the main filesystem. (Or does Linux use the file's hard disk position as the identifier for the loop device?...)

2) Then it has to find out how the file is structured on the hard disk.

3) To read/write anything, it will again need the entire ext3 logic for the file structure.

Anyway, some benchmarks would be great. And I think this should be explicitly mentioned in the documentation. The primary purpose of virtualization is to squeeze the maximum out of hardware, and so we cannot really afford performance penalties arising from wrong implementation decisions.

I will see if I can do some benchmarks.

Thanks.

-- :: Ligesh :: http://ligesh.com
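For reference, the kind of file-backed disk being discussed is typically set up roughly as follows; the path, size and filesystem are illustrative, not taken from the thread.

  # Create a sparse 1 GB backing file on the dom0 ext3 filesystem
  dd if=/dev/zero of=/var/images/domu-disk.img bs=1M count=0 seek=1024

  # Put a filesystem on it via a loop device (essentially what xend does
  # behind the scenes for a 'file:' disk entry): every access goes through
  # the loop driver plus the ext3 code that maps file offsets to blocks
  # on the physical disk
  losetup /dev/loop0 /var/images/domu-disk.img
  mkfs.ext3 /dev/loop0
  losetup -d /dev/loop0

  # In the guest config the same file would then appear as, e.g.:
  #   disk = [ 'file:/var/images/domu-disk.img,hda1,w' ]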
Jeff Lane
2006-Aug-23 17:03 UTC
Re: [Xen-users] Re: Differences in performance between file and LVM based images.
For what it's worth, it should also be fairly obvious that using files for the domUs becomes increasingly poor in performance as you add domUs.

For an extreme case of this, I had 60 domUs going on a 4-node IBM x460 under SLES10/Xen. All the domUs used files for their filesystems, and all of those were stored on a multiple-disk LVM volume that was spread across three of the nodes. All 60 of the domUs were bootable and did in fact run, BUT the access time was so slow that just logging in to one of them took a considerable amount of time.

I have also observed that the type of disks you are using as storage space can greatly affect the apparent performance. A good example was running 6 domUs using files instead of physical disks on a SATA RAID array vs running the same things off a SCSI RAID array using 15K RPM disks...

Personally, I kinda expected that result, and just for the grain-of-salt factor: I did not use any real benchmarking to observe this; this is all just my observation of the usability of the domUs in various storage configurations... Also, these were all paravirtualized domUs... no full virtualization was done...

On 8/23/06, Ligesh <myself@ligesh.com> wrote:
> On Wed, Aug 23, 2006 at 05:52:13PM +0200, Petersson, Mats wrote:
> >
> > Probably yes. How much? Don't know.
> >
> > But I was more referring to the fact that different applications do
> > different things to disks in the first place, so the application
> > behaviour may depend on "seek time" or "write time" or "read time" in
> > different proportions [1], so just using "hdparm" or something like that
> > wouldn't really be a useful measure of how some particular application
> > will perform on any given setup.

--
------------------> Jeffrey Lane - W4KDH <-------------------
www.jefflane.org
Another cog in the great Corporate Wheel

The internet has no government, no constitution, no laws, no rights, no police, no courts. Don't talk about fairness or innocence, and don't talk about what should be done. Instead, talk about what is being done and what will be done by the amorphous unreachable undefinable blob called "the internet user base." -Paul Vixie
Petersson, Mats
2006-Aug-23 17:07 UTC
[Xen-users] RE: Differences in performance between file and LVM based images.
> -----Original Message-----
> From: Ligesh [mailto:myself@ligesh.com]
> Sent: 23 August 2006 18:02
> To: Petersson, Mats
> Cc: xen-users@lists.xensource.com
> Subject: Re: Differences in performance between file and LVM
> based images.
>
> On Wed, Aug 23, 2006 at 05:52:13PM +0200, Petersson, Mats wrote:
> >
> > Probably yes. How much? Don't know.
> >
> > But I was more referring to the fact that different applications do
> > different things to disks in the first place, so the application
> > behaviour may depend on "seek time" or "write time" or "read time" in
> > different proportions [1], so just using "hdparm" or something like
> > that wouldn't really be a useful measure of how some particular
> > application will perform on any given setup.
>
> Now that I have given it some thought, it seems to me that there are
> going to be some performance issues with files, and they might even be
> severe. For each seek, control has to go through the ext3 driver, which
> is the only one that knows how the file is structured. So if you are
> doing a seek on a loop device, the OS needs the help of the ext3 driver
> to translate this into a position inside the file.
>
> The steps involved in doing an operation on a loop device would be:
>
> 1) Linux will have to first locate the file on the main filesystem. (Or
> does Linux use the file's hard disk position as the identifier for the
> loop device?...)

Finding the file should only happen ONCE, when it's being opened (whether this happens in a loop-back filesystem driver or otherwise). So unless you're starting and stopping the virtual machine or mounting/unmounting a device, it's not really relevant to the overhead of using files vs. some other storage for virtual disks.

> 2) Then it has to find out how the file is structured on the hard disk.

Yes, that's true.

> 3) To read/write anything, it will again need the entire ext3 logic for
> the file structure.

Yes. But hopefully this is fairly efficient. And it's quite feasible for the ext3 system to cache data about where things are kept and how to get at them quickly. There's certainly overhead with using a filesystem-based virtual disk, but I don't think it's a HUGE overhead.

> Anyway, some benchmarks would be great. And I think this should be
> explicitly mentioned in the documentation. The primary purpose of
> virtualization is to squeeze the maximum out of hardware, and so we
> cannot really afford performance penalties arising from wrong
> implementation decisions.

Yes, but my original point is that it's not going to be straightforward to say under which circumstances the administrator should choose what.

I also don't agree that virtualization is (always) about squeezing the most performance out of the hardware. That is one use-case for virtualization [for example, merging multiple servers into one physical machine], but there are many other use-cases where other features given by virtualization are much more important. One of those would be security, and the ability to migrate a guest from one physical machine to another.

> I will see if I can do some benchmarks.

Please do.
JHJE (Jan Holst Jensen)
2006-Aug-23 18:54 UTC
RE: [Xen-users] Re: Differences in performance between file and LVM based images.
> Now that I have given it some thought, it seems to me that there are
> going to be some performance issues with files, and they might even be
> severe. For each seek, control has to go

FWIW I started out using file-backed domUs and had really bad performance in dom0(!). I saw huge memory consumption and a lot of activity in dom0, and my understanding is that dom0 was spending an awful lot of time managing the file cache for all the open backing files for the various domUs. dom0 took quite some time to respond when I wanted to log in to it, so that's how I noticed. This was with 5 domUs under low I/O load.

I switched to LVM and have had good performance since then.

This was on Xen 2.0.6 - never tried file-backing in Xen 3.

Cheers
-- Jan Holst Jensen, Novo Nordisk A/S, Denmark
Ligesh
2006-Aug-23 19:05 UTC
[Xen-users] Re: Differences in performance between file and LVM based images.
On Wed, Aug 23, 2006 at 07:07:05PM +0200, Petersson, Mats wrote:
>
> > Anyway, some benchmarks would be great. And I think this should be
> > explicitly mentioned in the documentation. The primary purpose of
> > virtualization is to squeeze the maximum out of hardware, and so we
> > cannot really afford performance penalties arising from wrong
> > implementation decisions.
>
> I also don't agree that virtualization is (always) about squeezing the
> most performance out of the hardware. That is one use-case for
> virtualization [for example, merging multiple servers into one physical
> machine], but there are many other use-cases where other features given
> by virtualization are much more important. One of those would be
> security, and the ability to migrate a guest from one physical machine
> to another.

With LVM the only issue is in setting it up. You will need to do some jumping through hoops to get / on LVM, but after that it is as flexible as, and sometimes even easier to manage than, plain files. So given that they are similar in terms of all other functionality, to me it makes sense to use LVM, since its performance improvement comes without any other drawbacks.

'Squeezing the max out of hardware' sounds cheap. What I meant was 'improving hardware utilization', which I think is the declared aim of Xen, as seen on xensource.com. Anyway, whatever the scenario, creating unnecessary overhead is a bad idea. So maybe everyone can standardize on LVM. The only problem is that DCs by default don't have LVM, but as it becomes more popular, this would also be a non-issue.

Having a small '/boot' and configuring everything else as an LV seems a great idea to me, especially if it can be done at boot time. That is generally a good idea anyway, and not just for Xen.

Thanks.

-- :: Ligesh :: http://ligesh.com
JHJE (Jan Holst Jensen)
2006-Aug-23 19:20 UTC
RE: [Xen-users] Re: Differences in performance between file and LVM based images.
> With LVM the only issue is in setting it up. You will need to do some
> jumping through hoops to get / on LVM, but after that it is as flexible
> as, and sometimes even easier to manage than, plain files. So given that
> they are similar in terms of all other functionality, to me it makes
> sense to use LVM, since its performance improvement comes without any
> other drawbacks.

I totally agree. To make life easier for myself configuration-wise I have '/' and all other dom0 stuff on a normal SCSI (RAID) block device. The domUs are however served from other (logical) disks managed by LVM as a single volume group.

sc31:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2      12G  1,5G  9,1G  14% /
tmpfs                  65M     0   65M   0% /dev/shm
/dev/cciss/c0d0p1     250M   31M  206M  13% /boot
/dev/cciss/c0d0p7      45G   33M   43G   1% /home
/dev/cciss/c0d0p3     7,4G   33M  7,0G   1% /tmp
/dev/cciss/c0d1p1      67G   39G   26G  61% /home/xen/staging

sc31:~# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  vm-disks   1  20   0 wz--n  136,72G 11,23G
sc31:~#

Cheers
-- Jan Holst Jensen, Novo Nordisk A/S, Denmark
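Carving a new guest disk out of a volume group like the one shown above typically looks like the following; the volume group name matches the listing, but the size and guest name are made up.

  # Create a 5 GB logical volume for a new guest in the 'vm-disks' VG
  lvcreate -L 5G -n domu-new vm-disks

  # Put a filesystem on it from dom0 (for a paravirtualised guest)
  mkfs.ext3 /dev/vm-disks/domu-new

  # Reference it in the guest config as a physical device, e.g.:
  #   disk = [ 'phy:/dev/vm-disks/domu-new,sda1,w' ]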
Fabian Holler
2006-Aug-24 11:33 UTC
Re: [Xen-users] Running an already installed Windows as domU
Hello Z24,

On 16.08.2006 16:09, Z24 wrote:
> I have another disk where I installed Windows directly (not using Xen)
> and I would like to run it as a domU from its own disk and partition.
> Is it possible?

If you get it to work, please write to this mailing list how you did it. I'm very interested in it, too :) It would be great to import existing Windows installations into Xen.

Thank you

Greetings
Fabian
Alex Iribarren
2006-Aug-24 13:57 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi all,

Nobody seems to want to do these benchmarks, so I went ahead and did them myself. The results were pretty surprising, so keep reading. :)

-- Setup --
Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL, 1x SATA Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2GB)
Dom0 and DomU: Gentoo/x86/2006.0, gcc-3.4.6, glibc-2.3.6-r4, 2.6.16.26-xen i686, LVM compiled as a module
IOZone version: 3.242

Contents of the VM config file:

name = "gentoo";
memory = 1024;
vcpus = 4;

kernel = "/boot/vmlinuz-2.6.16.26-xenU";
builder = "linux";

disk = [ 'phy:/dev/xenfs/gentoo,sda1,w', 'phy:/dev/xenfs/test,sdb,w', 'file:/mnt/floppy/testdisk,sdc,w' ];
root = "/dev/sda1 rw";

#vif = [ 'mac=aa:00:3e:8a:00:61' ];
vif = [ 'mac=aa:00:3e:8a:00:61, bridge=xenbr0' ];
dhcp = "dhcp";

-- Procedure --
I created a partition, an LVM volume and a file, all of approx. 1GB, and I created ext3 filesystems on them with the default settings. I then ran IOZone from dom0 on all three "devices" to get the reference values. I booted my domU with the LVM volume and the file exported and reran IOZone. All filesystems were recreated before running the benchmark. Dom0 was idle while the domU was running the benchmark, and there were no VMs running while I ran the benchmark on dom0.

IOZone was run with the following command line:

iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f <file to test>

This basically means that we want to run the test on a 900MB file using 256k as the record size. We want to test sequential write and rewrite (-i0), sequential read and reread (-i1) and random write and read (-i2). We want to get some random accesses (-K) during testing to make this a bit more real-life. Also, we want to use synchronous writes (-o) and take buffer flushes into account in the timings (-e); -M just records the machine information in the output.

-- Results --
The first three entries (dom0 *) are the results for the benchmark run from dom0, so they give an idea of expected "native" performance (dom0 Part.) and the performance of using LVM or loopback devices. The last three entries are the results as seen from within the domU.

"Device"     Write        Rewrite      Read          Reread
dom0 Part.   32.80 MB/s   35.92 MB/s   2010.32 MB/s  2026.11 MB/s
dom0 LVM     43.42 MB/s   51.64 MB/s   2008.92 MB/s  2039.40 MB/s
dom0 File    55.25 MB/s   65.20 MB/s   2059.91 MB/s  2052.45 MB/s
domU Part.   31.29 MB/s   34.85 MB/s   2676.16 MB/s  2751.57 MB/s
domU LVM     40.97 MB/s   47.65 MB/s   2645.21 MB/s  2716.70 MB/s
domU File    241.24 MB/s  43.58 MB/s   2603.91 MB/s  2684.58 MB/s

"Device"     Random read   Random write
dom0 Part.   2013.73 MB/s  26.73 MB/s
dom0 LVM     2011.68 MB/s  32.90 MB/s
dom0 File    2049.71 MB/s  192.97 MB/s
domU Part.   2723.65 MB/s  25.65 MB/s
domU LVM     2686.48 MB/s  30.69 MB/s
domU File    2662.49 MB/s  51.13 MB/s

According to these numbers, file-based filesystems are generally the fastest of the three alternatives. I'm having a hard time understanding how this can possibly be true, so I'll let the more knowledgeable members of the mailing list enlighten us. My guess is that the extra layers (LVM/loopback drivers/Xen) are caching stuff and ignoring IOZone when it tries to write synchronously. Regardless, it seems like file-based filesystems are the way to go. Too bad, I prefer LVM...

Cheers,
Alex
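For anyone reproducing the setup described above, preparing the three roughly 1GB backends might look like this; the partition, volume group name and paths are illustrative, not the ones actually used in the test.

  # ~1GB scratch partition (assumed to exist already), LVM volume and file,
  # each given a default ext3 filesystem as described above
  mkfs.ext3 /dev/sda5

  lvcreate -L 1G -n test xenvg
  mkfs.ext3 /dev/xenvg/test

  dd if=/dev/zero of=/mnt/scratch/testdisk bs=1M count=1024
  mkfs.ext3 -F /mnt/scratch/testdisk

  # Mount each one in turn and run the same IOZone invocation against it:
  #   iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f <mountpoint>/iozone.tmp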
Petersson, Mats
2006-Aug-24 14:06 UTC
RE: [Xen-users] Differences in performance between file and LVM based images
Alex, Can I first say "Thanks for doing this, and for sharing". More comments below.> -----Original Message----- > From: xen-users-bounces@lists.xensource.com > [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of > Alex Iribarren > Sent: 24 August 2006 14:58 > To: xen-users@lists.xensource.com > Subject: Re: [Xen-users] Differences in performance between > file and LVM based images > > Hi all, > > Nobody seems to want to do these benchmarks, so I went ahead and did > them myself. The results were pretty surprising, so keep reading. :) > > -- Setup -- > Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL, 1x SATA > Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2G) > Dom0 and DomU: Gentoo/x86/2006.0, gcc-3.4.6, glibc-2.3.6-r4, > 2.6.16.26-xen i686, LVM compiled as a module > IOZone version: 3.242 > Contents of VM config file: > name = "gentoo"; > memory = 1024; > vcpus = 4; > > kernel = "/boot/vmlinuz-2.6.16.26-xenU"; > builder = "linux"; > > disk = [ ''phy:/dev/xenfs/gentoo,sda1,w'', ''phy:/dev/xenfs/test,sdb,w'', > ''file:/mnt/floppy/testdisk,sdc,w'' ]; > root = "/dev/sda1 rw"; > > #vif = [ ''mac=aa:00:3e:8a:00:61'' ]; > vif = [ ''mac=aa:00:3e:8a:00:61, bridge=xenbr0'' ]; > dhcp = "dhcp"; > > > -- Procedure -- > I created a partition, an LVM volume and a file, all of > aprox. 1GB, and > I created ext3 filesystems on them with the default settings. > I then ran > IOZone from dom0 on all three "devices" to get the reference values. I > booted my domU with the LVM and file exported and reran IOZone. All > filesystems were recreated before running the benchmark. Dom0 was idle > while domU was running the benchmark, and there were no VMs running > while I ran the benchmark on dom0. > > IOZone was run with the following command line: > iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f <file to test> > This basically means that we want to run the test on a 900MB > file using > 256k as the record size. We want to test sequential write and rewrite > (-i0), sequential read and reread (-i1) and random write and > read (-i2). > We want to get some random accesses (-K) during testing to make this a > bit more real-life. Also, we want to use synchronous writes (-o) and > take into account buffer flushes (-M). > > -- Results -- > The first three entries (* control) are the results for the benchmark > from dom0, so they give an idea of expected "native" > performance (Part. > control) and the performance of using LVM or loopback > devices. The last > two entries are the results as seen from within the domU. > > "Device" Write Rewrite Read Reread > dom0 Part. 32.80 MB/s 35.92 MB/s 2010.32 MB/s 2026.11 MB/s > dom0 LVM 43.42 MB/s 51.64 MB/s 2008.92 MB/s 2039.40 MB/s > dom0 File 55.25 MB/s 65.20 MB/s 2059.91 MB/s 2052.45 MB/s > domU Part. 31.29 MB/s 34.85 MB/s 2676.16 MB/s 2751.57 MB/s > domU LVM 40.97 MB/s 47.65 MB/s 2645.21 MB/s 2716.70 MB/s > domU File 241.24 MB/s 43.58 MB/s 2603.91 MB/s 2684.58 MB/s > > "Device" Random read Random write > dom0 Part. 2013.73 MB/s 26.73 MB/s > dom0 LVM 2011.68 MB/s 32.90 MB/s > dom0 File 2049.71 MB/s 192.97 MB/s > domU Part. 2723.65 MB/s 25.65 MB/s > domU LVM 2686.48 MB/s 30.69 MB/s > domU File 2662.49 MB/s 51.13 MB/s > > According to these numbers, file-based filesystems are generally the > fastest of the three alternatives. I''m having a hard time > understanding > how this can possibly be true, so I''ll let the more knowledgeable > members of the mailing list enlighten us. 
My guess is that the extra > layers (LVM/loopback drivers/Xen) are caching stuff and > ignoring IOZone > when it tries to write synchronously. Regardless, it seems like > file-based filesystems are the way to go. Too bad, I prefer LVMs...Yes, you''ll probably get file-caching on Dom0 when using file-based setup, which doesn''t happen on other setups. The following would be interesting to also test: 1. Test with noticably larger test-area (say 10GB or so). 2. Test multiple domains simultaneously to see if file-based approach is still the fastest in this approach. 3. Test the new (unstable) Blktap model. -- Mats> > Cheers, > Alex > >_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
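Point 2 could be scripted along these lines; the guest hostnames and the assumption of passwordless ssh access to each guest are illustrative, not something from the thread.

  #!/bin/bash
  # Kick off the same IOZone run in several guests at once and wait for all
  # of them, so the different backends are stressed simultaneously.
  for guest in domu1 domu2 domu3 domu4; do
      ssh root@"$guest" \
          "iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f /mnt/test/iozone.tmp" \
          > "result-$guest.txt" &
  done
  wait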
Javier Guerra
2006-Aug-24 14:25 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
On Thursday 24 August 2006 9:06 am, Petersson, Mats wrote:
> Yes, you'll probably get file-caching in Dom0 when using a file-based
> setup, which doesn't happen with the other setups.

It's also interesting to see that even in the 'base' tests, throughput is higher on LVM than on the partition, and higher on the file than on LVM. That surely means that this specific test benefits from extra levels of abstraction/reordering/caching.

> The following would be interesting to also test:
> 1. Test with a noticeably larger test area (say 10GB or so).

1GB devices are much bigger than the usual allocation units (4M for LVM, 512 bytes to 4K for files), but with 8GB of RAM it really looks small. Also, recently there have been some questions about minimal RAM for dom0, with 64M-256M common on x86-32 and 256M-512M on x86-64 (and corresponding laments about the increased requirements).

> 2. Test multiple domains simultaneously to see if the file-based approach
> is still the fastest.

This would be really interesting. I would guess that on single-drive setups, both LVM and partitions would suffer from the longer head seeks. In some cases it might even help if the file backings are heavily fragmented!

> 3. Test the new (unstable) blktap model.

Blktap? Care to expand on this?

-- Javier
Roger Lucas
2006-Aug-24 14:31 UTC
RE: [Xen-users] Differences in performance between file and LVM based images
Hi Alex,

May I also express my thanks for these benchmarks, but some are unlikely to be truly representative of the relative disk performance.

> > -- Setup --
> > Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL, 1x SATA
> > Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2GB)

According to WD's site, your HDD's maximum _sustained_ read or write performance is 61MB/s. You may get more if you hit caches on read or write, but if you get numbers bigger than 61MB/s on data that is not expected to be in the cache (e.g. a properly constructed disk benchmark) then I would be suspicious (unless you are deliberately trying to test the disk caching performance).

<snip>

> > -- Results --
> > The first three entries (dom0 *) are the results for the benchmark run
> > from dom0, so they give an idea of expected "native" performance
> > (dom0 Part.) and the performance of using LVM or loopback devices.
> > The last three entries are the results as seen from within the domU.
> >
> > "Device"     Write        Rewrite      Read          Reread
> > dom0 Part.   32.80 MB/s   35.92 MB/s   2010.32 MB/s  2026.11 MB/s
> > dom0 LVM     43.42 MB/s   51.64 MB/s   2008.92 MB/s  2039.40 MB/s
> > dom0 File    55.25 MB/s   65.20 MB/s   2059.91 MB/s  2052.45 MB/s
> > domU Part.   31.29 MB/s   34.85 MB/s   2676.16 MB/s  2751.57 MB/s
> > domU LVM     40.97 MB/s   47.65 MB/s   2645.21 MB/s  2716.70 MB/s
> > domU File    241.24 MB/s  43.58 MB/s   2603.91 MB/s  2684.58 MB/s

The domU file write at 241.24 MB/s looks more than slightly suspicious, since your disk can only do 61MB/s. I suspect that the writes are being cached in dom0 (because you have lots of RAM) and are distorting the true disk access speeds. You have 2GB of RAM in Dom0 and your test is only 900MB, so it is possible that the writes are being completely cached in Dom0. The DomU thinks the write is complete, but all that has happened is that the data has moved to the Dom0 cache.

The read numbers are also way off, as they are at least 30x the disk speed.

It is interesting, however, that the read and re-read numbers, which must be coming from a cache somewhere rather than from disk, show that partition, LVM and file are very comparable.

> > "Device"     Random read   Random write
> > dom0 Part.   2013.73 MB/s  26.73 MB/s
> > dom0 LVM     2011.68 MB/s  32.90 MB/s
> > dom0 File    2049.71 MB/s  192.97 MB/s

The dom0 file random write at 192.97 MB/s also looks wrong, for the same reasons as above.

> > domU Part.   2723.65 MB/s  25.65 MB/s
> > domU LVM     2686.48 MB/s  30.69 MB/s
> > domU File    2662.49 MB/s  51.13 MB/s
> >
> > According to these numbers, file-based filesystems are generally the
> > fastest of the three alternatives. I'm having a hard time understanding
> > how this can possibly be true, so I'll let the more knowledgeable
> > members of the mailing list enlighten us. My guess is that the extra
> > layers (LVM/loopback drivers/Xen) are caching stuff and ignoring IOZone
> > when it tries to write synchronously. Regardless, it seems like
> > file-based filesystems are the way to go. Too bad, I prefer LVM...
>
> Yes, you'll probably get file-caching in Dom0 when using a file-based
> setup, which doesn't happen with the other setups.

Absolutely. Hence the over-high readings for file transfers.

> The following would be interesting to also test:
> 1. Test with a noticeably larger test area (say 10GB or so).

You need to run with test files that are at least 5x the available cache memory before you can start to trust the results. Given that you have 2GB of memory on Dom0, 10GB would be the smallest test that makes sense.

I would be very interested to see the results from such a test.

As one final question, what is the scheduling configuration for Dom0 and DomU with these tests? Have you tried different configurations (period/slice) for the DomU tests to see if it makes any difference?

Best regards,

Roger
Petersson, Mats
2006-Aug-24 15:09 UTC
RE: [Xen-users] Differences in performance between file and LVM based images
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com
> [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Roger Lucas
> Sent: 24 August 2006 15:32
> To: Petersson, Mats; 'Alex Iribarren'; xen-users@lists.xensource.com
> Subject: RE: [Xen-users] Differences in performance between
> file and LVM based images
>
> Hi Alex,
>
> May I also express my thanks for these benchmarks, but some are unlikely
> to be truly representative of the relative disk performance.

<snip>

> > The following would be interesting to also test:
> > 1. Test with a noticeably larger test area (say 10GB or so).
>
> You need to run with test files that are at least 5x the available cache
> memory before you can start to trust the results. Given that you have
> 2GB of memory on Dom0, 10GB would be the smallest test that makes sense.

I didn't actually look at the numbers, I just picked a "much larger number" - lucky guess, I suppose... ;-)

> I would be very interested to see the results from such a test.
>
> As one final question, what is the scheduling configuration for Dom0 and
> DomU with these tests? Have you tried different configurations
> (period/slice) for the DomU tests to see if it makes any difference?

Ah, the scheduler for all intents and purposes SHOULD be the credit scheduler, as that will give the best possible balance between the domains, rather than any of the older schedulers that can't, for example, move a domain from one CPU to another.

-- Mats
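For reference, with the credit scheduler the per-domain knobs are a weight and a cap rather than period/slice. A quick sketch of inspecting and changing them, assuming a Xen version recent enough to ship the credit scheduler; the domain name matches the config posted earlier and the numbers are made up:

  # Show the current credit-scheduler parameters for the benchmark guest
  xm sched-credit -d gentoo

  # Give it twice the default weight (default is 256) and no CPU cap
  # (a cap of 0 means uncapped)
  xm sched-credit -d gentoo -w 512 -c 0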
Alex Iribarren
2006-Aug-24 15:32 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Petersson, Mats wrote:
> 1. Test with a noticeably larger test area (say 10GB or so).

Will do. I didn't do this before because of lack of space.

> 2. Test multiple domains simultaneously to see if the file-based approach
> is still the fastest.

I don't think I'll be able to do this, I don't have enough space. I also don't have time to make space; I'm off on holiday tomorrow. :) Maybe someone else can try this out?

> 3. Test the new (unstable) blktap model.

Same "no time" excuse, although I would like to try xen-unstable. It will have to wait until I get back and have no real work to do.

Cheers,
Alex
Petersson, Mats
2006-Aug-24 16:03 UTC
RE: [Xen-users] Differences in performance between file and LVM based images
[.. Snip ..]

> > 3. Test the new (unstable) blktap model.
>
> Blktap? Care to expand on this?

Blktap is a user-mode-only version of the "backend" for block devices, which is capable of using asynchronous IO primitives, and it's supposed to have better performance than the old-style block device driver pair, as well as improved reliability (which is why it was invented).

(unstable) doesn't mean that blktap is unstable, but rather that it's available in the unstable version of Xen, of course.

-- Mats
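In the xen-unstable tree of that era, a blktap-backed disk is selected simply by swapping the 'file:' prefix on the disk line for a 'tap:aio:' one; the config path and guest name below are made up, and the exact prefix depends on the Xen version in use.

  # Switch an existing guest from the loopback file backend to blktap by
  # changing the prefix on its disk line (keeping a backup of the config),
  # then restart it.
  sed -i.bak "s/'file:/'tap:aio:/" /etc/xen/guest.cfg
  xm shutdown -w guest && xm create /etc/xen/guest.cfg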
Andrew Warfield
2006-Aug-24 16:51 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi Alex,

The reason that you are getting very fast throughput from file backends is that the loopback driver buffers in the page cache and will acknowledge writes as complete to the domU before they actually hit the disk. This is obviously unsafe, given that the guest ends up believing that the disk is in a different state than it really is.

A second issue with the loopback driver is that it combines poorly with NFS, and can lead to a situation under heavy write loading in which most of dom0's memory becomes full of dirty pages and the Linux OOM killer goes berserk and starts killing off random processes.

I seem to remember someone saying that they were looking into the loopback safety issues, but I haven't heard anything directly with regard to this in a while -- the heavy interactions with the Linux virtual memory system make this a bit of a challenging one to sort out. ;)

You might want to take a look at the blktap driver code in the unstable tree. It's basically a userspace implementation of the block backend driver and so allows file access to be made from a location that the kernel expects it to come from -- above the VFS interface and associated with a running process. It probably won't be as fast as the loopback results that you are seeing, but it should be reasonably high-performance and safe. If you turn up any bugs, we're happy to sort them out for you -- the testing would be very welcome.

Thanks!
a.

On 8/24/06, Alex Iribarren <Alex.Iribarren@cern.ch> wrote:
> Hi all,
>
> Nobody seems to want to do these benchmarks, so I went ahead and did
> them myself. The results were pretty surprising, so keep reading. :)

<snip setup, procedure and results>

> According to these numbers, file-based filesystems are generally the
> fastest of the three alternatives. I'm having a hard time understanding
> how this can possibly be true, so I'll let the more knowledgeable
> members of the mailing list enlighten us. My guess is that the extra
> layers (LVM/loopback drivers/Xen) are caching stuff and ignoring IOZone
> when it tries to write synchronously. Regardless, it seems like
> file-based filesystems are the way to go. Too bad, I prefer LVM...
>
> Cheers,
> Alex
Christoph Purrucker
2006-Aug-24 20:01 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi Alex,

> Nobody seems to want to do these benchmarks, so I went ahead and did
> them myself. The results were pretty surprising, so keep reading. :)

Cool that you brought yourself to do the benchmark. But this test was a bit useless:

- Dom0 has 2GB of RAM and the DomU has 1GB of RAM, both not running any heavy application, and you test with 900MB of data. It's clear you are testing caching performance, not disk I/O, since all the memory is available for caching.

- You have overwhelming free CPU resources: 4 fast cores and nothing to do. So you can't tell whether LVM has lower or higher overhead than loopback files, since you did not post the sum of CPU cycles consumed by both kernels.

Conclusions:

- If you make a 10GB test, all three tests will show nearly the same performance, since there is so much free CPU time that it evens out any differences.

- Normally a Dom0 has nearly no free memory, since Dom0 normally does nothing but manage the DomUs; all free memory is for the DomUs to do their work well. So please make a test with Dom0 memory=64MB in a single-CPU environment running two DomUs (one for I/O benchmarking and another running 'cpuburn').

- Since loopback files are obviously being fully cached by Dom0, you can't use them in a production environment, as Andrew stated, even if they were faster. For example, a mail server running in a DomU has to be sure that a mail is on disk before returning an OK to the remote SMTP server. But in the case above the file is still in the Dom0 disk cache, which is bad if the system crashes. Same with databases etc.

Tell me if I'm wrong.

cu cp
Jonathan Dill
2006-Aug-24 21:36 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Christoph Purrucker wrote:
> - Since loopback files are obviously being fully cached by Dom0, you
> can't use them in a production environment, as Andrew stated, even if
> they were faster. For example, a mail server running in a DomU has to be
> sure that a mail is on disk before returning an OK to the remote SMTP
> server. But in the case above the file is still in the Dom0 disk cache,
> which is bad if the system crashes. Same with databases etc.

It might be interesting to use a multiple-disk configuration with most of the basic OS in a loopback file and any dynamic data on LVM. A simple split would be to put /var and /home on LVs, although some naughty apps might change stuff in the /usr or /usr/local tree. This approach might also make it convenient to copy the loopback image for multiple domUs but have different dynamic data in each.

There has been some previous discussion about using a "read-only" image and then having every domU do an overlay a la UnionFS; on a loopback file, I'd think that could really take advantage of the caching in dom0.

http://lists.xensource.com/archives/html/xen-users/2005-05/msg00463.html

However, the question is whether this approach really buys you enough extra performance to be worth the added complexity, especially when you think in terms of disaster recovery: which approach would give you the fewest headaches?

I found some other benchmarks that aren't exactly what you were talking about, but they seem to be a good example of the procedure to follow: 128 MB allocated to dom0, 128 MB allocated to each domU, tests run with 1, 4, 10, and 20 active domUs, each performing the same task. These tests compared using NetBSD as the dom0, Linux as the dom0 with domUs on loopback on an ext3 filesystem, and Linux as the dom0 with domUs on LVM. In this test, the domUs on NetBSD were slower, but on Linux there was not much difference between loopback and LVM.

http://users.piuha.net/martti/comp/xendom0/xendom0.html

Also, those tests were done on Xen 2.0.6, so it might be worth repeating with a newer version of Xen.

--
Jonathan Dill - The NERDS Group
Network Engineering & Resource Development Specialists, LLC
Cell: (240) 994-0012 Main: (301) 622-7995 Web: http://www.nerds.net
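A disk line for the split described above might look something like the following; the image path, volume group and device names are illustrative only, not taken from the thread.

  # Per-guest copy of the loopback root image, plus LVM volumes for the
  # data that actually changes (/var, /home). Fragment appended to a
  # hypothetical guest config:
  cat >> /etc/xen/domu1.cfg <<'EOF'
  disk = [ 'file:/srv/xen/domu1-root.img,sda1,w',
           'phy:/dev/vm-disks/domu1-var,sda2,w',
           'phy:/dev/vm-disks/domu1-home,sda3,w' ]
  EOF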
Alex Iribarren
2006-Aug-25 08:17 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi Christoph,

Christoph Purrucker wrote:
> Cool that you brought yourself to do the benchmark. But this test was a
> bit useless:

:) I knew I was going to get that. Don't be so quick to dismiss this benchmark as useless just because it doesn't match your use-case. The original email that prompted me to do this asked about the performance penalty of using files vs. LVM volumes, but it didn't give any more conditions. I chose 900MB files because that is close to how I would use Xen (where this is more than an experiment), and because it was the biggest size I could get immediately (I don't have full control over this machine, nor time to invest in this).

I'm aware of the fact that 900MB files will be cached, especially on a machine as powerful and idle as this one. However, I was trying to measure the performance I could "realistically" get, not the overhead of LVM vs. files vs. direct access.

Having said that, I invite you (everybody, not just you) to continue with these kinds of benchmarks. If you test the actual overhead of each system and manage to eliminate all the caching, I'd be interested in seeing your results.

Cheers,
Alex
Adrian Chadd
2006-Aug-25 08:22 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
On Fri, Aug 25, 2006, Alex Iribarren wrote:
> I'm aware of the fact that 900MB files will be cached, especially on a
> machine as powerful and idle as this one. However, I was trying to
> measure the performance I could "realistically" get, not the overhead
> of LVM vs. files vs. direct access.
>
> Having said that, I invite you (everybody, not just you) to continue
> with these kinds of benchmarks. If you test the actual overhead of each
> system and manage to eliminate all the caching, I'd be interested in
> seeing your results.

Hm! Does going via the loop device invoke O_DIRECT semantics? Does it end up caching the FS data in dom0 (and then double-caching it again in the domU)?

Adrian
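One crude way to probe this from dom0 is to compare buffered and O_DIRECT writes to a scratch file on the same filesystem as the backing images; the path and sizes below are illustrative, and dd's oflag=direct and conv=fdatasync need a reasonably recent GNU coreutils.

  # Buffered write to a scratch file: may complete at page-cache speed
  dd if=/dev/zero of=/var/images/scratch.img bs=1M count=512 conv=fdatasync

  # O_DIRECT write: bypasses the dom0 page cache, so the rate should be
  # close to what the physical disk can actually sustain
  dd if=/dev/zero of=/var/images/scratch.img bs=1M count=512 oflag=direct

  rm /var/images/scratch.img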
Christoph Purrucker
2006-Aug-26 20:43 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi Alex,

> Having said that, I invite you (everybody, not just you) to continue
> with these kinds of benchmarks. If you test the actual overhead of each
> system and manage to eliminate all the caching, I'd be interested in
> seeing your results.

:) I knew I was going to get that. *g*

Hardware: Athlon XP 1600+, 512 MB RAM, one single and old ATA disk
Software: Debian Etch, 2.6.16-2-xen-k7 on Dom0 and 2.6.16-2-k7 on DomU (both Debian standard kernels, out of the box)
Dom0 memory: 256MB total, 5MB free, 60MB cache
DomU memory: 64MB total, 25MB free + 25MB cache (it's a clean image)

==== Dom0 speed on physical harddisk ====
Dom0# iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f /var/tmp/iotest
                                                     random  random
      KB  reclen   write rewrite    read  reread      read   write
  921600     256   11854   18220   23382   25178     10965   11992

==== DomU speed on loopback file ====
DomU# iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f /var/tmp/iotest
                                                     random  random
      KB  reclen   write rewrite    read  reread      read   write
  921600     256   11478   14609   15998   18539      7280   14882

This test cost 52 CPU seconds on Dom0 and 42 CPU seconds on the DomU (according to xentop).

==== DomU speed on LVM volume ====
DomU# iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f /mnt/tmp/iotest
                                                     random  random
      KB  reclen   write rewrite    read  reread      read   write
  921600     256   10382   15908   18942   18639      7267    8815

This test cost 16 CPU seconds on Dom0 and 39 CPU seconds on the DomU (according to xentop).

Please note that the LVM PV is at the slow end of the harddisk, while the loopback file is somewhere in the first half of the disk. I ran the last two tests twice, with no striking differences.

What's the conclusion now?

- More tests needed!
- I think 16 seconds vs. 52 seconds on Dom0 is a remarkable difference.

cu cp
Christoph Purrucker
2006-Aug-26 21:50 UTC
Re: [Xen-users] Differences in performance between file and LVM based images
Hi Tom,

I'll CC this back to xen-users.

> Nice job. The only thing which I think is missing are the numbers that
> would allow one to make guesses at the percentage of CPU used... yes,
> 16 to 52 seconds of CPU time is significant, and _very significant_ if
> the test completed in 2 or 3 minutes... but far less significant if it
> took 20 or 30 minutes.

It depends. The test was on a slow harddisk, so it may take a while, but it shows the overhead of loopback files. If the disk were 4 times faster (no problem with a disk built in this millennium), the test would finish in a quarter of the time, but the CPU time needed to finish it would stay the same.

Nonetheless, I repeated the LVM test once more for you:

Start: Sat Aug 26 21:32:49 UTC 2006
Ended: Sat Aug 26 21:40:41 UTC 2006

cu cp
Ligesh
2006-Aug-26 22:45 UTC
[Xen-users] Re: Differences in performance between file and LVM based images
Hey, I didn't know that my question would provoke so much discussion. I had thought this would be a semi-settled issue.

See, the whole thing sort of becomes moot when you consider the fact that nowadays people use LVM for their normal partitioning anyway. So whether you use LVM or file-based storage, the LVM overhead will always be there. When you use file-based storage, the logic first has to go through ext3, then through LVM and then to the hard disk. So LVM simply wins by default. Anyway, some more benchmarks would be welcome. Of course, giving dom0 1GB of memory is silly, since dom0 is not supposed to be doing any "useful" work.

The aim is not even performance, but rather scalability. At least as far as I am concerned (and also given the purported aim of Xen, to improve server utilization from 15% to 85%) -- that is, data center virtualization -- the idea is to have as many domUs as possible on a single server, and this is important especially if Xen wants to be comparable to container technologies like Solaris Zones or OpenVZ. In fact, I would appreciate it if someone could do some benchmarks of Xen vs. OpenVZ or Solaris Zones, and see what exactly the performance penalties are and how they can be overcome, if at all.

The clear overhead that I can see is the memory needed for the domU kernel, but I don't think this is really too much, and it can be further reduced by compiling custom domU kernels. I would also like to know if there are any other areas where Xen would _significantly_ trail in performance when compared to OS-level virtualization.

Thanks

-- :: Lxhelp :: lxhelp.at.lxlabs.com :: http://lxlabs.com ::
Ligesh
2006-Aug-27 23:01 UTC
[Xen-users] Re: Differences in performance between file and LVM based images
There is something I forgot to add about caching. The domU kernel will cache all the relevant data in the most efficient manner (or at least we trust the kernel hackers to do so); dom0 actually has no idea of the relevance of each piece of data and will mostly be caching entire blocks in memory, and I think this too adds unnecessary overhead. It is always better to trust the domU kernel to be efficient in its RAM + swap usage than to depend on dom0 to cache entire blocks in memory.

Anyway, are there any benchmarks on how many average-load domUs can be fitted onto a single server, say a quad Opteron with 12GB of RAM, each with, say, an Apache serving some particular number of hits? How does it stack up against VServer? Can we have 400 domUs in such a configuration? Some numbers about scalability would be great.

Thanks.
Arjun
2006-Sep-12 21:33 UTC
Re: [Xen-users] Differences in performance between file and LVM based images.
We had done some webserver tests using file-based images instead of LVM. This was on Xen 3.0.0. As I remember, we didn't see any significant difference between the performance of Xen with a file-based image and Linux with a physical (normal) disk.

regds
Arjun

On 8/23/06, Ligesh <myself@ligesh.com> wrote:
>
> Hello folks,
>
> How much is the performance penalty if you use a file instead of an LVM
> volume? Have there been any benchmarks on this?
>
> Thanks.
>
> -- :: Ligesh :: http://ligesh.com
Ligesh
2006-Sep-13 04:45 UTC
[Xen-users] Re: Differences in performance between file and LVM based images.
Some numbers please. How many domUs? 200? 300? 1? 2? At small numbers of domUs, the data is pretty much irrelevant. The whole issue of performance only crops up if you are running more than 50 domUs.

On Tue, Sep 12, 2006 at 05:33:31PM -0400, Arjun wrote:
> We had done some webserver tests using file-based images instead of LVM.
> This was on Xen 3.0.0. As I remember, we didn't see any significant
> difference between the performance of Xen with a file-based image and
> Linux with a physical (normal) disk.
Tim Post
2006-Sep-13 04:57 UTC
Re: [Xen-users] Re: Differences in performance between file and LVM based images.
The type of disk being used is also a major influence (SATA / IDE / SCSI), as is how well the kernel exporting the BDs / VBDs interacts with the drive controller.

I'm also curious to see a full spread of data and the output of lspci with respect to the controller being used on dom0. :)

Thanks
-Tim

On Wed, 2006-09-13 at 10:15 +0530, Ligesh wrote:
> Some numbers please. How many domUs? 200? 300? 1? 2? At small numbers of
> domUs, the data is pretty much irrelevant. The whole issue of performance
> only crops up if you are running more than 50 domUs.
>
> On Tue, Sep 12, 2006 at 05:33:31PM -0400, Arjun wrote:
> > We had done some webserver tests using file-based images instead of LVM.
> > This was on Xen 3.0.0. As I remember, we didn't see any significant
> > difference between the performance of Xen with a file-based image and
> > Linux with a physical (normal) disk.