I am using XenServer 6, and I need to get performance equal or close to what we get on the bare hardware, especially with large files.

Are there any hints, host side or guest side, to increase performance? What should I look at?
That depends - what's your current bottleneck (cpu/disk/network)?

On Wed, Dec 14, 2011 at 2:26 PM, Andrew Wells <agwells0714@gmail.com> wrote:
> I am using XenServer 6, and I need to get performance equal or close to
> what we get on the bare hardware, especially with large files. Are there
> any hints, host side or guest side, to increase performance?
What does the config for the domU look like - are you giving it a virtual disk image stored on a local disk, using a SAN, giving it a local disk directly, etc.?

On Wed, Dec 14, 2011 at 2:40 PM, Andrew Wells <agwells0714@gmail.com> wrote:
> Currently I am just checking and testing write performance to the disk
> locally on the guest.
Use "reply all" to make sure you hit the mailing list. The main penalty you''d hit is that if you''re sharing that RAID device with any other guests (including the dom0), that can demolish your I/O speeds as it can terribly screw up what would be sequential reads. What are your benchmarks currently showing for native vs VM performance? On Wed, Dec 14, 2011 at 2:49 PM, Andrew Wells <agwells0714@gmail.com> wrote:> it is using virtual disk (i believe a lvm partition) on local storage; > local storage is over 4 or 5 raid5 devices. > > The ram on the domu is 45GB > > The disk I am testing is 1TB > > On Wed, Dec 14, 2011 at 2:46 PM, John Sherwood <jrs@vt.edu> wrote: > >> what does the config for the domU look like - are you giving it a >> virtual disk image stored on a local disk, using a SAN, giving it a local >> disk directly, etc. >> >> >> On Wed, Dec 14, 2011 at 2:40 PM, Andrew Wells <agwells0714@gmail.com>wrote: >> >>> currently I am just checking and testing write performance to the disk >>> locally on the guest >>> >>> >>> On Wed, Dec 14, 2011 at 2:33 PM, John Sherwood <jrs@vt.edu> wrote: >>> >>>> that depends - what''s your current bottleneck (cpu/disk/network)? >>>> >>>> On Wed, Dec 14, 2011 at 2:26 PM, Andrew Wells <agwells0714@gmail.com>wrote: >>>> >>>>> I am using XenServer6, and I need to get performance equal or close to >>>>> what we get on the hardware, especially with large files. >>>>> >>>>> Are there any hints host side or guest side to increase performance. >>>>> What should I look at? >>>>> >>>>> _______________________________________________ >>>>> Xen-users mailing list >>>>> Xen-users@lists.xensource.com >>>>> http://lists.xensource.com/xen-users >>>>> >>>>> >>>> >>> >> >_______________________________________________ Xen-users mailing list Xen-users@lists.xensource.com http://lists.xensource.com/xen-users
Hardware:
dd if=/dev/zero of=/data/gp/test.file bs=4096 count=1000000
4096000000 bytes (4.1 GB) copied, 4.69611 seconds, 872 MB/s

overwrite the same file:
4096000000 bytes (4.1 GB) copied, 14.5652 seconds, 281 MB/s

Virtual:
dd if=/dev/zero of=/data/gp/test.file bs=4096 count=1000000
4096000000 bytes (4.1 GB) copied, 5.6555 seconds, 724 MB/s

overwrite the same file:
4096000000 bytes (4.1 GB) copied, 6.49021 seconds, 631 MB/s

-----------------

Hardware:
dd if=/dev/zero of=/data/vol2/test.file bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 46.1366 seconds, 888 MB/s

Virtual:
dd if=/dev/zero of=/data/vol2/test.file bs=4096 count=10000000
23997329408 bytes (24 GB) copied, 118.243 seconds, 203 MB/s (canceled early to get the speed)

I need to get the writes to be more consistent.

On Wed, Dec 14, 2011 at 2:58 PM, John Sherwood <jrs@vt.edu> wrote:
> What are your benchmarks currently showing for native vs VM performance?
Are any other guests writing to the same RAID set?

On Wed, Dec 14, 2011 at 3:14 PM, Andrew Wells <agwells0714@gmail.com> wrote:
> I need to get the writes to be more consistent.
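One way to see whether other guests (or the dom0 itself) are hitting the same spindles is to watch the host while the benchmark runs in the guest. A minimal sketch, assuming sysstat is installed in the dom0:

  # per-device utilisation and queue depth on the host, refreshed every 2s
  iostat -x 2
  # per-domain CPU and VBD activity, one batch snapshot
  xentop -b -i 1

If other domains show noticeable VBD read/write activity during the test, the RAID set is shared and the benchmark's sequential writes will be broken up.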
Hi,

2011/12/14 Andrew Wells <agwells0714@gmail.com>:
> Hardware:
> dd if=/dev/zero of=/data/gp/test.file bs=4096 count=1000000

It would be very helpful to start by testing sync writes using a better blocksize. Otherwise you're just testing how fast you can fill your dom0's buffer cache. Xen will not use such unsafe writes unless you're using a file:// device.

Also, even if our Linux apps run using 4K pages, the I/O speed in dd will be quite bad using that blocksize. This would be most interesting if you expect a lot of paging from the domUs. Not saying that this isn't something worth testing, but rather: first find out the full sequential speed, and then use something different from dd to test 4K random I/O. Sequential + 4K is really not going to happen a lot.

So use
1) conv=fdatasync at the end of the line
2) bs=1M count=1024

Yes, the 1024 "MB" will not be enough to fill the array's cache. But you're looking for host I/O bottlenecks, so it makes sense to stress the host only, not the array.

--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.
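Putting those two suggestions together, the write test from the earlier benchmark would look something like this (path taken from the previous mails):

  # sequential write test that forces data to disk before reporting speed
  dd if=/dev/zero of=/data/gp/test.file bs=1M count=1024 conv=fdatasync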
Dear Florian Heigl,

I attempted that, and these are the results:

Virtual System: 39.4 MB/s
Hardware System: 321 MB/s

I think my bottleneck is on the host side... any suggestions?

Andrew

On Wed, Dec 14, 2011 at 4:10 PM, Florian Heigl <florian.heigl@gmail.com> wrote:
> So use
> 1) conv=fdatasync at the end of the line
> 2) bs=1M count=1024
Hi,

2011/12/15 Andrew Wells <agwells0714@gmail.com>:
> Dear Florian Heigl,

Just to make this clear: you're the one that has to do the searching ;)

> I think my bottleneck is on the host side... any suggestions?

Look for any qemu processes consuming 100% CPU in top output on the host while you run this benchmark. If you see any, then you're using emulated (very slow) disk drivers in the VM. I'd be very surprised if it's something other than that.

Florian
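A quick way to do that check from the host while the guest benchmark is running - a sketch assuming the usual dom0 tools are present:

  # a qemu-dm process pegged near 100% points at emulated disk I/O
  top -b -n 1 | grep -i qemu
  # per-domain view of CPU use, one batch snapshot
  xentop -b -i 1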
Florian Heigl <florian.heigl@gmail.com> schrieb:
>> I think my bottleneck is on the host side... any suggestions?
>
> Look for any qemu processes consuming 100% CPU in top output on the host

Just to make sure here: qemu? Does this mean you did not use paravirtualized Linux guests but full virtualization instead? Fully virtualized guests are in principle slower than PV guests.

cheers,
Niels.

--
Niels Dettenbach
Syndicat IT&Internet
http://www.syndicat.com
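On XenServer the PV-vs-HVM distinction should be visible from the xe CLI; a sketch, assuming the HVM-boot-policy parameter behaves as on other XenServer releases (empty for PV guests, "BIOS order" for HVM):

  # find the VM's UUID
  xe vm-list params=uuid,name-label
  # empty output suggests a PV guest, "BIOS order" an HVM guest
  xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy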
Andrew Wells <agwells0714@gmail.com> schrieb:
> Is there a way I can get PXE boot in PV? That is the only reason I am
> using HVM in XenServer 6.

Hmm, not sure - maybe with pygrub (I have never worked with pygrub before). But I assume you will find a way around your issue, because you have a much more "intelligent" dom0 as a base to boot your system ;)

Maybe (I assume this) others can help more with this issue here...

cheers,
Niels.
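For reference, on plain Xen a PV guest that boots its own kernel via pygrub is usually configured along these lines - a minimal sketch with made-up names, not a XenServer configuration:

  # /etc/xen/pv-guest.cfg
  name       = "pv-guest"
  memory     = 4096
  vcpus      = 4
  bootloader = "pygrub"
  disk       = [ "phy:/dev/vg0/pv-guest-disk,xvda,w" ]
  vif        = [ "bridge=xenbr0" ]

pygrub reads the grub configuration from inside the guest's disk, so it replaces the HVM BIOS boot path rather than PXE itself.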
Oops, need to reply all.

PV is a lot better; thanks!

So using PV, is there anything I can do to tune I/O to increase performance?

On Thu, Dec 15, 2011 at 11:02 AM, Niels Dettenbach (Syndicat IT&Internet) <nd@syndicat.com> wrote:
> Hmm, not sure - maybe with pygrub (I have never worked with pygrub before).
So with the PV guest:

My writes are alright - they could be better - but my reads are really slow compared to the hardware system. Any hints for tuning the PV disk I/O?

On Thu, Dec 15, 2011 at 1:12 PM, Andrew Wells <agwells0714@gmail.com> wrote:
> So using PV, is there anything I can do to tune I/O to increase performance?
On Fri, Dec 16, 2011 at 4:27 AM, Andrew Wells <agwells0714@gmail.com> wrote:
> So with the PV guest:
>
> My writes are alright - they could be better - but my reads are really
> slow compared to the hardware system. Any hints for tuning the PV disk I/O?

How did you measure "really slow"? Are you sure it's not the effect of cache?

I suggest you try again using the block device directly (without a filesystem). For example, this is on my system (RHEL6 + xen-4.1.1 + kernel 3.1.0 + zfs), testing with "dd_rescue -d -b128k -B128k -m128M /path/of/block/device/tested /dev/null", running against the same block device in three scenarios:
- on dom0 directly: about 22 MBps (yes, this is pretty slow; I only have two disks, mirrored)
- on dom0, but with the block device mapped to dom0 with "xm block-attach" (not sure what the equivalent command is in XenServer): about 18 MBps
- on a PV domU: about 18 MBps

The "-d" flag to dd_rescue opens the block device using O_DIRECT, to avoid the cache skewing the result.

--
Fajar
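If dd_rescue is not installed in the guest, a roughly equivalent cache-free read test can be done with plain GNU dd - the device path here is only an example:

  # read 128 MB straight from the block device, bypassing the page cache
  dd if=/dev/xvdb of=/dev/null bs=128k count=1024 iflag=direct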
Andrew Wells <agwells0714@gmail.com> schrieb:
> So using PV, is there anything I can do to tune I/O to increase
> performance?

Can you please post your configuration file? This may help us further...

many thanks,
Niels.
I am using XenServer 6, so I am not sure what you mean by configuration file, or what you would want. Can you be more specific?

On Fri, Dec 16, 2011 at 1:38 AM, Niels Dettenbach (Syndicat IT&Internet) <nd@syndicat.com> wrote:
> Can you please post your configuration file? This may help us further...
Am Montag, 19. Dezember 2011, 09:15:21 schrieb Andrew Wells:
> I am using XenServer 6, so I am not sure what you mean by configuration
> file, or what you would want. Can you be more specific?

Sorry, I do not use XenServer 6 or the XenServer distribution - just "plain" Xen 4 - so I do not know if, and with which command, you can show or dump a running domain's configuration there, as is possible with the Xen xm/xl commands.

Maybe others can give a hint here?

cheers,
Niels.
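On plain Xen the running configuration of a domU can be dumped from the dom0 as shown below; the XenServer command listed last is an assumption, since the xe CLI exposes things differently:

  # plain Xen, xm toolstack
  xm list -l <domain-name>
  # plain Xen, xl toolstack
  xl list -l <domain-name>
  # XenServer: dump all parameters of a VM via the xe CLI
  xe vm-param-list uuid=<vm-uuid>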
If you can provide me with what you did for Xen 4, I might be able to work with that information.

Andrew

On Mon, Dec 19, 2011 at 10:21 AM, Niels Dettenbach <nd@syndicat.com> wrote:
> Sorry, I do not use XenServer 6 or the XenServer distribution - just
> "plain" Xen 4 - so I do not know if, and with which command, you can show
> or dump a running domain's configuration there, as is possible with the
> Xen xm/xl commands.
So I was looking over the boot log of a PV system, and I saw this:

ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx

Is it smart to change this value to gain I/O speed?

Andrew

PS: Sorry for the double mail, Niels, I swear I pushed reply all.

On Mon, Dec 19, 2011 at 9:15 AM, Andrew Wells <agwells0714@gmail.com> wrote:
> I am using XenServer 6, so I am not sure what you mean by configuration
> file, or what you would want. Can you be more specific?
2011/12/28 Andrew Wells <agwells0714@gmail.com>:
> So I was looking over the boot log of a PV system, and I saw this:
>
> ide: Assuming 50MHz system bus speed for PIO modes; override with idebus=xx
>
> Is it smart to change this value to gain I/O speed?

That should only affect the old "IDE" device driver. The disks in a PV domU should be handled by the PV driver "xenblk"; the IDE driver will just sit idle and do nothing (unless you use PCI delegation and add a real IDE controller + disk to the VM :)

Flo
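To confirm inside the guest that the PV block path is actually in use - module and device names below are the usual ones, but they vary a bit by kernel:

  # PV block devices appear as xvd*, not hd*/sd*
  ls /sys/block/ | grep xvd
  # the frontend driver is xen-blkfront (historically "xenblk")
  lsmod | grep -i -e xen_blkfront -e xenblk
  dmesg | grep -i blkfront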