Hi all,

I just found out that Xen assigns the NOOP disk scheduler for Linux guest OSes. Dom0 uses the CFQ scheduler (the Linux default).
Is there a reason for Xen to turn off disk request merging in the guest OS by selecting the NOOP scheduler?
Is it because the request optimization will be performed in dom0 or the VMM?

Thanks in advance,
Jia.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
Any ideas?

On Wed, Feb 4, 2009 at 5:24 PM, Jia Rao <rickenrao@gmail.com> wrote:
> Hi all,
>
> I just found out that Xen assigns the NOOP disk scheduler for Linux guest
> OSes. Dom0 uses the CFQ scheduler (the Linux default).
> Is there a reason for Xen to turn off disk request merging in the guest OS
> by selecting the NOOP scheduler?
> Is it because the request optimization will be performed in dom0 or the VMM?
>
> Thanks in advance,
> Jia.
Javier Guerra
2009-Feb-05 14:52 UTC
Re: [Xen-users] Default disk I/O scheduler in linux guest
On Wed, Feb 4, 2009 at 5:24 PM, Jia Rao <rickenrao@gmail.com> wrote:
> Hi all,
>
> I just found out that Xen assigns the NOOP disk scheduler for Linux guest
> OSes. Dom0 uses the CFQ scheduler (the Linux default).
> Is there a reason for Xen to turn off disk request merging in the guest OS
> by selecting the NOOP scheduler?
> Is it because the request optimization will be performed in dom0 or the VMM?

It's an appropriate default.

Everything that virtualizes the I/O benefits from using the NOOP scheduler. The point is that any (re)ordering done by the guest would be useless when the underlying layers (Dom0 in this case, a SAN block device in others) mangle the I/O requests from several guests. Not only do you save CPU cycles by not trying to be clever in the DomU, but by pushing the requests to the lower layer as early as possible, the best optimisations can be done at that layer.

--
Javier
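For readers who want to inspect or change the scheduler inside a DomU, a minimal sketch follows. The device name `xvda` is an assumption (it varies by guest); the active scheduler is the bracketed entry in the sysfs file, and the helper below just extracts it from that format.

```shell
#!/bin/sh
# The sysfs scheduler file lists all schedulers, with the active one
# in brackets, e.g.:  noop anticipatory deadline [cfq]
# This helper extracts the bracketed (active) entry.
active_scheduler() {
    sed 's/.*\[\(.*\)\].*/\1/' "$1"
}

# Inside a DomU (device name xvda is an assumption; run as root):
#   cat /sys/block/xvda/queue/scheduler          # show schedulers
#   echo noop > /sys/block/xvda/queue/scheduler  # switch at runtime
# To make the choice persistent, boot the guest kernel with elevator=noop.

# Demonstration against a sample sysfs line:
printf 'noop anticipatory deadline [cfq]\n' > /tmp/sched_demo
active_scheduler /tmp/sched_demo
```

Running the demonstration prints the active scheduler name from the sample line.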
Sadique Puthen
2009-Apr-28 10:50 UTC
Re: [Xen-users] Default disk I/O scheduler in linux guest
Javier Guerra wrote:
> On Wed, Feb 4, 2009 at 5:24 PM, Jia Rao <rickenrao@gmail.com> wrote:
>> Hi all,
>>
>> I just found out that Xen assigns the NOOP disk scheduler for Linux guest
>> OSes. Dom0 uses the CFQ scheduler (the Linux default).
>> Is there a reason for Xen to turn off disk request merging in the guest OS
>> by selecting the NOOP scheduler?
>> Is it because the request optimization will be performed in dom0 or the VMM?
>
> It's an appropriate default.
>
> Everything that virtualizes the I/O benefits from using the NOOP scheduler.
> The point is that any (re)ordering done by the guest would be useless
> when the underlying layers (Dom0 in this case, a SAN block device in
> others) mangle the I/O requests from several guests. Not only do you save
> CPU cycles by not trying to be clever in the DomU, but by pushing
> the requests to the lower layer as early as possible, the best
> optimisations can be done at that layer.

If the underlying hardware is RAID5 done at the hardware level, using cfq instead of noop in the guest gives 2x performance when testing with a simple dd command.
Noop:

# sync ; date ; time dd if=/dev/zero of=/tmp/test bs=1M count=500 ; date ; time sync ; date
Tue Apr 28 00:03:18 IST 2009
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 10.2711 seconds, 51.0 MB/s

real    0m10.395s
user    0m0.004s
sys     0m1.288s
Tue Apr 28 00:03:29 IST 2009

real    0m43.910s
user    0m0.000s
sys     0m0.000s
Tue Apr 28 00:04:13 IST 2009

CFQ in the guest:

# sync ; date ; time dd if=/dev/zero of=/tmp/test bs=1M count=500 ; date ; time sync ; date
Tue Apr 28 00:02:09 IST 2009
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 6.44671 seconds, 81.3 MB/s

real    0m6.451s
user    0m0.000s
sys     0m1.204s
Tue Apr 28 00:02:15 IST 2009

real    0m28.451s
user    0m0.000s
sys     0m0.000s
Tue Apr 28 00:02:44 IST 2009

Is noop still the preferred choice to be the default?
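The timing recipe above (sync, timed dd, timed trailing sync) can be wrapped in a small script for repeated runs. This is a sketch only: it uses a small 8 MB write and a `/tmp/sched_test` path for illustration, where the original tests used `bs=1M count=500`, and it times with `date +%s` instead of the `time` builtin for portability.

```shell
#!/bin/sh
# Reproduce the thread's timing recipe: sync, timed dd write, timed sync.
# Small size for illustration; the original tests used bs=1M count=500.
SIZE_MB=8
OUT=/tmp/sched_test

sync                                   # flush anything already pending
start=$(date +%s)
dd if=/dev/zero of="$OUT" bs=1M count="$SIZE_MB" 2>/dev/null
mid=$(date +%s)
sync                                   # the trailing sync captures write-back time
end=$(date +%s)

echo "dd took $((mid - start))s, sync took $((end - mid))s"
```

Run it once with each scheduler selected in the guest to compare; the trailing sync matters because the dd itself mostly hits the page cache.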
Javier Guerra
2009-Apr-28 16:25 UTC
Re: [Xen-users] Default disk I/O scheduler in linux guest
On Tue, Apr 28, 2009 at 5:50 AM, Sadique Puthen <sputhenp@redhat.com> wrote:
> If the underlying hardware is RAID5 done at the hardware level, using
> cfq instead of noop in the guest gives 2x performance while testing with
> a simple dd command.

I've repeated some quick tests (I have only an md RAID1, with LVM). bonnie didn't report any difference, but dd writes are 35% faster with 'noop' on the guest than with CFQ.

What's interesting is that Dom0's CPU usage is almost half with CFQ compared to NOOP, so my wild guess is that in your case you're bound by Dom0 more than by the DomU.

--
Javier