Looking in the dmesg for a domU, I saw the following:

io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered

Reading the docs, I found these are the Linux I/O schedulers.

cat /sys/block/sda1/queue/scheduler
[noop] anticipatory deadline cfq

Now switching the scheduler to cfq:

echo cfq > /sys/block/sda1/queue/scheduler
cat /sys/block/sda1/queue/scheduler
noop anticipatory deadline [cfq]

Now there is a log entry in /var/log/messages about cfq; it says

Jan 9 05:39:28 host6 kernel: cfq: depth 4 reached, tagging now on

The same message is also displayed on the console of the domU.

The scheduler docs say that a scheduler can be specified on the kernel
command line using the elevator= option, however if I try to do that in
the domU, it doesn't work.

My dom0 has anticipatory as the scheduler, which is the default I/O
scheduler.

Has anyone any ideas, or worked with the I/O scheduler?

--
regards,
Anand
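For anyone who wants to flip every device at once rather than just sda1,
here is a minimal sketch, assuming sysfs is mounted at /sys; some virtual
devices (ramdisks, loop devices) may reject the write, which is harmless:

    #!/bin/sh
    # Switch every block device over to the cfq elevator and show the result.
    for q in /sys/block/*/queue/scheduler; do
        echo cfq > "$q" 2>/dev/null
        echo "$q: $(cat "$q")"   # the active scheduler is shown in [brackets]
    done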
Anand wrote:
> Looking in the dmesg for a domU, I saw the following:
>
> io scheduler noop registered
> io scheduler anticipatory registered
> io scheduler deadline registered
> io scheduler cfq registered
>
> Reading the docs, I found these are the Linux I/O schedulers.
>
> cat /sys/block/sda1/queue/scheduler
> [noop] anticipatory deadline cfq
>
> Now switching the scheduler to cfq:
>
> echo cfq > /sys/block/sda1/queue/scheduler
> cat /sys/block/sda1/queue/scheduler
> noop anticipatory deadline [cfq]
>
> Now there is a log entry in /var/log/messages about cfq; it says
>
> Jan 9 05:39:28 host6 kernel: cfq: depth 4 reached, tagging now on
>
> The same message is also displayed on the console of the domU.
>
> The scheduler docs say that a scheduler can be specified on the kernel
> command line using the elevator= option, however if I try to do that in
> the domU, it doesn't work.
>
> My dom0 has anticipatory as the scheduler, which is the default I/O
> scheduler.
>
> Has anyone any ideas, or worked with the I/O scheduler?

On non-Xen, yes. There are some benchmarks somewhere (I don't seem to find
them right now) about the advantages of the various ones.
/usr/src/linux/Documentation/kernel-parameters.txt says a few words.

As you've seen for yourself, the scheduler can be changed at runtime, and
likewise as a boot option.
My understanding is that schedulers work fairly directly on the hardware,
at least reorganizing disk queues, so I -think- you cannot have different
schedulers for dom0 and domUs.
Why would you? It's the same physical disk...
I believe cfq is usually considered the most versatile, unless specific
needs are determined.

--
Kind regards,
Mogens Valentin
Cool. I didn't know some of this stuff existed... e.g.
Documentation/block/ioprio.txt

Can anyone tell what (if any) knowledge Xen has of this? Can it pass I/O
priorities from a domU to the dom0, or can we assign I/O priorities to a
whole domU?

On Mon, 9 Jan 2006, Mogens Valentin wrote:

<snip>
> On non-Xen, yes. There are some benchmarks somewhere (I don't seem to find
> them right now) about the advantages of the various ones.
> /usr/src/linux/Documentation/kernel-parameters.txt says a few words.
>
> As you've seen for yourself, the scheduler can be changed at runtime, and
> likewise as a boot option.
> My understanding is that schedulers work fairly directly on the hardware,
> at least reorganizing disk queues, so I -think- you cannot have different
> schedulers for dom0 and domUs.
> Why would you? It's the same physical disk...
> I believe cfq is usually considered the most versatile, unless specific
> needs are determined.

Looks that way... from my 5 minutes of reading...

-Tom

----------------------------------------------------------------------
tbrown@BareMetal.com  | What I like about deadlines is the lovely
BareMetal.com         | whooshing they make as they rush past.
web hosting since '95 |                         - Douglas Adams
> Cool. I didn't know some of this stuff existed... e.g.
> Documentation/block/ioprio.txt

Yeah, it's pretty funky stuff :-)

> Can anyone tell what (if any) knowledge Xen has of this? Can it pass I/O
> priorities from a domU to the dom0, or can we assign I/O priorities to a
> whole domU?

IIRC, there's some work going on to allow the second of those points; I
think there's a patch circulating.

Re. passing priorities through... you get to do that implicitly, by the
order in which domU requests come through. But this obviously doesn't give
a priority relative to other domUs... You could implement something like
that, but you'd have to trust all your domUs not to starve each other and
dom0 itself, so it'd probably not be so widely applicable.

Definitely handy that this stuff is in the kernel though - IIRC, the
pluggable scheduler stuff went in last year, so it's quite recent.

Cheers,
Mark

> On Mon, 9 Jan 2006, Mogens Valentin wrote:
>
> <snip>
>
> > On non-Xen, yes. There are some benchmarks somewhere (I don't seem to find
> > them right now) about the advantages of the various ones.
> > /usr/src/linux/Documentation/kernel-parameters.txt says a few words.
> >
> > As you've seen for yourself, the scheduler can be changed at runtime, and
> > likewise as a boot option.
> > My understanding is that schedulers work fairly directly on the hardware,
> > at least reorganizing disk queues, so I -think- you cannot have different
> > schedulers for dom0 and domUs.
> > Why would you? It's the same physical disk...
> > I believe cfq is usually considered the most versatile, unless specific
> > needs are determined.
>
> Looks that way... from my 5 minutes of reading...
>
> -Tom
> As you've seen for yourself, the scheduler can be changed at runtime, and
> likewise as a boot option.

Yes, the option is elevator=scheduler_name. However, I tried that out with
the domU and it doesn't work. The default scheduler there is noop.

> My understanding is that schedulers work fairly directly on the hardware,
> at least reorganizing disk queues, so I -think- you cannot have different
> schedulers for dom0 and domUs.
> Why would you? It's the same physical disk...
> I believe cfq is usually considered the most versatile, unless specific
> needs are determined.

Well, that's what I am confused about. On my machine, dom0 is running
anticipatory as the default and the domU is running noop. Any ideas?

--
regards,
Anand
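For the record, the usual way to get extra arguments onto a domU kernel
command line is the extra= setting in its xm config file. Whether the domU
kernel then honours elevator= is exactly the open question here; the file
name, paths and device names below are purely illustrative:

    # /etc/xen/domU-example  -- illustrative names and paths
    kernel = "/boot/vmlinuz-2.6-xenU"
    memory = 256
    name   = "domU-example"
    disk   = ['phy:vg0/domU-root,sda1,w']
    root   = "/dev/sda1 ro"
    # appended to the domU kernel command line:
    extra  = "elevator=cfq"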
I don't see an ioprio.txt inside linux-2.6.12-xen0/Documentation/block. The
only docs available there are

as-iosched.txt  biodoc.txt  deadline-iosched.txt  request.txt

On 1/10/06, Mark Williamson <mark.williamson@cl.cam.ac.uk> wrote:
> > Cool. I didn't know some of this stuff existed... e.g.
> > Documentation/block/ioprio.txt
>
> Yeah, it's pretty funky stuff :-)
>
> > Can anyone tell what (if any) knowledge Xen has of this? Can it pass I/O
> > priorities from a domU to the dom0, or can we assign I/O priorities to a
> > whole domU?
>
> IIRC, there's some work going on to allow the second of those points; I
> think there's a patch circulating.
>
> Re. passing priorities through... you get to do that implicitly, by the
> order in which domU requests come through. But this obviously doesn't give
> a priority relative to other domUs... You could implement something like
> that, but you'd have to trust all your domUs not to starve each other and
> dom0 itself, so it'd probably not be so widely applicable.
>
> Definitely handy that this stuff is in the kernel though - IIRC, the
> pluggable scheduler stuff went in last year, so it's quite recent.
>
> Cheers,
> Mark

--
regards,
Anand
On 1/10/06, Mogens Valentin <mogensv@vip.cybercity.dk> wrote:
> On non-Xen, yes. There are some benchmarks somewhere (I don't seem to find
> them right now) about the advantages of the various ones.
> /usr/src/linux/Documentation/kernel-parameters.txt says a few words.

From the above kernel-parameters.txt:

elevator=	[IOSCHED]
		Format: {"as"|"cfq"|"deadline"|"noop"}
		See Documentation/block/as-iosched.txt and
		Documentation/block/deadline-iosched.txt for details.

I tried to pass elevator=cfq on the dom0 and it worked. On the domU,
however, I am passing the same parameter and the scheduler is still noop.

-bash-3.00# cat /sys/block/sda1/queue/scheduler
[noop] anticipatory deadline cfq

Any ideas why it is using two different schedulers?

--
regards,
Anand
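For comparison, on dom0 the option goes on the Linux module line of the
Xen GRUB entry, not on the xen.gz line, since elevator= is parsed by Linux
rather than by Xen. A sketch of a menu.lst stanza with illustrative kernel
names, paths and memory size:

    # /boot/grub/menu.lst -- illustrative stanza
    title Xen / dom0 with cfq
        root   (hd0,0)
        kernel /boot/xen.gz dom0_mem=262144
        # dom0's Linux command line; elevator= belongs here:
        module /boot/vmlinuz-2.6-xen0 root=/dev/sda2 ro console=tty0 elevator=cfq
        module /boot/initrd-2.6-xen0.img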
On Tue, 10 Jan 2006, Anand wrote:

> I don't see an ioprio.txt inside linux-2.6.12-xen0/Documentation/block. The
> only docs available there are
>
> as-iosched.txt  biodoc.txt  deadline-iosched.txt  request.txt

Yes, apparently I was looking at a 2.6.14 kernel tree...
As per changeset 8291: 6f62ad959f6b
<xenbits.xensource.com/xen-unstable.hg?cmd=changeset;node=6f62ad959f6b6d7fd98b2adfb3f07a9a15613e07>
in xen-unstable, Xen now has CFQ scheduling. Has anyone tested it out?

--
regards,
Anand
On 1/16/06, Anand <xen.mails@gmail.com> wrote:
> As per changeset 8291: 6f62ad959f6b
> <xenbits.xensource.com/xen-unstable.hg?cmd=changeset;node=6f62ad959f6b6d7fd98b2adfb3f07a9a15613e07>
> in xen-unstable, Xen now has CFQ scheduling. Has anyone tested it out?

*bump*

--
regards,
Anand
Here is the information related to cfq:

xenbits.xensource.com/xen-unstable.hg?cmd=changeset;node=6f62ad959f6b6d7fd98b2adfb3f07a9a15613e07

Does anyone have an idea when this will make it into -testing?

--
regards,
Anand
Sorry for bothering the list so much on this matter.

Just had a thought: since those changes are in the kernel files, can't we
just compile the kernel off -unstable and then use that for the domU? That
way the disk I/O features would be available in any version. Can any
developer, or someone else, confirm whether this is OK? And how do we
verify that the I/O scheduler is working: from the host, or inside the
domU?

--
regards,
Anand
Anand wrote:
> Just had a thought: since those changes are in the kernel files, can't we
> just compile the kernel off -unstable and then use that for the domU? That
> way the disk I/O features would be available in any version.

No, it's in the backend driver, thus you have to replace the dom0 kernel.
There are no protocol changes, so you can just drop in the new kernel and
everything should work.

> Can any developer, or someone else, confirm whether this is OK? And how do
> we verify that the I/O scheduler is working: from the host, or inside the
> domU?

With the new dom0 kernel you'll have one kernel thread per virtual block
device. This helps the I/O scheduler (which uses the PID to decide how to
group/queue requests) to do a better job. It also allows you to tweak the
priorities using ionice (comes with recent util-linux versions).

cheers,

Gerd

--
Gerd 'just married' Hoffmann <kraxel@suse.de>
I'm the hacker formerly known as Gerd Knorr.
suse.de/~kraxel/just-married.jpeg
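A rough sketch of that tweaking from dom0, assuming the per-device backend
threads show up in ps; the grep pattern and the PID placeholders are only
illustrative since thread names vary with the kernel version, and the
priority classes are only honoured when dom0 itself runs the cfq elevator:

    #!/bin/sh
    # Find candidate block-backend kernel threads in dom0:
    ps -eo pid,comm | grep -i blk

    # Give the backend thread of a low-priority domU the "idle" class:
    ionice -c3 -p <pid-of-backend-thread>

    # Or keep it best-effort (class 2) but at the lowest level (0-7):
    ionice -c2 -n7 -p <pid-of-backend-thread>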
Gerd,

Just an FYI: I'm tracking -unstable these days due to problems with xend
hanging when too many requests are made to it... Side note: because of
this I'm using the CFQ scheduler, and there are no more complaints from
users regarding I/O wait and no panics related to CFQ (actually no panics
at all; now we just need to ditch xend!).

Gerd Hoffmann wrote:
> Anand wrote:
>
>> Just had a thought: since those changes are in the kernel files, can't we
>> just compile the kernel off -unstable and then use that for the domU? That
>> way the disk I/O features would be available in any version.
>
> No, it's in the backend driver, thus you have to replace the dom0 kernel.
> There are no protocol changes, so you can just drop in the new kernel and
> everything should work.
>
>> Can any developer, or someone else, confirm whether this is OK? And how do
>> we verify that the I/O scheduler is working: from the host, or inside the
>> domU?
>
> With the new dom0 kernel you'll have one kernel thread per virtual block
> device. This helps the I/O scheduler (which uses the PID to decide how to
> group/queue requests) to do a better job. It also allows you to tweak the
> priorities using ionice (comes with recent util-linux versions).
>
> cheers,
>
> Gerd
On 1/19/06, Gerd Hoffmann <kraxel@suse.de> wrote:
> No, it's in the backend driver, thus you have to replace the dom0 kernel.
> There are no protocol changes, so you can just drop in the new kernel and
> everything should work.

Actually, that's what I meant in my post. Anyway, I was mixed up with too
many things when I posted, so it could have sounded confusing ;)

> With the new dom0 kernel you'll have one kernel thread per virtual block
> device. This helps the I/O scheduler (which uses the PID to decide how to
> group/queue requests) to do a better job. It also allows you to tweak the
> priorities using ionice (comes with recent util-linux versions).

Is there any way I can actually check how this is happening, and verify
that it is working properly? I am working only on a devel box, so putting
an actual load inside the domUs is not possible right now.

--
regards,
Anand
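One low-tech way to check it without a production load: start the same
sequential read in two domUs at the same time, give their backend threads
different ionice classes in dom0, and compare how long each takes. A
sketch under those assumptions, with placeholder PIDs and an illustrative
device name; the classes only take effect when dom0's elevator is cfq:

    # In dom0: find the backend threads (names vary by kernel version)
    ps -eo pid,comm | grep -i blk

    # Favour domU A's backend, de-prioritise domU B's:
    ionice -c2 -n0 -p <pid-for-domU-A>
    ionice -c3     -p <pid-for-domU-B>

    # In each domU, run the same read at the same time and compare the times:
    dd if=/dev/sda1 of=/dev/null bs=1M count=512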