Adi Kriegisch
2011-Feb-23 13:26 UTC
[Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
Dear all,

I investigated a serious performance drop between Dom0 and DomU with LVM on top
of RAID6 and blkback devices. While I have around 130MB/s write performance in
Dom0, I only get 30MB/s in DomU. Inspecting this with dstat/iostat revealed a
read rate of about 17-25MB/s while writing at around 40MB/s. The reading only
occurs on the disk devices assembled into the RAID6, not on the md device
itself, so it is related to RAID6 activity only. The reason for this is parity
recalculation (read-modify-write) due to a too small optimal_io_size:

On Dom0:
blockdev --getiomin /dev/space/test
524288 (which is the chunk size)
blockdev --getioopt /dev/space/test
3145728 (which is 6 * chunk size)

On DomU:
blockdev --getiomin /dev/xvdb1
512
blockdev --getioopt /dev/xvdb1
0 (so the kernel will use 1MB by default, IIRC)

minimum_io_size -- if not set -- is the hardware block size, which seems to be
set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
/dev/space/test gives 4096 on Dom0 while DomU reports 512.

I can somewhat mitigate the issue by using a much smaller chunk size, but that
is IMHO just working around the problem.

Is this a bug or a regression? Or does this happen to anyone using RAID6
(and probably RAID5 as well) and no one noticed the drop until now?
Is there any way to work around this issue?

Thanks,
Adi Kriegisch

PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen 4.0.1-2.
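For anyone reproducing the comparison above, the same topology values can also
be read from sysfs. This is only a sketch: it assumes the device names from the
message, and the md array name md0 is an assumption (the array is not named in
the thread).

    # Dom0: topology the md array exports (md0 is an assumed name)
    cat /sys/block/md0/queue/minimum_io_size
    cat /sys/block/md0/queue/optimal_io_size
    cat /sys/block/md0/queue/physical_block_size

    # DomU: what blkfront exposes for the attached disk (xvdb as above)
    cat /sys/block/xvdb/queue/minimum_io_size
    cat /sys/block/xvdb/queue/optimal_io_size
    cat /sys/block/xvdb/queue/physical_block_size

If blkfront propagated the backend topology, the DomU values would match the
Dom0 ones instead of falling back to 512 and 0.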
Pasi Kärkkäinen
2011-Mar-23 15:50 UTC
Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
On Wed, Feb 23, 2011 at 02:26:41PM +0100, Adi Kriegisch wrote:
> Dear all,
>
> I investigated a serious performance drop between Dom0 and DomU with
> LVM on top of RAID6 and blkback devices.
> While I have around 130MB/s write performance in Dom0, I only get 30MB/s in
> DomU. Inspecting this with dstat/iostat revealed a read rate of
> about 17-25MB/s while writing at around 40MB/s.
> The reading only occurs on the disk devices assembled into the RAID6, not on
> the md device itself, so it is related to RAID6 activity only.
> The reason for this is parity recalculation (read-modify-write) due to a too
> small optimal_io_size:
> On Dom0:
> blockdev --getiomin /dev/space/test
> 524288 (which is the chunk size)
> blockdev --getioopt /dev/space/test
> 3145728 (which is 6 * chunk size)
>
> On DomU:
> blockdev --getiomin /dev/xvdb1
> 512
> blockdev --getioopt /dev/xvdb1
> 0 (so the kernel will use 1MB by default, IIRC)
>
> minimum_io_size -- if not set -- is the hardware block size, which seems to be
> set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
> /dev/space/test gives 4096 on Dom0 while DomU reports 512.
>
> I can somewhat mitigate the issue by using a much smaller chunk size, but that
> is IMHO just working around the problem.
>
> Is this a bug or a regression? Or does this happen to anyone using RAID6
> (and probably RAID5 as well) and no one noticed the drop until now?
> Is there any way to work around this issue?
>
> Thanks,
> Adi Kriegisch
>
> PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen 4.0.1-2.
>

Hello,

Did you find more info about this issue?

-- Pasi
Adi Kriegisch
2011-Mar-24 13:15 UTC
Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
Dear Pasi,

I am still investigating this... (and I also wrote a bug report about it
which is still waiting for an update).

> > I investigated a serious performance drop between Dom0 and DomU with
> > LVM on top of RAID6 and blkback devices.
[SNIP]
> > minimum_io_size -- if not set -- is the hardware block size, which seems to be
> > set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
> > /dev/space/test gives 4096 on Dom0 while DomU reports 512.
I recompiled the kernel with those values hardcoded. It had no direct
impact on the benchmark results, so this assumption was wrong.

> > I can somewhat mitigate the issue by using a much smaller chunk size, but that
> > is IMHO just working around the problem.
Using a smaller chunk size indeed helps to improve write speeds, but read
speeds then get worse.
Making benchmarks with different chunk sizes and different kernels is quite
time consuming; therefore I have not provided an update on that yet.

> > Is this a bug or a regression? Or does this happen to anyone using RAID6
> > (and probably RAID5 as well) and no one noticed the drop until now?
I'd be really glad if someone who is using RAID5 or RAID6 in Dom0 could
provide some numbers on this.

Perhaps this is related to the weak hardware I am using: this machine is
an Atom D525 with 4 (hyperthreaded) cores. Maybe the issue is related to
in-order/out-of-order execution or something like that?

> Did you find more info about this issue?
To sum it up: no, not yet! ;-)

Thanks for asking,
Adi
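One way to quantify the misalignment independently of the benchmarks is to
compare the array's full-stripe size with what the DomU reports. A sketch,
assuming the array is /dev/md0 (the array name is not given in the thread) and
the device paths from the first message:

    # full RAID6 stripe = chunk_size * (raid_disks - 2); md reports chunk_size in bytes
    chunk=$(cat /sys/block/md0/md/chunk_size)
    disks=$(cat /sys/block/md0/md/raid_disks)
    echo "full stripe: $((chunk * (disks - 2))) bytes"

    # should match the optimal_io_size reported on the LV in Dom0 ...
    blockdev --getioopt /dev/space/test
    # ... while the DomU still reports 0 for the corresponding xvd device
    blockdev --getioopt /dev/xvdb1

Writes smaller than or unaligned to this stripe force md into read-modify-write
cycles, which would explain the extra reads seen on the member disks.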
Pasi Kärkkäinen
2011-Mar-24 13:30 UTC
Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
On Thu, Mar 24, 2011 at 02:15:01PM +0100, Adi Kriegisch wrote:
> Dear Pasi,
>
> I am still investigating this... (and I also wrote a bug report about it
> which is still waiting for an update).
>

If you submitted it to the Xen bugzilla, it's also worth sending an email to
xen-devel -- bugs are mostly handled/discussed on the xen-devel mailing list.

Thanks for the update,

-- Pasi

> > > I investigated a serious performance drop between Dom0 and DomU with
> > > LVM on top of RAID6 and blkback devices.
> [SNIP]
> > > minimum_io_size -- if not set -- is the hardware block size, which seems to be
> > > set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
> > > /dev/space/test gives 4096 on Dom0 while DomU reports 512.
> I recompiled the kernel with those values hardcoded. It had no direct
> impact on the benchmark results, so this assumption was wrong.
>
> > > I can somewhat mitigate the issue by using a much smaller chunk size, but that
> > > is IMHO just working around the problem.
> Using a smaller chunk size indeed helps to improve write speeds, but read
> speeds then get worse.
> Making benchmarks with different chunk sizes and different kernels is quite
> time consuming; therefore I have not provided an update on that yet.
>
> > > Is this a bug or a regression? Or does this happen to anyone using RAID6
> > > (and probably RAID5 as well) and no one noticed the drop until now?
> I'd be really glad if someone who is using RAID5 or RAID6 in Dom0 could
> provide some numbers on this.
>
> Perhaps this is related to the weak hardware I am using: this machine is
> an Atom D525 with 4 (hyperthreaded) cores. Maybe the issue is related to
> in-order/out-of-order execution or something like that?
>
> > Did you find more info about this issue?
> To sum it up: no, not yet! ;-)
>
> Thanks for asking,
> Adi