Sylvain Munaut
2013-Apr-23 13:33 UTC
IO speed limited by size of IO request (for RBD driver)
Hi,

I was observing a pretty severe performance impact when using a Xen VM with RBD (Ceph) backed storage, especially when doing large sequential access. I think I finally found a major cause for it: even large user-space requests seem to be split into small requests of 11 * 4096 bytes (44k).

This is caused by:

blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
blk_queue_max_segment_size(rq, PAGE_SIZE);

What's the impact of modifying those? I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST limit, but why limit the segment size to PAGE_SIZE?

Cheers,

Sylvain
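For reference, the two calls quoted above are standard Linux block-layer limit setters. The fragment below is a minimal sketch (not the actual xen-blkfront source) of their combined effect: at most 11 segments per request, each capped at one page, which gives exactly the 11 * 4096 = 45056 bytes (44k) ceiling observed.

    /*
     * Sketch only: how the quoted queue limits cap a single request.
     * BLKIF_MAX_SEGMENTS_PER_REQUEST is 11 in the classic blkif protocol,
     * and each scatter-gather segment may not exceed one 4 KiB page.
     */
    #include <linux/blkdev.h>

    #define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

    static void blkfront_setup_queue_limits(struct request_queue *rq)
    {
            /* No more than 11 segments per ring request... */
            blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
            /* ...and no segment larger than one page (4096 bytes here). */
            blk_queue_max_segment_size(rq, PAGE_SIZE);
            /* Net effect: 11 * 4096 = 45056 bytes (44k) per request; anything
             * larger is split by the block layer before reaching the ring. */
    }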
Steven Haigh
2013-Apr-23 13:41 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 23/04/2013 11:33 PM, Sylvain Munaut wrote:
> Hi,
>
> I was observing a pretty severe performance impact when using Xen VM
> with RBD (Ceph) backed storage, especially when doing large sequential
> access.
>
> And I think I finally found a major cause for it: even large user
> space requests seem to be split into small requests of 11 * 4096
> bytes. ( 44k )
>
> This is caused by :
>
> blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
> blk_queue_max_segment_size(rq, PAGE_SIZE);
>
> What's the impact of modifying those ?
> I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST
> limit, but why limit segment size to page_size ?

I'm seeing the same as you - see the thread in the archives over the last few weeks - subject "RE: Xen disk write slowness in kernel 3.8.x"

I get ~50MB/sec max write speeds due to probably the same problem.

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Roger Pau Monné
2013-Apr-23 14:06 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 23/04/13 15:41, Steven Haigh wrote:
> On 23/04/2013 11:33 PM, Sylvain Munaut wrote:
>> Hi,
>>
>> I was observing a pretty severe performance impact when using Xen VM
>> with RBD (Ceph) backed storage, especially when doing large sequential
>> access.
>>
>> And I think I finally found a major cause for it: even large user
>> space requests seem to be split into small requests of 11 * 4096
>> bytes. ( 44k )
>>
>> This is caused by :
>>
>> blk_queue_max_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
>> blk_queue_max_segment_size(rq, PAGE_SIZE);
>>
>> What's the impact of modifying those ?
>> I've seen some justification for the BLKIF_MAX_SEGMENTS_PER_REQUEST
>> limit, but why limit segment size to page_size ?
>
> I'm seeing the same as you - see the thread in the archives over the
> last few weeks - subject "RE: Xen disk write slowness in kernel 3.8.x"
>
> I get ~50MB/sec max write speeds due to probably the same problem.

When using Ceph, are you using the Linux kernel backend (blkback), Qemu or blktap?

I've been working on expanding the number of segments that a request can hold, and the patches just went upstream for the next Linux kernel (3.10). You might want to test them; they can be found in the following git repo:

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git

branch for-jens-3.10

You will need to use them in both DomU and Dom0 in order to get more segments in a request.
Sylvain Munaut
2013-Apr-23 14:15 UTC
Re: IO speed limited by size of IO request (for RBD driver)
Hi,

> When using Ceph, are you using the Linux kernel backend (blkback), Qemu
> or blktap?

I've tried both:

- Using the RBD kernel driver in dom0 and using phy:/dev/rbd/xxx/xxx in the vm config
- Using a custom blktap driver

and I'm observing the same level of performance with both. It's while debugging that custom blktap driver, trying to find the bottleneck, that I stumbled on the 44k limit.

> I've been working on expanding the number of segments that a request can
> hold, and the patches just went upstream for the next Linux kernel
> (3.10), you might want to test them, they can be found in the following
> git repo:

Oh, indeed, that's really interesting, I'll give it a shot. I'm not sure how realistic it'll be for me to update all the VM kernels to something that recent, but at least I'll confirm the problem.

Cheers,

Sylvain
Sylvain Munaut
2013-Apr-25 13:00 UTC
Re: IO speed limited by size of IO request (for RBD driver)
Hi,

> I've been working on expanding the number of segments that a request can
> hold, and the patches just went upstream for the next Linux kernel
> (3.10), you might want to test them, they can be found in the following
> git repo:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>
> branch for-jens-3.10
>
> You will need to use them in both DomU and Dom0 in order to get more
> segments in a request.

I read about this change on the ML and in the code - won't it also require changes to the blktap kernel driver and to userspace to make use of the improvements?

Cheers,

Sylvain
Steven Haigh
2013-Apr-26 14:16 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
> On 23/04/13 21:05, Steven Haigh wrote:
>> Sorry - resending this to Felipe as well - as I started talking to him
>> directly previously.
>>
>> Felipe, to bring you up to date, I've copied over the blkback files from
>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and
>> tested. Results below:

Bringing this into context in a nutshell - results showed about 5MB/sec improvement when using buffered disk access - totalling ~57MB/sec write speed vs ~98MB/sec when using the oflag=direct flag to dd.

When talking about backporting a few indirect patches to mainline blkback (3.8.8 atm):

>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>> I think it requires a non-trivial amount of work, what you could do as a
>>> test is directly replace the affected files with the ones in my tree, it
>>> is not optimal, but I don't think it's going to cause problems, and you
>>> could at least see if indirect descriptors solve your problem.
>>
>> Ok, I copied across those files, built, packaged and installed them on
>> my Dom0. Good news is that its a little quicker, bad news is not by much.
>
> Could you try increasing xen_blkif_max_segments variable in
> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only
> need to recompile the DomU kernel after this, the Dom0 is able to
> support up to 256 indirect segments.

I'll have to look at this. All DomUs are Scientific Linux 6.4 systems - so essentially RHEL 6.4 and so on. I haven't built a RH kernel as yet - so I'll have to look at what is involved. It might be as simple as rebuilding a normal SRPM.

> Also, I think we should bring this conversation back to xen-devel.

Agreed.

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Steven Haigh
2013-Apr-27 01:57 UTC
Re: IO speed limited by size of IO request (for RBD driver)
Roger Pau Monné
2013-Apr-27 07:06 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 27/04/13 03:57, Steven Haigh wrote:> On 27/04/2013 12:16 AM, Steven Haigh wrote: >> On 27/04/2013 12:06 AM, Roger Pau Monné wrote: >>> On 23/04/13 21:05, Steven Haigh wrote: >>>> Sorry - resending this to Felipe as well - as I started talking to him >>>> directly previously. >>>> >>>> Felipe, to bring you up to date, I''ve copied over the blkback files from >>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and >>>> tested. Results below: >>>> >> >> Bringing this into context in a nutshell - results showed about 5MB/sec >> improvement when using buffered disk access - totalling ~57MB/sec write >> speed vs ~98MB/sec when using the oflag=direct flag to dd. >> >> When talking about back porting a few indirect patches to mainline >> blkback (3.8.8 atm): >>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote: >>>>> I think it requires a non-trivial amount of work, what you could do >>>>> as a >>>>> test is directly replace the affected files with the ones in my >>>>> tree, it >>>>> is not optimal, but I don''t think it''s going to cause problems, and you >>>>> could at least see if indirect descriptors solve your problem. >>>> >>>> Ok, I copied across those files, built, packaged and installed them on >>>> my Dom0. Good news is that its a little quicker, bad news is not by >>>> much. >>> >>> Could you try increasing xen_blkif_max_segments variable in >>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only >>> need to recompile the DomU kernel after this, the Dom0 is able to >>> support up to 256 indirect segments. >> >> I''ll have to look at this. All DomU''s are Scientific Linux 6.4 systems - >> so essentially RHEL6.4 and so on. I haven''t built a RH kernel as yet - >> so I''ll have to look at what is involved. It might be as simple as >> rebuilding a normal SRPM. > > Ok, I''ve had a look at the RH xen-blkfront.c - and I can''t see any > definition of xen_blkif_max_segments - or anything close. I''ve attached > the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 srpm. > > Any ideas on where to go from here?I thought you were using the 3.8.x kernel inside the DomU also, if you are not using it, then it''s normal that there''s no speed difference, you have a Dom0 kernel that supports indirect descriptors, but your DomU doesn''t. You must use a kernel that supports indirect descriptors in both Dom0 and DomU in order to make use of this feature.
Steven Haigh
2013-Apr-27 07:51 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 27/04/2013 5:06 PM, Roger Pau Monné wrote:> On 27/04/13 03:57, Steven Haigh wrote: >> On 27/04/2013 12:16 AM, Steven Haigh wrote: >>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote: >>>> On 23/04/13 21:05, Steven Haigh wrote: >>>>> Sorry - resending this to Felipe as well - as I started talking to him >>>>> directly previously. >>>>> >>>>> Felipe, to bring you up to date, I''ve copied over the blkback files from >>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and >>>>> tested. Results below: >>>>> >>> >>> Bringing this into context in a nutshell - results showed about 5MB/sec >>> improvement when using buffered disk access - totalling ~57MB/sec write >>> speed vs ~98MB/sec when using the oflag=direct flag to dd. >>> >>> When talking about back porting a few indirect patches to mainline >>> blkback (3.8.8 atm): >>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote: >>>>>> I think it requires a non-trivial amount of work, what you could do >>>>>> as a >>>>>> test is directly replace the affected files with the ones in my >>>>>> tree, it >>>>>> is not optimal, but I don''t think it''s going to cause problems, and you >>>>>> could at least see if indirect descriptors solve your problem. >>>>> >>>>> Ok, I copied across those files, built, packaged and installed them on >>>>> my Dom0. Good news is that its a little quicker, bad news is not by >>>>> much. >>>> >>>> Could you try increasing xen_blkif_max_segments variable in >>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only >>>> need to recompile the DomU kernel after this, the Dom0 is able to >>>> support up to 256 indirect segments. >>> >>> I''ll have to look at this. All DomU''s are Scientific Linux 6.4 systems - >>> so essentially RHEL6.4 and so on. I haven''t built a RH kernel as yet - >>> so I''ll have to look at what is involved. It might be as simple as >>> rebuilding a normal SRPM. >> >> Ok, I''ve had a look at the RH xen-blkfront.c - and I can''t see any >> definition of xen_blkif_max_segments - or anything close. I''ve attached >> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 srpm. >> >> Any ideas on where to go from here? > > I thought you were using the 3.8.x kernel inside the DomU also, if you > are not using it, then it''s normal that there''s no speed difference, you > have a Dom0 kernel that supports indirect descriptors, but your DomU > doesn''t. You must use a kernel that supports indirect descriptors in > both Dom0 and DomU in order to make use of this feature.Ahhh - sorry - I should have been clearer - The Dom0 is kernel 3.8.x (3.8.8 right now) - however the DomUs are all stock EL6 kernels. Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU. I''ll do some more experimentation and see if I can get it working properly as a DomU kernel. -- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Steven Haigh
2013-Apr-27 08:35 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 27/04/2013 5:51 PM, Steven Haigh wrote:> On 27/04/2013 5:06 PM, Roger Pau Monné wrote: >> On 27/04/13 03:57, Steven Haigh wrote: >>> On 27/04/2013 12:16 AM, Steven Haigh wrote: >>>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote: >>>>> On 23/04/13 21:05, Steven Haigh wrote: >>>>>> Sorry - resending this to Felipe as well - as I started talking to >>>>>> him >>>>>> directly previously. >>>>>> >>>>>> Felipe, to bring you up to date, I''ve copied over the blkback >>>>>> files from >>>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and >>>>>> tested. Results below: >>>>>> >>>> >>>> Bringing this into context in a nutshell - results showed about 5MB/sec >>>> improvement when using buffered disk access - totalling ~57MB/sec write >>>> speed vs ~98MB/sec when using the oflag=direct flag to dd. >>>> >>>> When talking about back porting a few indirect patches to mainline >>>> blkback (3.8.8 atm): >>>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote: >>>>>>> I think it requires a non-trivial amount of work, what you could do >>>>>>> as a >>>>>>> test is directly replace the affected files with the ones in my >>>>>>> tree, it >>>>>>> is not optimal, but I don''t think it''s going to cause problems, >>>>>>> and you >>>>>>> could at least see if indirect descriptors solve your problem. >>>>>> >>>>>> Ok, I copied across those files, built, packaged and installed >>>>>> them on >>>>>> my Dom0. Good news is that its a little quicker, bad news is not by >>>>>> much. >>>>> >>>>> Could you try increasing xen_blkif_max_segments variable in >>>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only >>>>> need to recompile the DomU kernel after this, the Dom0 is able to >>>>> support up to 256 indirect segments. >>>> >>>> I''ll have to look at this. All DomU''s are Scientific Linux 6.4 >>>> systems - >>>> so essentially RHEL6.4 and so on. I haven''t built a RH kernel as yet - >>>> so I''ll have to look at what is involved. It might be as simple as >>>> rebuilding a normal SRPM. >>> >>> Ok, I''ve had a look at the RH xen-blkfront.c - and I can''t see any >>> definition of xen_blkif_max_segments - or anything close. I''ve attached >>> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 >>> srpm. >>> >>> Any ideas on where to go from here? >> >> I thought you were using the 3.8.x kernel inside the DomU also, if you >> are not using it, then it''s normal that there''s no speed difference, you >> have a Dom0 kernel that supports indirect descriptors, but your DomU >> doesn''t. You must use a kernel that supports indirect descriptors in >> both Dom0 and DomU in order to make use of this feature. > > Ahhh - sorry - I should have been clearer - The Dom0 is kernel 3.8.x > (3.8.8 right now) - however the DomUs are all stock EL6 kernels. > > Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU. > I''ll do some more experimentation and see if I can get it working > properly as a DomU kernel.Ok - now for testing the basics. 
Same tests using vanilla 3.8.8 kernel: # dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 37.1206 s, 57.9 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 2.65 0.00 0.22 97.13 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdf 395.81 2201.32 27.59 156.95 1.65 9.21 120.60 1.13 6.12 1.19 22.05 sde 404.86 2208.83 28.04 157.40 1.69 9.24 120.77 1.32 7.15 1.31 24.24 sdc 435.54 2174.83 30.68 155.63 1.82 9.10 120.09 0.97 5.20 1.11 20.64 sdd 388.96 2177.26 26.71 155.41 1.62 9.11 120.74 1.10 6.01 1.30 23.60 md2 0.00 0.00 0.00 537.31 0.00 17.59 67.05 0.00 0.00 0.00 0.00 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 25.3928 s, 84.6 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.22 0.00 15.74 0.00 0.22 83.81 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdf 336.81 12659.65 10.86 232.59 1.36 50.36 435.06 1.00 4.09 2.51 61.06 sde 1684.04 11000.22 54.32 189.14 6.79 43.71 424.80 1.45 5.95 3.50 85.28 sdc 144.35 11177.61 4.66 238.80 0.58 44.60 380.04 0.41 1.70 1.07 26.08 sdd 20.62 12876.50 0.67 242.79 0.08 51.25 431.80 0.45 1.84 1.15 27.92 md2 0.00 0.00 0.00 2680.71 0.00 86.47 66.06 0.00 0.00 0.00 0.00 Installed and rebooted into the patched version I build by just copying the blkback files across to the 3.8.8 tree and building: # dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 45.2376 s, 47.5 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 1.35 0.00 0.45 98.19 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 1340.80 5806.80 158.20 674.40 5.83 25.27 76.51 6.00 7.16 0.80 66.90 sde 1334.60 5894.00 160.80 686.40 5.86 25.71 76.32 6.87 8.11 0.87 73.52 sdc 1330.80 5858.20 158.00 682.40 5.86 25.60 76.67 5.71 6.81 0.77 64.84 sdf 1341.00 5848.80 157.00 681.20 5.83 25.49 76.53 6.23 7.38 0.85 70.92 md2 0.00 0.00 0.00 1431.40 0.00 46.83 67.01 0.00 0.00 0.00 0.00 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 38.9052 s, 55.2 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 5.27 0.00 0.32 94.41 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 493.20 8481.60 36.80 335.80 2.07 34.45 200.73 1.14 3.07 0.97 36.32 sde 1371.60 7380.20 83.80 304.80 5.66 30.08 188.34 2.20 5.65 1.94 75.38 sdc 540.20 7556.80 56.00 326.20 2.33 30.80 177.52 1.49 3.90 1.26 48.02 sdf 734.20 8286.60 64.40 326.20 3.12 33.67 192.92 1.66 4.24 1.45 56.66 md2 0.00 0.00 0.00 1835.20 0.00 59.20 66.06 0.00 0.00 0.00 0.00 That is with the same kernel running on both Dom0 and DomU. In the dmesg of the DomU, I see the following: blkfront: xvdb: flush diskcache: enabled using persistent grants -- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Roger Pau Monné
2013-Apr-29 08:38 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 27/04/13 10:35, Steven Haigh wrote:> On 27/04/2013 5:51 PM, Steven Haigh wrote: >> On 27/04/2013 5:06 PM, Roger Pau Monné wrote: >>> On 27/04/13 03:57, Steven Haigh wrote: >>>> On 27/04/2013 12:16 AM, Steven Haigh wrote: >>>>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote: >>>>>> On 23/04/13 21:05, Steven Haigh wrote: >>>>>>> Sorry - resending this to Felipe as well - as I started talking to >>>>>>> him >>>>>>> directly previously. >>>>>>> >>>>>>> Felipe, to bring you up to date, I''ve copied over the blkback >>>>>>> files from >>>>>>> Rogers indirect kernel over the vanilla 3.8.8 kernel files, built and >>>>>>> tested. Results below: >>>>>>> >>>>> >>>>> Bringing this into context in a nutshell - results showed about 5MB/sec >>>>> improvement when using buffered disk access - totalling ~57MB/sec write >>>>> speed vs ~98MB/sec when using the oflag=direct flag to dd. >>>>> >>>>> When talking about back porting a few indirect patches to mainline >>>>> blkback (3.8.8 atm): >>>>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote: >>>>>>>> I think it requires a non-trivial amount of work, what you could do >>>>>>>> as a >>>>>>>> test is directly replace the affected files with the ones in my >>>>>>>> tree, it >>>>>>>> is not optimal, but I don''t think it''s going to cause problems, >>>>>>>> and you >>>>>>>> could at least see if indirect descriptors solve your problem. >>>>>>> >>>>>>> Ok, I copied across those files, built, packaged and installed >>>>>>> them on >>>>>>> my Dom0. Good news is that its a little quicker, bad news is not by >>>>>>> much. >>>>>> >>>>>> Could you try increasing xen_blkif_max_segments variable in >>>>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will only >>>>>> need to recompile the DomU kernel after this, the Dom0 is able to >>>>>> support up to 256 indirect segments. >>>>> >>>>> I''ll have to look at this. All DomU''s are Scientific Linux 6.4 >>>>> systems - >>>>> so essentially RHEL6.4 and so on. I haven''t built a RH kernel as yet - >>>>> so I''ll have to look at what is involved. It might be as simple as >>>>> rebuilding a normal SRPM. >>>> >>>> Ok, I''ve had a look at the RH xen-blkfront.c - and I can''t see any >>>> definition of xen_blkif_max_segments - or anything close. I''ve attached >>>> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6 >>>> srpm. >>>> >>>> Any ideas on where to go from here? >>> >>> I thought you were using the 3.8.x kernel inside the DomU also, if you >>> are not using it, then it''s normal that there''s no speed difference, you >>> have a Dom0 kernel that supports indirect descriptors, but your DomU >>> doesn''t. You must use a kernel that supports indirect descriptors in >>> both Dom0 and DomU in order to make use of this feature. >> >> Ahhh - sorry - I should have been clearer - The Dom0 is kernel 3.8.x >> (3.8.8 right now) - however the DomUs are all stock EL6 kernels. >> >> Hmmmm - I believe the kernel I build for Dom0 *should* work as a DomU. >> I''ll do some more experimentation and see if I can get it working >> properly as a DomU kernel. > > Ok - now for testing the basics. 
> > Same tests using vanilla 3.8.8 kernel: > # dd if=/dev/zero of=output.zero bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 37.1206 s, 57.9 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.00 0.00 2.65 0.00 0.22 97.13 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdf 395.81 2201.32 27.59 156.95 1.65 9.21 > 120.60 1.13 6.12 1.19 22.05 > sde 404.86 2208.83 28.04 157.40 1.69 9.24 > 120.77 1.32 7.15 1.31 24.24 > sdc 435.54 2174.83 30.68 155.63 1.82 9.10 > 120.09 0.97 5.20 1.11 20.64 > sdd 388.96 2177.26 26.71 155.41 1.62 9.11 > 120.74 1.10 6.01 1.30 23.60 > md2 0.00 0.00 0.00 537.31 0.00 17.59 > 67.05 0.00 0.00 0.00 0.00 > > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 25.3928 s, 84.6 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.22 0.00 15.74 0.00 0.22 83.81 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdf 336.81 12659.65 10.86 232.59 1.36 50.36 > 435.06 1.00 4.09 2.51 61.06 > sde 1684.04 11000.22 54.32 189.14 6.79 43.71 > 424.80 1.45 5.95 3.50 85.28 > sdc 144.35 11177.61 4.66 238.80 0.58 44.60 > 380.04 0.41 1.70 1.07 26.08 > sdd 20.62 12876.50 0.67 242.79 0.08 51.25 > 431.80 0.45 1.84 1.15 27.92 > md2 0.00 0.00 0.00 2680.71 0.00 86.47 > 66.06 0.00 0.00 0.00 0.00 > > Installed and rebooted into the patched version I build by just copying > the blkback files across to the 3.8.8 tree and building:Did you also copy xen-blkfront?> > # dd if=/dev/zero of=output.zero bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 45.2376 s, 47.5 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.00 0.00 1.35 0.00 0.45 98.19 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 1340.80 5806.80 158.20 674.40 5.83 25.27 > 76.51 6.00 7.16 0.80 66.90 > sde 1334.60 5894.00 160.80 686.40 5.86 25.71 > 76.32 6.87 8.11 0.87 73.52 > sdc 1330.80 5858.20 158.00 682.40 5.86 25.60 > 76.67 5.71 6.81 0.77 64.84 > sdf 1341.00 5848.80 157.00 681.20 5.83 25.49 > 76.53 6.23 7.38 0.85 70.92 > md2 0.00 0.00 0.00 1431.40 0.00 46.83 > 67.01 0.00 0.00 0.00 0.00 > > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 38.9052 s, 55.2 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.00 0.00 5.27 0.00 0.32 94.41 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 493.20 8481.60 36.80 335.80 2.07 34.45 > 200.73 1.14 3.07 0.97 36.32 > sde 1371.60 7380.20 83.80 304.80 5.66 30.08 > 188.34 2.20 5.65 1.94 75.38 > sdc 540.20 7556.80 56.00 326.20 2.33 30.80 > 177.52 1.49 3.90 1.26 48.02 > sdf 734.20 8286.60 64.40 326.20 3.12 33.67 > 192.92 1.66 4.24 1.45 56.66 > md2 0.00 0.00 0.00 1835.20 0.00 59.20 > 66.06 0.00 0.00 0.00 0.00 > > That is with the same kernel running on both Dom0 and DomU. > > In the dmesg of the DomU, I see the following: > blkfront: xvdb: flush diskcache: enabled using persistent grantsIt seems you are missing some pieces, you should see something like: blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
Steven Haigh
2013-Apr-29 19:26 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 29/04/2013 6:38 PM, Roger Pau Monné wrote:> Did you also copy xen-blkfront?Dammit! No, no I didn''t. I tried to just copy this back over to the 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors - so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7.> It seems you are missing some pieces, you should see something like: > > blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; > indirect descriptors: enabled;Now I''m running 3.8.0-rc7 from your git on both DomU and Dom0. In the DomU, I now see: blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled; From what you say, this should be what I''d expect. From the DomU: # dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.23 0.00 9.61 0.00 0.46 89.70 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 1071.40 7914.87 67.28 479.18 4.40 32.64 138.82 17.45 31.65 2.00 109.36 sde 1151.72 7943.71 68.65 486.73 4.79 33.20 140.10 13.18 23.87 1.93 107.14 sdc 1123.34 7921.05 66.36 482.84 4.66 32.86 139.89 8.80 15.96 1.86 102.31 sdf 1091.53 7937.30 70.02 483.30 4.54 32.97 138.84 18.98 34.31 1.98 109.45 md2 0.00 0.00 0.00 1003.66 0.00 65.31 133.27 0.00 0.00 0.00 0.00 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.22 0.00 10.94 0.00 0.22 88.62 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 20.35 13703.72 1.75 258.64 0.10 54.54 429.75 0.47 1.81 1.13 29.34 sde 1858.64 11655.36 61.05 199.56 7.51 46.36 423.36 1.54 5.89 3.27 85.27 sdc 142.45 11824.07 5.47 254.70 0.59 47.18 376.03 0.42 1.61 1.02 26.59 sdf 332.39 13489.72 11.38 248.80 1.35 53.72 433.47 1.06 4.10 2.50 65.16 md2 0.00 0.00 3.72 733.48 0.06 91.68 254.86 0.00 0.00 0.00 0.00 -- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Steven Haigh
2013-Apr-29 19:47 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 30/04/2013 5:26 AM, Steven Haigh wrote:> On 29/04/2013 6:38 PM, Roger Pau Monné wrote: >> Did you also copy xen-blkfront? > > Dammit! No, no I didn''t. I tried to just copy this back over to the > 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors - > so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7. > >> It seems you are missing some pieces, you should see something like: >> >> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; >> indirect descriptors: enabled; > > Now I''m running 3.8.0-rc7 from your git on both DomU and Dom0. In the > DomU, I now see: > > blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; > indirect descriptors: enabled; > > From what you say, this should be what I''d expect. > > From the DomU: > # dd if=/dev/zero of=output.zero bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.23 0.00 9.61 0.00 0.46 89.70 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 1071.40 7914.87 67.28 479.18 4.40 32.64 > 138.82 17.45 31.65 2.00 109.36 > sde 1151.72 7943.71 68.65 486.73 4.79 33.20 > 140.10 13.18 23.87 1.93 107.14 > sdc 1123.34 7921.05 66.36 482.84 4.66 32.86 > 139.89 8.80 15.96 1.86 102.31 > sdf 1091.53 7937.30 70.02 483.30 4.54 32.97 > 138.84 18.98 34.31 1.98 109.45 > md2 0.00 0.00 0.00 1003.66 0.00 65.31 > 133.27 0.00 0.00 0.00 0.00 > > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.22 0.00 10.94 0.00 0.22 88.62 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 20.35 13703.72 1.75 258.64 0.10 54.54 > 429.75 0.47 1.81 1.13 29.34 > sde 1858.64 11655.36 61.05 199.56 7.51 46.36 > 423.36 1.54 5.89 3.27 85.27 > sdc 142.45 11824.07 5.47 254.70 0.59 47.18 > 376.03 0.42 1.61 1.02 26.59 > sdf 332.39 13489.72 11.38 248.80 1.35 53.72 > 433.47 1.06 4.10 2.50 65.16 > md2 0.00 0.00 3.72 733.48 0.06 91.68 > 254.86 0.00 0.00 0.00 0.00I just thought - I should probably include a baseline by mounting the same LV in the Dom0 and doing the exact same tests. 
# dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 23.18 76.60 0.22 0.00 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 139.07 14785.43 11.92 286.98 0.59 58.88 407.50 2.60 8.71 1.84 54.92 sde 83.44 14846.58 8.39 292.05 0.36 59.09 405.23 4.12 13.69 2.56 76.84 sdc 98.23 14828.04 9.93 289.18 0.42 58.84 405.73 2.55 8.45 1.75 52.43 sdf 77.04 14816.78 8.61 289.40 0.33 58.96 407.51 3.89 13.05 2.52 75.14 md2 0.00 0.00 0.00 973.51 0.00 116.72 245.55 0.00 0.00 0.00 0.00 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 12.22 87.58 0.21 0.00 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 32.09 12310.14 1.04 291.10 0.13 49.22 345.99 0.48 1.66 0.91 26.71 sde 1225.88 9931.88 39.54 224.84 4.94 39.70 345.81 1.20 4.53 2.44 64.55 sdc 19.25 11116.15 0.62 266.05 0.08 44.46 342.06 0.41 1.53 0.86 22.94 sdf 1206.63 11122.77 38.92 253.21 4.87 44.51 346.17 1.39 4.78 2.46 71.97 md2 0.00 0.00 0.00 634.37 0.00 79.30 256.00 0.00 0.00 0.00 0.00 This is running the same kernel - 3.8.0-rc7 from your git. And also for the sake completeness, the Dom0 grub.conf: title Scientific Linux (3.8.0-1.el6xen.x86_64) root (hd0,0) kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7 i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1 module /initramfs-3.8.0-1.el6xen.x86_64.img and the DomU config: # cat /etc/xen/zeus.vm name = "zeus.vm" memory = 1024 vcpus = 2 cpus = "1-3" disk = [ ''phy:/dev/vg_raid1/zeus.vm,xvda,w'' , ''phy:/dev/md2,xvdb,w'' ] vif = [ "mac=02:16:36:35:35:09, bridge=br203, vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ] bootloader = "pygrub" on_poweroff = ''destroy'' on_reboot = ''restart'' on_crash = ''restart'' All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the DomU. # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0] 3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU] -- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Felipe Franciosi
2013-Apr-30 10:07 UTC
Re: IO speed limited by size of IO request (for RBD driver)
I noticed you copied your results from "dd", but I didn''t see any conclusions drawn from experiment. Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT? domU: # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s dom0: # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn''t perform well on Xen''s PV protocol. Cheers, Felipe -----Original Message----- From: Steven Haigh [mailto:netwiz@crc.id.au] Sent: 29 April 2013 20:48 To: Roger Pau Monne Cc: Felipe Franciosi; xen-devel@lists.xen.org Subject: Re: IO speed limited by size of IO request (for RBD driver) On 30/04/2013 5:26 AM, Steven Haigh wrote:> On 29/04/2013 6:38 PM, Roger Pau Monné wrote: >> Did you also copy xen-blkfront? > > Dammit! No, no I didn''t. I tried to just copy this back over to the > 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors > - so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7. > >> It seems you are missing some pieces, you should see something like: >> >> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; >> indirect descriptors: enabled; > > Now I''m running 3.8.0-rc7 from your git on both DomU and Dom0. In the > DomU, I now see: > > blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; > indirect descriptors: enabled; > > From what you say, this should be what I''d expect. > > From the DomU: > # dd if=/dev/zero of=output.zero bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.23 0.00 9.61 0.00 0.46 89.70 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 1071.40 7914.87 67.28 479.18 4.40 32.64 > 138.82 17.45 31.65 2.00 109.36 > sde 1151.72 7943.71 68.65 486.73 4.79 33.20 > 140.10 13.18 23.87 1.93 107.14 > sdc 1123.34 7921.05 66.36 482.84 4.66 32.86 > 139.89 8.80 15.96 1.86 102.31 > sdf 1091.53 7937.30 70.02 483.30 4.54 32.97 > 138.84 18.98 34.31 1.98 109.45 > md2 0.00 0.00 0.00 1003.66 0.00 65.31 > 133.27 0.00 0.00 0.00 0.00 > > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.22 0.00 10.94 0.00 0.22 88.62 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 20.35 13703.72 1.75 258.64 0.10 54.54 > 429.75 0.47 1.81 1.13 29.34 > sde 1858.64 11655.36 61.05 199.56 7.51 46.36 > 423.36 1.54 5.89 3.27 85.27 > sdc 142.45 11824.07 5.47 254.70 0.59 47.18 > 376.03 0.42 1.61 1.02 26.59 > sdf 332.39 13489.72 11.38 248.80 1.35 53.72 > 433.47 1.06 4.10 2.50 65.16 > md2 0.00 0.00 3.72 733.48 0.06 91.68 > 254.86 0.00 0.00 0.00 0.00I just thought - I should probably include a baseline by mounting the same LV in the Dom0 and doing the exact same tests. 
# dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 23.18 76.60 0.22 0.00 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 139.07 14785.43 11.92 286.98 0.59 58.88 407.50 2.60 8.71 1.84 54.92 sde 83.44 14846.58 8.39 292.05 0.36 59.09 405.23 4.12 13.69 2.56 76.84 sdc 98.23 14828.04 9.93 289.18 0.42 58.84 405.73 2.55 8.45 1.75 52.43 sdf 77.04 14816.78 8.61 289.40 0.33 58.96 407.51 3.89 13.05 2.52 75.14 md2 0.00 0.00 0.00 973.51 0.00 116.72 245.55 0.00 0.00 0.00 0.00 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 12.22 87.58 0.21 0.00 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdd 32.09 12310.14 1.04 291.10 0.13 49.22 345.99 0.48 1.66 0.91 26.71 sde 1225.88 9931.88 39.54 224.84 4.94 39.70 345.81 1.20 4.53 2.44 64.55 sdc 19.25 11116.15 0.62 266.05 0.08 44.46 342.06 0.41 1.53 0.86 22.94 sdf 1206.63 11122.77 38.92 253.21 4.87 44.51 346.17 1.39 4.78 2.46 71.97 md2 0.00 0.00 0.00 634.37 0.00 79.30 256.00 0.00 0.00 0.00 0.00 This is running the same kernel - 3.8.0-rc7 from your git. And also for the sake completeness, the Dom0 grub.conf: title Scientific Linux (3.8.0-1.el6xen.x86_64) root (hd0,0) kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7 i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1 module /initramfs-3.8.0-1.el6xen.x86_64.img and the DomU config: # cat /etc/xen/zeus.vm name = "zeus.vm" memory = 1024 vcpus = 2 cpus = "1-3" disk = [ ''phy:/dev/vg_raid1/zeus.vm,xvda,w'' , ''phy:/dev/md2,xvdb,w'' ] vif = [ "mac=02:16:36:35:35:09, bridge=br203, vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ] bootloader = "pygrub" on_poweroff = ''destroy'' on_reboot = ''restart'' on_crash = ''restart'' All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the DomU. # cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0] 3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU] -- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Steven Haigh
2013-Apr-30 10:38 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 30/04/2013 8:07 PM, Felipe Franciosi wrote:> I noticed you copied your results from "dd", but I didn''t see any conclusions drawn from experiment. > > Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT? > > domU: > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s > > dom0: > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s > > > I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn''t perform well on Xen''s PV protocol.Exactly right. The direct write speeds are close enough to native to be perfect. The problem is when *not* using direct mode. You can see from the results when I don''t pass the oflag=direct to dd, the write speeds are 112MB/sec from the Dom0 and 65MB/sec from the DomU. As just about every other method to write on the DomU doesn''t use direct, this becomes the normal write speeds.> > Cheers, > Felipe > > -----Original Message----- > From: Steven Haigh [mailto:netwiz@crc.id.au] > Sent: 29 April 2013 20:48 > To: Roger Pau Monne > Cc: Felipe Franciosi; xen-devel@lists.xen.org > Subject: Re: IO speed limited by size of IO request (for RBD driver) > > On 30/04/2013 5:26 AM, Steven Haigh wrote: >> On 29/04/2013 6:38 PM, Roger Pau Monné wrote: >>> Did you also copy xen-blkfront? >> >> Dammit! No, no I didn''t. I tried to just copy this back over to the >> 3.8.8 and 3.8.10 kernel versions, but it came up with too many errors >> - so I just rebuilt/packages the checkout of your git based on 3.8.0-rc7. >> >>> It seems you are missing some pieces, you should see something like: >>> >>> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; >>> indirect descriptors: enabled; >> >> Now I''m running 3.8.0-rc7 from your git on both DomU and Dom0. In the >> DomU, I now see: >> >> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; >> indirect descriptors: enabled; >> >> From what you say, this should be what I''d expect. 
>> >> From the DomU: >> # dd if=/dev/zero of=output.zero bs=1M count=2048 >> 2048+0 records in >> 2048+0 records out >> 2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s >> >> avg-cpu: %user %nice %system %iowait %steal %idle >> 0.23 0.00 9.61 0.00 0.46 89.70 >> >> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s >> avgrq-sz avgqu-sz await svctm %util >> sdd 1071.40 7914.87 67.28 479.18 4.40 32.64 >> 138.82 17.45 31.65 2.00 109.36 >> sde 1151.72 7943.71 68.65 486.73 4.79 33.20 >> 140.10 13.18 23.87 1.93 107.14 >> sdc 1123.34 7921.05 66.36 482.84 4.66 32.86 >> 139.89 8.80 15.96 1.86 102.31 >> sdf 1091.53 7937.30 70.02 483.30 4.54 32.97 >> 138.84 18.98 34.31 1.98 109.45 >> md2 0.00 0.00 0.00 1003.66 0.00 65.31 >> 133.27 0.00 0.00 0.00 0.00 >> >> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct >> 2048+0 records in >> 2048+0 records out >> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s >> >> avg-cpu: %user %nice %system %iowait %steal %idle >> 0.22 0.00 10.94 0.00 0.22 88.62 >> >> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s >> avgrq-sz avgqu-sz await svctm %util >> sdd 20.35 13703.72 1.75 258.64 0.10 54.54 >> 429.75 0.47 1.81 1.13 29.34 >> sde 1858.64 11655.36 61.05 199.56 7.51 46.36 >> 423.36 1.54 5.89 3.27 85.27 >> sdc 142.45 11824.07 5.47 254.70 0.59 47.18 >> 376.03 0.42 1.61 1.02 26.59 >> sdf 332.39 13489.72 11.38 248.80 1.35 53.72 >> 433.47 1.06 4.10 2.50 65.16 >> md2 0.00 0.00 3.72 733.48 0.06 91.68 >> 254.86 0.00 0.00 0.00 0.00 > > I just thought - I should probably include a baseline by mounting the same LV in the Dom0 and doing the exact same tests. > > # dd if=/dev/zero of=output.zero bs=1M count=2048 > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.00 0.00 23.18 76.60 0.22 0.00 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 139.07 14785.43 11.92 286.98 0.59 58.88 > 407.50 2.60 8.71 1.84 54.92 > sde 83.44 14846.58 8.39 292.05 0.36 59.09 > 405.23 4.12 13.69 2.56 76.84 > sdc 98.23 14828.04 9.93 289.18 0.42 58.84 > 405.73 2.55 8.45 1.75 52.43 > sdf 77.04 14816.78 8.61 289.40 0.33 58.96 > 407.51 3.89 13.05 2.52 75.14 > md2 0.00 0.00 0.00 973.51 0.00 116.72 > 245.55 0.00 0.00 0.00 0.00 > > # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct > 2048+0 records in > 2048+0 records out > 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s > > avg-cpu: %user %nice %system %iowait %steal %idle > 0.00 0.00 12.22 87.58 0.21 0.00 > > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s > avgrq-sz avgqu-sz await svctm %util > sdd 32.09 12310.14 1.04 291.10 0.13 49.22 > 345.99 0.48 1.66 0.91 26.71 > sde 1225.88 9931.88 39.54 224.84 4.94 39.70 > 345.81 1.20 4.53 2.44 64.55 > sdc 19.25 11116.15 0.62 266.05 0.08 44.46 > 342.06 0.41 1.53 0.86 22.94 > sdf 1206.63 11122.77 38.92 253.21 4.87 44.51 > 346.17 1.39 4.78 2.46 71.97 > md2 0.00 0.00 0.00 634.37 0.00 79.30 > 256.00 0.00 0.00 0.00 0.00 > > This is running the same kernel - 3.8.0-rc7 from your git. 
> > And also for the sake completeness, the Dom0 grub.conf: > title Scientific Linux (3.8.0-1.el6xen.x86_64) > root (hd0,0) > kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin > module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM > LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7 > i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1 > module /initramfs-3.8.0-1.el6xen.x86_64.img > > and the DomU config: > # cat /etc/xen/zeus.vm > name = "zeus.vm" > memory = 1024 > vcpus = 2 > cpus = "1-3" > disk = [ ''phy:/dev/vg_raid1/zeus.vm,xvda,w'' , > ''phy:/dev/md2,xvdb,w'' ] > vif = [ "mac=02:16:36:35:35:09, bridge=br203, > vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ] > bootloader = "pygrub" > > on_poweroff = ''destroy'' > on_reboot = ''restart'' > on_crash = ''restart'' > > All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the DomU. > # cat /proc/mdstat > Personalities : [raid1] [raid6] [raid5] [raid4] > md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0] > 3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU] > > -- > Steven Haigh > > Email: netwiz@crc.id.au > Web: https://www.crc.id.au > Phone: (03) 9001 6090 - 0412 935 897 > Fax: (03) 8338 0299 >-- Steven Haigh Email: netwiz@crc.id.au Web: https://www.crc.id.au Phone: (03) 9001 6090 - 0412 935 897 Fax: (03) 8338 0299
Steven Haigh
2013-May-08 08:20 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>
> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>
> domU:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>
> dom0:
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>
> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.

Just wondering if there is any further input on this... While DIRECT writes are as good as can be expected, NON-DIRECT writes in certain cases (specifically with a mdadm raid in the Dom0) are affected by about a 50% loss in throughput...

The hard part is that this is the default mode of writing!

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Roger Pau Monné
2013-May-08 08:33 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 08/05/13 10:20, Steven Haigh wrote:
> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>
>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>
>> domU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> dom0:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>
>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>
> Just wondering if there is any further input on this... While DIRECT
> writes are as good as can be expected, NON-DIRECT writes in certain
> cases (specifically with a mdadm raid in the Dom0) are affected by about
> a 50% loss in throughput...
>
> The hard part is that this is the default mode of writing!

As another test with indirect descriptors, could you change xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default), recompile the DomU kernel and see if that helps?

Thanks, Roger.
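As a rough illustration of the suggested experiment (the exact declaration in the for-jens-3.10 tree may differ, so treat this as a sketch rather than a patch), the change is a one-line bump of the frontend's segment limit before rebuilding the DomU kernel:

    /* xen-blkfront.c, sketch only: raise the number of segments the
     * frontend negotiates per indirect request. With 4 KiB segments,
     * 32 segments allow 128 KiB per request and 128 segments allow
     * 512 KiB; blkback in this tree accepts up to 256. */
    static unsigned int xen_blkif_max_segments = 128;	/* default is 32 */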
Steven Haigh
2013-May-08 08:47 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 8/05/2013 6:33 PM, Roger Pau Monné wrote:> On 08/05/13 10:20, Steven Haigh wrote: >> On 30/04/2013 8:07 PM, Felipe Franciosi wrote: >>> I noticed you copied your results from "dd", but I didn''t see any conclusions drawn from experiment. >>> >>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT? >>> >>> domU: >>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct >>> 2048+0 records in >>> 2048+0 records out >>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s >>> >>> dom0: >>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct >>> 2048+0 records in >>> 2048+0 records out >>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s >>> >>> >>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn''t perform well on Xen''s PV protocol. >> >> Just wondering if there is any further input on this... While DIRECT >> writes are as good as can be expected, NON-DIRECT writes in certain >> cases (specifically with a mdadm raid in the Dom0) are affected by about >> a 50% loss in throughput... >> >> The hard part is that this is the default mode of writing! > > As another test with indirect descriptors, could you change > xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default), > recompile the DomU kernel and see if that helps?Ok, I''ll get onto this... One thing I thought I''d try - as the RAID6 is only assembled in the Dom0 then passed to the DomU as /dev/md2 - I wondered what would happen if I passed all the member drives directly to the DomU and let the DomU take care of the RAID6 info... So - I changed the DomU config as such: disk = [ ''phy:/dev/vg_raid1/zeus.vm,xvda,w'' , ''phy:/dev/sdc,xvdc,w'' , ''phy:/dev/sdd,xvdd,w'' , ''phy:/dev/sde,xvde,w'' , ''phy:/dev/sdf,xvdf,w'' ] I then assembled the RAID6 on the DomU using mdadm: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md127 : active raid6 xvdf[1] xvde[0] xvdd[5] xvdc[4] 3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU] # dd if=/dev/zero of=output.zero bs=1M count=2048 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 35.4581 s, 60.6 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.54 0.00 11.76 0.00 0.68 87.03 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdf 0.00 0.00 16.89 2832.70 0.44 36.42 26.49 17.46 6.12 0.36 103.82 sdc 0.00 0.00 14.73 2876.49 0.39 36.36 26.03 19.57 6.77 0.38 108.50 sde 0.00 0.00 20.68 2692.70 0.50 36.40 27.85 17.97 6.62 0.40 109.07 sdd 0.00 0.00 11.76 2846.22 0.35 36.36 26.30 19.36 6.76 0.37 106.14 # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct 2048+0 records in 2048+0 records out 2147483648 bytes (2.1 GB) copied, 53.4774 s, 40.2 MB/s avg-cpu: %user %nice %system %iowait %steal %idle 0.49 0.00 14.64 0.00 0.62 84.26 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sdf 0.00 0.00 614.88 1382.90 5.08 21.85 27.61 10.12 5.07 0.39 77.70 sdc 0.00 0.00 16.73 2800.86 0.09 26.46 19.30 13.51 4.79 0.28 77.64 sde 0.00 0.00 25.95 2762.24 0.19 21.76 16.12 3.04 1.09 0.12 32.76 sdb 0.00 0.00 0.00 1.97 0.00 0.01 5.75 0.01 7.00 6.63 1.30 sdd 0.00 0.00 6.03 2831.61 0.02 26.62 19.23 14.11 5.01 0.28 80.58 Interesting that doing this destroys the direct writing - however doesn''t seem to affect the non-direct. 
(As a side note, this is using the stock EL6 kernel as the DomU and vanilla 3.8.10 as the Dom0.)

Will do the other research now...

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Steven Haigh
2013-May-08 10:32 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> On 08/05/13 10:20, Steven Haigh wrote:
>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>> I noticed you copied your results from "dd", but I didn't see any conclusions drawn from experiment.
>>>
>>> Did I understand it wrong or now you have comparable performance on dom0 and domU when using DIRECT?
>>>
>>> domU:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>
>>> dom0:
>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>> 2048+0 records in
>>> 2048+0 records out
>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>
>>> I think that if the performance differs when NOT using DIRECT, the issue must be related to the way your guest is flushing the cache. This must be generating a workload that doesn't perform well on Xen's PV protocol.
>>
>> Just wondering if there is any further input on this... While DIRECT
>> writes are as good as can be expected, NON-DIRECT writes in certain
>> cases (specifically with a mdadm raid in the Dom0) are affected by about
>> a 50% loss in throughput...
>>
>> The hard part is that this is the default mode of writing!
>
> As another test with indirect descriptors, could you change
> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
> recompile the DomU kernel and see if that helps?

Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2 is running on both the Dom0 and DomU.

# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.34    0.00   17.10    0.00    0.23   82.33

Device:  rrqm/s    wrqm/s    r/s      w/s   rMB/s   wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
sdd      980.97  11936.47  53.11   429.78    4.00   48.77    223.81     12.75  26.10   2.11  101.79
sdc      872.71  11957.87  45.98   435.67    3.55   49.30    224.71     13.77  28.43   2.11  101.49
sde      949.26  11981.88  51.30   429.33    3.91   48.90    225.03     21.29  43.91   2.27  109.08
sdf      915.52  11968.52  48.58   428.88    3.73   48.92    225.84     21.44  44.68   2.27  108.56
md2        0.00      0.00   0.00  1155.61    0.00   97.51    172.80      0.00   0.00   0.00    0.00

# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.11    0.00   13.92    0.00    0.22   85.75

Device:  rrqm/s    wrqm/s    r/s      w/s   rMB/s   wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
sdd        0.00  13986.08   0.00   263.20    0.00   55.76    433.87      0.43   1.63   1.07   28.27
sdc      202.10  13741.55   6.52   256.57    0.81   54.77    432.65      0.50   1.88   1.25   32.78
sde       47.96  11437.57   1.55   261.77    0.19   45.79    357.63      0.80   3.02   1.85   48.60
sdf     2233.37  11756.13  71.93   191.38    8.99   46.80    433.90      1.49   5.66   3.27   86.15
md2        0.00      0.00   0.00   731.93    0.00   91.49    256.00      0.00   0.00   0.00    0.00

Now this is pretty much exactly what I would expect the system to do.... ~96MB/sec buffered, and 85MB/sec direct.

So - it turns out that xen_blkif_max_segments at 32 is a killer in the DomU. Now it makes me wonder what we can do about this in kernels that don't have your series of patches against it? And also about the backend stuff in 3.8.x etc?

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Roger Pau Monné
2013-May-08 10:45 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 08/05/13 12:32, Steven Haigh wrote:
> On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
>> On 08/05/13 10:20, Steven Haigh wrote:
>>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>>> I noticed you copied your results from "dd", but I didn't see any
>>>> conclusions drawn from experiment.
>>>>
>>>> Did I understand it wrong or now you have comparable performance on
>>>> dom0 and domU when using DIRECT?
>>>>
>>>> domU:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>>>
>>>> dom0:
>>>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>>>> 2048+0 records in
>>>> 2048+0 records out
>>>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>>
>>>> I think that if the performance differs when NOT using DIRECT, the issue
>>>> must be related to the way your guest is flushing the cache. This must be
>>>> generating a workload that doesn't perform well on Xen's PV protocol.
>>>
>>> Just wondering if there is any further input on this... While DIRECT
>>> writes are as good as can be expected, NON-DIRECT writes in certain
>>> cases (specifically with a mdadm raid in the Dom0) are affected by about
>>> a 50% loss in throughput...
>>>
>>> The hard part is that this is the default mode of writing!
>>
>> As another test with indirect descriptors, could you change
>> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
>> recompile the DomU kernel and see if that helps?
>
> Ok, here we go.... compiled as 3.8.0-2 with the above change. 3.8.0-2 is
> running on both the Dom0 and DomU.
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 22.1703 s, 96.9 MB/s
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.34   0.00    17.10     0.00    0.23  82.33
>
> Device:  rrqm/s   wrqm/s    r/s    w/s      rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
> sdd      980.97  11936.47  53.11   429.78   4.00   48.77    223.81     12.75  26.10   2.11  101.79
> sdc      872.71  11957.87  45.98   435.67   3.55   49.30    224.71     13.77  28.43   2.11  101.49
> sde      949.26  11981.88  51.30   429.33   3.91   48.90    225.03     21.29  43.91   2.27  109.08
> sdf      915.52  11968.52  48.58   428.88   3.73   48.92    225.84     21.44  44.68   2.27  108.56
> md2        0.00      0.00   0.00  1155.61   0.00   97.51    172.80      0.00   0.00   0.00    0.00
>
> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
> 2048+0 records in
> 2048+0 records out
> 2147483648 bytes (2.1 GB) copied, 25.3708 s, 84.6 MB/s
>
> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            0.11   0.00    13.92     0.00    0.22  85.75
>
> Device:  rrqm/s   wrqm/s    r/s    w/s     rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
> sdd        0.00  13986.08   0.00  263.20   0.00   55.76    433.87      0.43   1.63   1.07  28.27
> sdc      202.10  13741.55   6.52  256.57   0.81   54.77    432.65      0.50   1.88   1.25  32.78
> sde       47.96  11437.57   1.55  261.77   0.19   45.79    357.63      0.80   3.02   1.85  48.60
> sdf     2233.37  11756.13  71.93  191.38   8.99   46.80    433.90      1.49   5.66   3.27  86.15
> md2        0.00      0.00   0.00  731.93   0.00   91.49    256.00      0.00   0.00   0.00   0.00
>
> Now this is pretty much exactly what I would expect the system to do....
> ~96MB/sec buffered, and 85MB/sec direct.

I'm sorry to be such a PITA, but could you also try with 64? If we have
to increase the maximum number of indirect descriptors I would like to
set it to the lowest value that provides good performance to prevent
using too much memory.

> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
> DomU. Now it makes me wonder what we can do about this in kernels that
> don't have your series of patches against it? And also about the backend
> stuff in 3.8.x etc?

There isn't much we can do regarding kernels without indirect
descriptors, there's no easy way to increase the number of segments in a
request.
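For anyone wanting to reproduce the test, the change Roger is asking for
amounts to a one-line edit in the DomU kernel tree followed by a rebuild.
A minimal sketch, assuming the indirect-descriptor series declares
xen_blkif_max_segments with a default of 32 in drivers/block/xen-blkfront.c
(the exact declaration may differ in the for-jens-3.10 branch):

  # Sketch only: bump the front-end segment limit from 32 to 64 and rebuild.
  # Assumes the patched source contains "xen_blkif_max_segments = 32;".
  sed -i 's/xen_blkif_max_segments = 32;/xen_blkif_max_segments = 64;/' \
      drivers/block/xen-blkfront.c
  make -j"$(nproc)" && make modules_install && make install

The same edit with 128 instead of 64 gives the other data point discussed
in the thread.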
Felipe Franciosi
2013-May-08 11:14 UTC
Re: IO speed limited by size of IO request (for RBD driver)
Although we didn't "prove" it properly, I think it is worth mentioning that
this boils down to what we originally thought it was: Steven's environment
is writing to a filesystem in the guest. On top of that, it's using the
guest's buffer cache to do the writes.

This means that we cannot (easily?) control how the cache and the fs are
flushing these writes through blkfront/blkback.

In other words, it's very likely that it generates a workload that simply
doesn't perform well on the "stock" PV protocol. This is a good example of
how indirect descriptors help (remembering Roger and I were struggling to
find use cases where indirect descriptors showed a substantial gain).

Cheers,
Felipe

-----Original Message-----
From: Roger Pau Monne
Sent: 08 May 2013 11:45
To: Steven Haigh
Cc: Felipe Franciosi; xen-devel@lists.xen.org
Subject: Re: IO speed limited by size of IO request (for RBD driver)

[...]
Steven Haigh
2013-May-08 12:56 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 05/08/2013 08:45 PM, Roger Pau Monné wrote:
> On 08/05/13 12:32, Steven Haigh wrote:
[...]
>> Now this is pretty much exactly what I would expect the system to do....
>> ~96MB/sec buffered, and 85MB/sec direct.
>
> I'm sorry to be such a PITA, but could you also try with 64? If we have
> to increase the maximum number of indirect descriptors I would like to
> set it to the lowest value that provides good performance to prevent
> using too much memory.

Compiled with 64:

# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.2078 s, 85.2 MB/s

# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 22.0265 s, 97.5 MB/s

>> So - it turns out that xen_blkif_max_segments at 32 is a killer in the
>> DomU. Now it makes me wonder what we can do about this in kernels that
>> don't have your series of patches against it? And also about the backend
>> stuff in 3.8.x etc?
>
> There isn't much we can do regarding kernels without indirect
> descriptors, there's no easy way to increase the number of segments in a
> request.

I wonder if this is something that could go into the vanilla 3.9 kernel -
then maybe we can get the vendors (RH etc) to backport it into their EL6
kernels... I'm happy to hassle the vendors if we can move forward on
getting the newer indirect stuff in there. As far as I'm concerned it's
worth its weight in gold.

--
Steven Haigh

Email: netwiz@crc.id.au
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
Konrad Rzeszutek Wilk
2013-May-22 20:13 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
> Although we didn't "prove" it properly, I think it is worth mentioning
> that this boils down to what we originally thought it was: Steven's
> environment is writing to a filesystem in the guest. On top of that,
> it's using the guest's buffer cache to do the writes.

If he is using O_DIRECT it bypasses the cache in the guest.

> This means that we cannot (easily?) control how the cache and the fs are
> flushing these writes through blkfront/blkback.
>
> In other words, it's very likely that it generates a workload that simply
> doesn't perform well on the "stock" PV protocol.
> This is a good example of how indirect descriptors help (remembering
> Roger and I were struggling to find use cases where indirect descriptors
> showed a substantial gain).
>
> Cheers,
> Felipe
[...]
Felipe Franciosi
2013-May-23 07:22 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On 22 May 2013, at 21:13, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
> On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
>> Steven's environment is writing to a filesystem in the guest. On top of
>> that, it's using the guest's buffer cache to do the writes.
>
> If he is using O_DIRECT it bypasses the cache in the guest.

Certainly, but the issues were when _not_ using O_DIRECT.

F

[...]
Konrad Rzeszutek Wilk
2013-May-24 14:29 UTC
Re: IO speed limited by size of IO request (for RBD driver)
On Thu, May 23, 2013 at 07:22:27AM +0000, Felipe Franciosi wrote:
> On 22 May 2013, at 21:13, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
>> On Wed, May 08, 2013 at 11:14:26AM +0000, Felipe Franciosi wrote:
>>> Steven's environment is writing to a filesystem in the guest. On top of
>>> that, it's using the guest's buffer cache to do the writes.
>>
>> If he is using O_DIRECT it bypasses the cache in the guest.
>
> Certainly, but the issues were when _not_ using O_DIRECT.

I am confused. Is the feature-indirect-descriptor making it worse or
better when !O_DIRECT? Or is there no difference when using !O_DIRECT
with the feature-indirect-descriptor?

>>> This means that we cannot (easily?) control how the cache and the fs are
>>> flushing these writes through blkfront/blkback.

echo 3 > /proc/..something/drop_cache does it?

>>> In other words, it's very likely that it generates a workload that simply
>>> doesn't perform well on the "stock" PV protocol.

'fio' is an excellent tool to run the tests without using the cache.

>>> This is a good example of how indirect descriptors help (remembering
>>> Roger and I were struggling to find use cases where indirect descriptors
>>> showed a substantial gain).

You mean using O_DIRECT? Yes, all tests that involve any I/O should use
O_DIRECT; otherwise they are misleading. And my understanding from this
thread is that Steven did that and found that:

a) without the feature-indirect-descriptor - the I/O was sucky
b) with the initial feature-indirect-descriptor - the I/O was less sucky
c) with the feature-indirect-descriptor and a tweak to the frontend of how
   many segments to use - the I/O was the same as on baremetal.

Sorry about being so verbose here - I feel that I am missing something
and I am not exactly sure what it is. Could you please enlighten me?

[...]
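For reference, the kind of run Konrad is suggesting - flushing the guest's
page cache between tests and driving the writes with fio in direct mode -
could look something like the sketch below. The drop_caches path and the
fio options are the standard ones, shown here as an illustration rather
than quoted from the thread:

  # Flush the guest's page cache so a buffered run starts cold
  # (the usual procfs interface; Konrad's message leaves the exact path open).
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # Sequential 1M writes bypassing the cache (direct=1), comparable to the
  # dd oflag=direct runs earlier in the thread.
  fio --name=seqwrite --filename=output.zero --rw=write --bs=1M \
      --size=2G --direct=1 --ioengine=libaio --iodepth=4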