Quentin Colombet via llvm-dev
2017-May-31 15:07 UTC
[llvm-dev] [GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
Hi Kristof,

Thanks for the updated numbers.

> On May 31, 2017, at 7:45 AM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>
>> On 31 May 2017, at 15:33, Diana Picus via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Cool test :)
>> It seems to work fine now, I don't see any new failures. IIUC, Kristof is also giving it another run.
>>
>> Cheers,
>> Diana
>
> Latest comparisons on my side, after picking up r304244, i.e. the correct Localizer pass.
> * CTMark compile time, comparing "-O0 -g" vs '-O0 -g -mllvm -global-isel=true -mllvm -global-isel-abort=0': about 6% increase with globalisel. This was about 3.5% before the Localizer pass landed.

That one is surprising too. I wouldn't have expected this pass to show up in the compile time profile, at least not to this extent.
What is the biggest offender?

> * My usual performance benchmarking run: 8.5% slow-down. This was about 9.5% before the Localizer pass landed, so a slight improvement.
> * Code size: 3.14% larger. This was about 2.8% before the Localizer pass landed, so a slight regression.

That one is surprising. Do you have an idea of what is happening?
Alternatively, if you can point me to the biggest offender, I can have a look.
The only thing I can think of is that we duplicate constants that are expensive to materialize. If that's the case, we were discussing with Ahmed an alternative to the localizer pass that would operate during InstructionSelect, so that may be worth pursuing.

> * Debug info quality: I didn't do another recheck, trusting that the Localizer pass wouldn't change debug info quality.
> * Stack size usage: I don't know of a good way to measure this, but Diana's experiments show that at least for bootstrapping it went from "problematically bad" to "OK".
>
> Thanks,

Thanks,
-Quentin

> Kristof
>
>> On 30 May 2017 at 22:57, Quentin Colombet <qcolombet at apple.com> wrote:
>> Hi Diana,
>>
>> I've actually gone ahead and pushed the fix as I was able to produce a small reproducer.
>>
>> This is r304244.
>>
>> Let me know if you encounter any other problem.
>>
>> Cheers,
>> -Quentin
>>> On May 30, 2017, at 7:42 AM, Quentin Colombet via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>>
>>> Thanks Diana.
>>>
>>> That is indeed the assumption in the code and this is obviously wrong.
>>>
>>> Could you try the attached patch?
>>>
>>> (I haven't even tried to compile it though.)
>>>
>>> Cheers,
>>> -Quentin
>>> <localizer_tentative_fix.diff>
>>>> On May 30, 2017, at 6:56 AM, Diana Picus <diana.picus at linaro.org> wrote:
>>>>
>>>> Hi Quentin,
>>>>
>>>> I've attached a reproducer for the problem.
>>>>
>>>> I've described what I think the problem is in the file, but the short
>>>> version is that the localizer shouldn't assume that the iteration
>>>> order of the uses corresponds to the logical order of instructions in
>>>> a basic block (we're now localizing before the first use that we find,
>>>> but that may be later in the basic block, so we'd end up with uses
>>>> before the def).
>>>>
>>>> I'm not sure it's possible to test this without running a couple of
>>>> passes. You might be able to trigger it only with reg bank select +
>>>> localize, but I haven't tried. Using only the localizer would mean
>>>> that the iteration order of the uses would be the order in which
>>>> they're read in, so you wouldn't have this problem.
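A minimal C++ sketch of the ordering-aware insertion-point choice described above: scan the basic block in program order instead of trusting use-list iteration order. This is not the in-tree Localizer code; the helper name and the `LocalUsers` set are assumptions for illustration.

```cpp
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"

using namespace llvm;

// Return an iterator before which a localized definition can safely be
// inserted: the first user of the value *in block order*, not the first
// user returned by use-list iteration (which may sit later in the block
// and would leave earlier uses without a dominating def).
static MachineBasicBlock::iterator
findLocalInsertionPoint(MachineBasicBlock &MBB,
                        const SmallPtrSetImpl<MachineInstr *> &LocalUsers) {
  for (MachineInstr &MI : MBB)
    if (LocalUsers.count(&MI))
      return MI.getIterator(); // Earliest use in program order.
  // No user found in this block; fall back to the terminator (sketch-only
  // assumption; a real pass would not reach this point for a using block).
  return MBB.getFirstTerminator();
}
```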
>>>> Hope that helps,
>>>> Diana
>>>>
>>>> On 29 May 2017 at 10:06, Diana Picus <diana.picus at linaro.org> wrote:
>>>>> Thanks Quentin, it's in progress now, I'll let you know how it goes.
>>>>>
>>>>> Cheers,
>>>>> Diana
>>>>>
>>>>> On 27 May 2017 at 03:36, Quentin Colombet <qcolombet at apple.com> wrote:
>>>>>> Hi Kristof,
>>>>>>
>>>>>> I've pushed the localizer in r304051 and added it in the AArch64 O0 pipeline
>>>>>> in r304052.
>>>>>>
>>>>>> I let Diana investigate the seg fault she was seeing.
>>>>>>
>>>>>> @Diana, let me know if you need help.
>>>>>>
>>>>>> Cheers,
>>>>>> -Quentin
>>>>>>
>>>>>> On May 25, 2017, at 1:53 PM, Quentin Colombet via llvm-dev
>>>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>>>
>>>>>> Hi Kristof,
>>>>>>
>>>>>> On May 25, 2017, at 2:09 AM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>>>>>>
>>>>>> On 24 May 2017, at 22:01, Quentin Colombet <qcolombet at apple.com> wrote:
>>>>>>
>>>>>> Hi Kristof,
>>>>>>
>>>>>> Thanks for getting back so fast!
>>>>>>
>>>>>> On May 24, 2017, at 12:57 PM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>>>>>>
>>>>>> On 24 May 2017, at 19:31, Quentin Colombet <qcolombet at apple.com> wrote:
>>>>>>
>>>>>> Hi Kristof,
>>>>>>
>>>>>> Thanks for the measurements.
>>>>>>
>>>>>> On May 24, 2017, at 6:00 AM, Kristof Beyls <kristof.beyls at arm.com> wrote:
>>>>>>
>>>>>> - Comparing against -O0 without globalisel but with the above regalloc
>>>>>> options: 5.6% performance drop, 1% code size drop.
>>>>>>
>>>>>> In summary, the measurements indicate some good improvements.
>>>>>> I also haven't measured the impact on compile time.
>>>>>>
>>>>>> Do you have a means to make this measurement?
>>>>>> Ahmed did a bunch of compile time measurements on our side and I wanted to
>>>>>> see if I need to put him on the hook again :).
>>>>>>
>>>>>> I did a quick setup with CTMark (part of the test-suite). I ran each of
>>>>>> * '-O0 -g',
>>>>>> * '-O0 -g -mllvm -global-isel=true -mllvm -global-isel-abort=0', and
>>>>>> * '-O0 -g -mllvm -global-isel=true -mllvm -global-isel-abort=0 -mllvm
>>>>>> -optimize-regalloc -mllvm -regalloc=greedy'
>>>>>> 5 times, cross-compiling from X86 to AArch64, and took the median measured
>>>>>> compile times.
>>>>>> In summary, I see GlobalISel having a compile time that's 3.5% higher than
>>>>>> the current -O0 default.
>>>>>> With enabling the greedy register allocator, this increases to 28%.
>>>>>> 28% is probably too high?
>>>>>>
>>>>>> I think it is, yes.
>>>>>> I have attached a quick hack to the greedy allocator to feature a fast mode.
>>>>>> Could you give it a try?
>>>>>>
>>>>>> To enable the fast mode, please use (-mllvm) -regalloc-greedy-fast=true
>>>>>> (default is false).
>>>>>>
>>>>>> I'm afraid it doesn't seem to save much compile time. On geomean, I see
>>>>>> about a 26% compile time increase against the current -O0 default (compared to
>>>>>> a 28% increase for regalloc greedy without your patch).
>>>>>>
>>>>>> Interesting, I guess a lot of time is spent in the coalescer. Could you give
>>>>>> it a try with -join-liveintervals=false?
>>>>>>
>>>>>> With -join-liveintervals=false added, I see the compile time increase going
>>>>>> up to 28% again.
>>>>>>
>>>>>> Heh, I am mildly surprised we hand many more live ranges to the allocator
>>>>>> when we do that.
>>>>>>
>>>>>> Do you know where the time is spent (-time-passes)?
>>>>>>
>>>>>> I'm afraid I won't have time to have a closer look in the next couple of
>>>>>> days - I don't know where the time is spent at the moment.
>>>>>>
>>>>>> Fair enough, will investigate later.
>>>>>>
>>>>>> Anyhow, fixing all of those is, I think, the right approach, but it will
>>>>>> take time, so we can go with the localizer.
>>>>>>
>>>>>> Right, I don't understand the register allocator well enough to know if that
>>>>>> compile time overhead can be fixed, while still getting the necessary
>>>>>> codegen benefits the greedy allocator gives.
>>>>>> Is there any specific help you're looking for with getting the localizer
>>>>>> to work well enough for production use?
>>>>>>
>>>>>> I'll clean up the WIP patch for the localizer, then you guys can fix the bug
>>>>>> that you found.
>>>>>>
>>>>>> I'll do that tomorrow.
>>>>>>
>>>>>> Cheers,
>>>>>> -Quentin
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Kristof
>>>> <localizer-mo-order.mir>
Kristof Beyls via llvm-dev
2017-Jun-01 14:46 UTC
[llvm-dev] [GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
> On 31 May 2017, at 17:07, Quentin Colombet <qcolombet at apple.com> wrote:
>
>> Latest comparisons on my side, after picking up r304244, i.e. the correct Localizer pass.
>> * CTMark compile time, comparing "-O0 -g" vs '-O0 -g -mllvm -global-isel=true -mllvm -global-isel-abort=0': about 6% increase with globalisel. This was about 3.5% before the Localizer pass landed.
>
> That one is surprising too. I wouldn't have expected this pass to show up in the compile time profile, at least not to this extent.
> What is the biggest offender?

Hmmm. So I took the 3.5% compile time overhead from my last measurement before the localizer landed, from around the 24th of May.
When using -ftime-report, I see the Localizer pass typically taking very roughly about 1% of compile time.
Maybe another part of GlobalISel became a bit slower since I did that 3.5% measurement?
Or maybe the Localizer pass changes the structure of the program so that a later pass gets a different compile time profile?
Basically, I'd have to do more experiments to figure that one out.

As far as where time is spent in the gisel passes themselves, on average I saw the following on the latest CTMark experiment I ran:
Avg compile time spent in IRTranslator:      4.61%
Avg compile time spent in InstructionSelect: 7.51%
Avg compile time spent in Legalizer:         1.06%
Avg compile time spent in Localizer:         0.76%
Avg compile time spent in RegBankSelect:     2.12%

>> * My usual performance benchmarking run: 8.5% slow-down. This was about 9.5% before the Localizer pass landed, so a slight improvement.
>> * Code size: 3.14% larger. This was about 2.8% before the Localizer pass landed, so a slight regression.
>
> That one is surprising. Do you have an idea of what is happening?
> Alternatively, if you can point me to the biggest offender, I can have a look.

So the biggest offenders on the mem_bytes metric in LNT are:

                                                            O0 -g   O0 -g gisel-with-localizer   O0 -g gisel-without-localizer       %
SingleSource/Benchmarks/Misc/perlin                         14272   14640                        18344                           25.95%
SingleSource/Benchmarks/Dhrystone/dry                       16560   17144                        20160                           18.21%
SingleSource/Benchmarks/Stanford/QueensProfile              13912   14192                        15136                            6.79%
MultiSource/Benchmarks/Trimaran/netbench-url/netbench-url   71400   72272                        75504                            4.53%

I haven't had time to investigate what exact changes make the code size go up that much with the localizer pass in those cases...

> The only thing I can think of is that we duplicate constants that are expensive to materialize. If that's the case, we were discussing with Ahmed an alternative to the localizer pass that would operate during InstructionSelect, so that may be worth pursuing.
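To make the constant-duplication hypothesis quoted above concrete, here is a minimal C++ sketch of the kind of per-block duplication a localizer-style pass performs. This is not the in-tree Localizer; the function name and the choice of insertion point are assumptions for illustration. If the duplicated definition is a constant that takes several instructions to materialize, every extra copy shows up directly as code size.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"

using namespace llvm;

// Sketch: clone a definition (e.g. a G_CONSTANT) into every other basic block
// that uses it, so each use becomes block-local. Cheap constants fold away
// later; expensive ones remain duplicated in the final code, which is one
// plausible cause of the size regression discussed above.
static void localizeIntoUsers(MachineInstr &Def, MachineRegisterInfo &MRI,
                              MachineFunction &MF) {
  unsigned OldReg = Def.getOperand(0).getReg();

  // Collect the users first so we do not mutate the use list while walking it.
  SmallVector<MachineInstr *, 8> Users;
  for (MachineInstr &UseMI : MRI.use_instructions(OldReg))
    Users.push_back(&UseMI);

  for (MachineInstr *UseMI : Users) {
    MachineBasicBlock *UseMBB = UseMI->getParent();
    if (UseMBB == Def.getParent())
      continue; // Already local to the defining block.

    // One clone per using block; a real pass would reuse a single clone for
    // all uses in the same block (omitted here for brevity).
    MachineInstr *Clone = MF.CloneMachineInstr(&Def);
    unsigned NewReg = MRI.createGenericVirtualRegister(MRI.getType(OldReg));
    Clone->getOperand(0).setReg(NewReg);
    UseMBB->insert(UseMBB->getFirstNonPHI(), Clone);

    // Rewrite this user's operands to the block-local copy.
    for (MachineOperand &MO : UseMI->operands())
      if (MO.isReg() && MO.getReg() == OldReg)
        MO.setReg(NewReg);
  }
}
```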
Quentin Colombet via llvm-dev
2017-Jun-06 17:11 UTC
[llvm-dev] [GlobalISel][AArch64] Toward flipping the switch for O0: Please give it a try!
Thanks Kristof.

Sounds like we'll need to investigate, though I'd say it is not blocking the switch.

At this point I think everybody is on board to flip the switch.

@Eric, how does that sound to you?

Thanks,
Q

> On Jun 1, 2017, at 07:46, Kristof Beyls <Kristof.Beyls at arm.com> wrote:
>
>> On 31 May 2017, at 17:07, Quentin Colombet <qcolombet at apple.com> wrote:
>>>
>>> Latest comparisons on my side, after picking up r304244, i.e. the correct Localizer pass.
>>> * CTMark compile time, comparing "-O0 -g" vs '-O0 -g -mllvm -global-isel=true -mllvm -global-isel-abort=0': about 6% increase with globalisel. This was about 3.5% before the Localizer pass landed.
>>
>> That one is surprising too. I wouldn't have expected this pass to show up in the compile time profile, at least not to this extent.
>> What is the biggest offender?
>
> Hmmm. So I took the 3.5% compile time overhead from my last measurement before the localizer landed, from around the 24th of May.
> When using -ftime-report, I see the Localizer pass typically taking very roughly about 1% of compile time.
> Maybe another part of GlobalISel became a bit slower since I did that 3.5% measurement?
> Or maybe the Localizer pass changes the structure of the program so that a later pass gets a different compile time profile?
> Basically, I'd have to do more experiments to figure that one out.
>
> As far as where time is spent in the gisel passes themselves, on average I saw the following on the latest CTMark experiment I ran:
> Avg compile time spent in IRTranslator:      4.61%
> Avg compile time spent in InstructionSelect: 7.51%
> Avg compile time spent in Legalizer:         1.06%
> Avg compile time spent in Localizer:         0.76%
> Avg compile time spent in RegBankSelect:     2.12%
>
>>> * My usual performance benchmarking run: 8.5% slow-down. This was about 9.5% before the Localizer pass landed, so a slight improvement.
>>> * Code size: 3.14% larger. This was about 2.8% before the Localizer pass landed, so a slight regression.
>>
>> That one is surprising. Do you have an idea of what is happening?
>> Alternatively, if you can point me to the biggest offender, I can have a look.
>
> So the biggest offenders on the mem_bytes metric in LNT are:
>
>                                                             O0 -g   O0 -g gisel-with-localizer   O0 -g gisel-without-localizer       %
> SingleSource/Benchmarks/Misc/perlin                         14272   14640                        18344                           25.95%
> SingleSource/Benchmarks/Dhrystone/dry                       16560   17144                        20160                           18.21%
> SingleSource/Benchmarks/Stanford/QueensProfile              13912   14192                        15136                            6.79%
> MultiSource/Benchmarks/Trimaran/netbench-url/netbench-url   71400   72272                        75504                            4.53%
>
> I haven't had time to investigate what exact changes make the code size go up that much with the localizer pass in those cases...
>
>> The only thing I can think of is that we duplicate constants that are expensive to materialize. If that's the case, we were discussing with Ahmed an alternative to the localizer pass that would operate during InstructionSelect, so that may be worth pursuing.