Charlie Turner via llvm-dev
2015-Nov-09 17:55 UTC
[llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default
I have not. I could feasibly do this, but I'm not set up to perform
good experiments on X86-64 hardware. Furthermore, if I do it for
X86-64, it only seems fair that I should do it for the other backends as
well, which is much less feasible for me. I'm reaching out to the
community to see if there is any objection, based on their own
measurements of this feature, to defaulting it to on.

Please let me know if you think I've got the wrong end of the
etiquette stick here, and if so I'll try to acquire sensible numbers
for other backends.

Kind regards,
Charlie.

On 9 November 2015 at 17:50, Das, Dibyendu <Dibyendu.Das at amd.com> wrote:
> Have you run cpu2006 for x86-64 for perf progression/regression?
>
> Sent from my Windows Phone
> ________________________________
> From: Charlie Turner via llvm-dev
> Sent: 11/9/2015 11:15 PM
> To: llvm-dev at lists.llvm.org
> Subject: [llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default
>
> I've done compile-time experiments for AArch64 over SPEC{2000,2006}
> and of course the test-suite. I measure no significant compile-time
> impact of enabling this feature by default.
>
> I also ran the test-suite on an X86-64 machine. I can't imagine any
> other targets being uniquely affected in terms of compile time by
> turning this on after testing both AArch64 and X86-64. I also timed
> running the regression tests with -slp-vectorize-hor enabled and
> disabled; there was no significant difference here either.
>
> There are no significant performance regressions (or, for that matter,
> improvements) on AArch64 in the nightly test-suite. I do see wins in
> third-party benchmarks when using this flag, which is why I'm asking
> whether there would be any objection from the community to making
> -slp-vectorize-hor default on.
>
> I have run the regression tests and looked through the bug tracker /
> VC logs, and I can't see any reason for not enabling it.
>
> Thanks,
> Charlie.
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
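For readers who haven't used the flag: -slp-vectorize-hor enables the SLP
vectorizer's horizontal-reduction matching, which targets straight-line
reduction trees rather than loop reductions. Below is a minimal,
illustrative sketch of the kind of code it is aimed at and of how such an
internal flag is usually passed through clang; the function and the exact
invocation are assumptions for illustration, not taken from this thread.

  // reduce.cpp -- a straight-line (horizontal) reduction over adjacent loads.
  // With horizontal-reduction matching enabled, the SLP vectorizer can turn
  // the four scalar loads and three scalar adds into one vector load
  // followed by a horizontal add of the lanes.
  int sum4(const int *a) {
    return (a[0] + a[1]) + (a[2] + a[3]);
  }

  // Internal cl::opt options such as -slp-vectorize-hor are forwarded to
  // LLVM through clang's -mllvm (spelling assumed; `opt -help-hidden`
  // lists the available options), e.g.:
  //   clang++ -O3 -c -mllvm -slp-vectorize-hor reduce.cpp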
Nadav Rotem via llvm-dev
2015-Nov-09 22:03 UTC
[llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default
> On Nov 9, 2015, at 9:55 AM, Charlie Turner via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> I have not. I could feasibly do this, but I'm not set up to perform
> good experiments on X86-64 hardware. [...]
>
> On 9 November 2015 at 17:50, Das, Dibyendu <Dibyendu.Das at amd.com> wrote:
>> Have you run cpu2006 for x86-64 for perf progression/regression?

I think it would be great if you could help Charlie with this.

>> [...]
>> I have run the regression tests and looked through the bug tracker /
>> VC logs, and I can't see any reason for not enabling it.

+1

If there are no compile-time and runtime regressions, and if we are seeing
wins in some benchmarks, then we should enable this by default. At some
point we should demote this flag from a command-line flag into a static
variable in the code.

Out of curiosity, how much of the compile time are we spending in the SLP
vectorizer nowadays?
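To make the "demote this flag" suggestion concrete: it would mean replacing
the internal cl::opt that currently gates the behaviour with a plain
constant once the default has settled. The sketch below is an illustration
under that assumption; the real definition lives in
lib/Transforms/Vectorize/SLPVectorizer.cpp, and its exact name and default
may differ.

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // Today (sketch): the behaviour is gated by an internal command-line option.
  static cl::opt<bool> ShouldVectorizeHor(
      "slp-vectorize-hor", cl::init(false), cl::Hidden,
      cl::desc("Attempt to vectorize horizontal reductions"));

  // After "demotion": the option goes away and the chosen default is baked in.
  // static const bool ShouldVectorizeHor = true;

As for the compile-time question, per-pass timings (including the SLP
vectorizer's share) can be collected with opt -time-passes or clang's
-ftime-report.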
Renato Golin via llvm-dev
2015-Nov-10 09:58 UTC
[llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default
On 9 November 2015 at 22:03, Nadav Rotem via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> If there are no compile-time and runtime regressions, and if we are seeing wins in some benchmarks, then we should enable this by default. At some point we should demote this flag from a command-line flag into a static variable in the code.

+1

cheers,
--renato
Das, Dibyendu via llvm-dev
2015-Nov-10 10:39 UTC
[llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default
I will try to get some SPEC CPU 2006 rate runs done under -O3 -flto, with
and without -slp-vectorize-hor, and let you know.

-Thx

-----Original Message-----
From: nrotem at apple.com [mailto:nrotem at apple.com]
Sent: Tuesday, November 10, 2015 3:33 AM
To: Charlie Turner
Cc: Das, Dibyendu; llvm-dev at lists.llvm.org
Subject: Re: [llvm-dev] [RFC][SLP] Let's turn -slp-vectorize-hor on by default

[...]

+1

If there are no compile-time and runtime regressions, and if we are seeing
wins in some benchmarks, then we should enable this by default. At some
point we should demote this flag from a command-line flag into a static
variable in the code.

Out of curiosity, how much of the compile time are we spending in the SLP
vectorizer nowadays?