Amara Emerson via llvm-dev
2019-Mar-13 18:45 UTC
[llvm-dev] Scalable Vector Types in IR - Next Steps?
Disclaimer: I’m only speaking for myself, not Apple.

This is really disappointing. Resorting to multi-versioned fixed-length vectorization isn’t a solution that’s competitive with native VLA support, so it doesn’t look like a credible alternative suggestion (at least not without elaborating on it on the mailing list). Without a practical alternative, it’s essentially saying “no” to a whole class of vector architectures, of which SVE is only one.

Amara

> On Mar 13, 2019, at 9:04 AM, Graham Hunter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Hi Renato,
>
>> It goes without saying that those discussions should have been had on
>> the mailing list, not behind closed doors.
>
> I have encouraged people to respond on the list or the RFC many times,
> but I've not had much luck in getting people to post even if they
> approve of the idea.
>
>> Agreeing to implementations in private is asking to get bad reviews
>> in public, as the SVE process has shown *over and over again*.
>
> There isn't an agreement on the implementation yet; I have posted two
> possibilities and am trying to get consensus on an approach from the
> community.
>
>>> The basic argument was that they didn't believe the value gained from
>>> enabling VLA autovectorization was worth the added complexity in
>>> maintaining the codebase. They were open to changing their minds if we
>>> could demonstrate sufficient demand for the feature.
>>
>> In that case, the current patches to change the IR should be
>> abandoned, as well as reverting the previous change to the types, so
>> that we don't carry any unnecessary code forward.
>
> There's no consensus on supporting the opaque types yet either. Even
> if we do end up going down that route, it could be modified -- as I
> mentioned in my notes, I could introduce a single top-level type to
> the IR if I stored additional data in it (making it effectively the
> same as the current VectorType, just opaque to existing optimization
> passes), and would then be able to lower directly to the existing
> scalable MVTs we have.
>
>> The review you sent seems to be a mechanical change to include the
>> intrinsics, but the target lowering change seems to be too small to
>> actually be able to lower anything.
>
> The new patches are just meant to demonstrate the basics of the opaque
> type, to see if there's greater consensus in exploring this approach
> instead of the VLA approach.
>
>> Without context, it's hard to know what's going on.
>
> The current state is just what you stated in your initial email in this
> chain; we have a solution that seems to work (in principle) for SVE, RVV,
> and SX-Aurora, but not enough people who care about VLA vectorization
> beyond those groups.
>
> Given the time constraints, Arm is being pushed to consider a plan B to
> get something working in time for early 2020.
>
> -Graham
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
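For context, the two representations being debated look roughly like this in LLVM IR. The scalable-vector syntax follows the RFC under discussion; the opaque-type intrinsic name is purely hypothetical, since no such intrinsics had been finalized at the time:

```llvm
; Fixed-width vector: the element count (4) is a compile-time constant.
%sum.fixed = add <4 x i32> %a, %b

; Scalable vector, per the VLA proposal: the register holds vscale * 4
; elements, where vscale is a hardware-dependent runtime constant.
%sum.vla = add <vscale x 4 x i32> %p, %q

; Opaque-type alternative (hypothetical intrinsic, for illustration only):
; the same operation becomes a call on a type that existing IR passes
; cannot see through.
; %sum.opaque = call %scalable.v4i32 @llvm.scalable.add(%scalable.v4i32 %p,
;                                                       %scalable.v4i32 %q)
```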
Finkel, Hal J. via llvm-dev
2019-Mar-13 18:54 UTC
[llvm-dev] Scalable Vector Types in IR - Next Steps?
On 3/13/19 1:45 PM, Amara Emerson via llvm-dev wrote:
> Disclaimer: I’m only speaking for myself, not Apple.
>
> This is really disappointing. Resorting to multi-versioned fixed-length
> vectorization isn’t a solution that’s competitive with native VLA
> support, so it doesn’t look like a credible alternative suggestion (at
> least not without elaborating on it on the mailing list). Without a
> practical alternative, it’s essentially saying “no” to a whole class of
> vector architectures, of which SVE is only one.

To the extent that this alternative direction represents an exploration so that we can all evaluate in a more-informed manner, I think that is valuable. However, let me agree with Amara: I prefer the original approach. Among many other advantages, users will expect the compiler to perform arithmetic optimizations on VLA operations (e.g., InstCombine folds), and if we can't reuse the existing logic for this purpose, we'll end up with an inferior result.

Thanks again,

Hal

> Amara
>
>> On Mar 13, 2019, at 9:04 AM, Graham Hunter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hi Renato,
>>
>>> It goes without saying that those discussions should have been had on
>>> the mailing list, not behind closed doors.
>>
>> I have encouraged people to respond on the list or the RFC many times,
>> but I've not had much luck in getting people to post even if they
>> approve of the idea.
>>
>>> Agreeing to implementations in private is asking to get bad reviews
>>> in public, as the SVE process has shown *over and over again*.
>>
>> There isn't an agreement on the implementation yet; I have posted two
>> possibilities and am trying to get consensus on an approach from the
>> community.
>>
>>>> The basic argument was that they didn't believe the value gained from
>>>> enabling VLA autovectorization was worth the added complexity in
>>>> maintaining the codebase. They were open to changing their minds if we
>>>> could demonstrate sufficient demand for the feature.
>>>
>>> In that case, the current patches to change the IR should be
>>> abandoned, as well as reverting the previous change to the types, so
>>> that we don't carry any unnecessary code forward.
>>
>> There's no consensus on supporting the opaque types yet either. Even
>> if we do end up going down that route, it could be modified -- as I
>> mentioned in my notes, I could introduce a single top-level type to
>> the IR if I stored additional data in it (making it effectively the
>> same as the current VectorType, just opaque to existing optimization
>> passes), and would then be able to lower directly to the existing
>> scalable MVTs we have.
>>
>>> The review you sent seems to be a mechanical change to include the
>>> intrinsics, but the target lowering change seems to be too small to
>>> actually be able to lower anything.
>>
>> The new patches are just meant to demonstrate the basics of the opaque
>> type, to see if there's greater consensus in exploring this approach
>> instead of the VLA approach.
>>
>>> Without context, it's hard to know what's going on.
>>
>> The current state is just what you stated in your initial email in this
>> chain; we have a solution that seems to work (in principle) for SVE, RVV,
>> and SX-Aurora, but not enough people who care about VLA vectorization
>> beyond those groups.
>>
>> Given the time constraints, Arm is being pushed to consider a plan B to
>> get something working in time for early 2020.
>>
>> -Graham
>>
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
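Hal's point about reusing existing arithmetic folds can be made concrete with a trivial identity. This is a sketch only; the native-type form below reuses the folds LLVM already has, while the intrinsic shown for the opaque alternative is a hypothetical name:

```llvm
; With native scalable vector types, existing simplifications such as
; "x + 0 ==> x" apply to VLA operations with no new code:
%r1 = add <vscale x 4 x i32> %x, zeroinitializer
; instcombine/instsimplify can replace all uses of %r1 with %x.

; With an opaque type, the same addition would be an intrinsic call
; (hypothetical name), invisible to the existing folds; each fold would
; need to be reimplemented for the intrinsic form:
; %r2 = call %scalable.v4i32 @llvm.scalable.add(%scalable.v4i32 %x,
;                                               %scalable.v4i32 %zero)
```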
Renato Golin via llvm-dev
2019-Mar-13 23:02 UTC
[llvm-dev] Scalable Vector Types in IR - Next Steps?
Agreed with both!

Furthermore, any temporary solution will have to be very similar to what we expect to see natively, or the transition to native may never happen.

On Wed, 13 Mar 2019, 18:55 Finkel, Hal J., <hfinkel at anl.gov> wrote:
> On 3/13/19 1:45 PM, Amara Emerson via llvm-dev wrote:
>> Disclaimer: I’m only speaking for myself, not Apple.
>>
>> This is really disappointing. Resorting to multi-versioned fixed-length
>> vectorization isn’t a solution that’s competitive with native VLA
>> support, so it doesn’t look like a credible alternative suggestion (at
>> least not without elaborating on it on the mailing list). Without a
>> practical alternative, it’s essentially saying “no” to a whole class of
>> vector architectures, of which SVE is only one.
>
> To the extent that this alternative direction represents an exploration
> so that we can all evaluate in a more-informed manner, I think that is
> valuable. However, let me agree with Amara: I prefer the original
> approach. Among many other advantages, users will expect the compiler
> to perform arithmetic optimizations on VLA operations (e.g., InstCombine
> folds), and if we can't reuse the existing logic for this purpose, we'll
> end up with an inferior result.
>
> Thanks again,
>
> Hal
>
>> Amara
>>
>>> On Mar 13, 2019, at 9:04 AM, Graham Hunter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>>
>>> Hi Renato,
>>>
>>>> It goes without saying that those discussions should have been had on
>>>> the mailing list, not behind closed doors.
>>>
>>> I have encouraged people to respond on the list or the RFC many times,
>>> but I've not had much luck in getting people to post even if they
>>> approve of the idea.
>>>
>>>> Agreeing to implementations in private is asking to get bad reviews
>>>> in public, as the SVE process has shown *over and over again*.
>>>
>>> There isn't an agreement on the implementation yet; I have posted two
>>> possibilities and am trying to get consensus on an approach from the
>>> community.
>>>
>>>>> The basic argument was that they didn't believe the value gained from
>>>>> enabling VLA autovectorization was worth the added complexity in
>>>>> maintaining the codebase. They were open to changing their minds if
>>>>> we could demonstrate sufficient demand for the feature.
>>>>
>>>> In that case, the current patches to change the IR should be
>>>> abandoned, as well as reverting the previous change to the types, so
>>>> that we don't carry any unnecessary code forward.
>>>
>>> There's no consensus on supporting the opaque types yet either. Even
>>> if we do end up going down that route, it could be modified -- as I
>>> mentioned in my notes, I could introduce a single top-level type to
>>> the IR if I stored additional data in it (making it effectively the
>>> same as the current VectorType, just opaque to existing optimization
>>> passes), and would then be able to lower directly to the existing
>>> scalable MVTs we have.
>>>
>>>> The review you sent seems to be a mechanical change to include the
>>>> intrinsics, but the target lowering change seems to be too small to
>>>> actually be able to lower anything.
>>>
>>> The new patches are just meant to demonstrate the basics of the opaque
>>> type, to see if there's greater consensus in exploring this approach
>>> instead of the VLA approach.
>>>
>>>> Without context, it's hard to know what's going on.
>>>
>>> The current state is just what you stated in your initial email in this
>>> chain; we have a solution that seems to work (in principle) for SVE,
>>> RVV, and SX-Aurora, but not enough people who care about VLA
>>> vectorization beyond those groups.
>>>
>>> Given the time constraints, Arm is being pushed to consider a plan B to
>>> get something working in time for early 2020.
>>>
>>> -Graham
>>>
>>> _______________________________________________
>>> LLVM Developers mailing list
>>> llvm-dev at lists.llvm.org
>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
> --
> Hal Finkel
> Lead, Compiler Technology and Programming Languages
> Leadership Computing Facility
> Argonne National Laboratory