Saito, Hideki via llvm-dev
2017-Oct-14 02:26 UTC
[llvm-dev] [RFC] Polly Status and Integration
>Do you recall the arguments why it was considered a bad idea?

It felt like a long time ago, but it turns out it was actually just a bit over a year ago. Here's the thread:
http://lists.llvm.org/pipermail/llvm-dev/2016-August/104079.html
Only a few people responded explicitly, but I took that to mean the silent majority was satisfied with the responses. Prior to that thread, I had also pinged HPC-oriented LLVM developers, and "don't modify the IR until deciding to vectorize" resonated well there, too.

>However, I also think that another meta-layer representing instructions increases complexity and duplicates algorithms when we could just reuse the data structures and algorithms that already exist and have matured.

I agree. So I dream about a lighter-weight version of the Value class hierarchy on which many of the standard Value-hierarchy algorithms can run, without the lighter-weight objects being hooked into the actual IR state. We intentionally tried to make the VPValue/VPUser/VPInstruction interfaces a "subset of" the Value/User/Instruction interfaces so that code sharing (not copying/pasting) is possible. This has been practical enough so far, but I'm still wondering whether, in the long run, we can do something better. Constructive ideas are welcome.

Thanks,
Hideki
---------
Just in case someone needs this info: for a recap of why the vectorizer wants to create new instructions and new basic blocks during its analysis phase, here's the video from the 2016 LLVM Developers' Meeting (slide 19):
https://www.youtube.com/watch?v=XXAvdUwO7kQ

-----Original Message-----
From: meinersbur at googlemail.com [mailto:meinersbur at googlemail.com] On Behalf Of Michael Kruse
Sent: Friday, October 13, 2017 5:13 PM
To: Saito, Hideki <hideki.saito at intel.com>
Cc: llvm-dev at lists.llvm.org; Hal Finkel <hfinkel at anl.gov>
Subject: Re: [llvm-dev] [RFC] Polly Status and Integration

2017-10-14 1:29 GMT+02:00 Saito, Hideki via llvm-dev <llvm-dev at lists.llvm.org>:
> I'm also sorry that I'm not commenting on the main part of your RFC in
> this reply. I just want to focus on one thing here.
>
>   Proposed Loop Optimization Framework
>   ------------------------------------
>   Only the architecture is outlined here. The reasons for them can be
>   found in the "rationales" section below.
>
>   A) Preprocessing
>      Make a copy of the entire function to allow transformations only
>      for the sake of analysis.
>
> Before we started our journey into the VPlan-based vectorization
> approach, we explicitly asked about modifying the IR for the sake of
> vectorization analysis ----- the general consensus on llvm-dev was
> "that's a bad idea". We thought so, too. That's the reason VPlan has its
> own local data structure that it can play with.
>
> Making a copy of the entire function is far worse than copying a loop
> (nest). Unless the community mindset has dramatically changed since
> ~2yrs ago, it's unlikely that you'll get overwhelming support for "copy
> the entire function" before the decision to transform is taken. So, if
> this part is optional, you may want to state that.
>
> Having said that, in https://reviews.llvm.org/D38676, we are introducing
> concepts like VPValue, VPUser, and VPInstruction, in order to manipulate
> and interact with things that didn't come from the IR of the original
> loop (nest). As such, I can see why you think "a playground copy of the
> IR" is beneficial. Even before reading your RFC, I was actually thinking
> that someone else might also benefit from a
> VPValue/VPUser/VPInstruction-like concept for the analysis part of some
> heavyweight transformations.
> Ideally, what we wanted to do was to use LLVM's Value class hierarchy,
> if we could generate/manipulate "abstract Instructions" without making
> them part of the IR state, instead of creating our own lightweight
> VPValue/VPUser/VPInstruction classes.
> If anyone has better ideas in this area, we'd like to hear them. Please
> reply either to this thread or through the code review mentioned above.

Thank you for bringing in results from previous discussions. The only resource I was aware of was the EuroLLVM talk, where I got the impression this was done for performance reasons. Do you recall the arguments why it was considered a bad idea?

I understand that modifying the IR speculatively might not be the best thing to do. However, I also think that another meta-layer representing instructions increases complexity and duplicates algorithms when we could just reuse the data structures and algorithms that already exist and have matured.

Michael
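As a rough illustration of the "subset of" interface idea described in the message above (a sketch with simplified, hypothetical names, not the actual code from D38676): because the VP* classes deliberately mirror part of the Value/User/Instruction interface, an algorithm written as a template against that common part can be instantiated over either hierarchy instead of being copied and pasted.

    // Sketch only: assumes both llvm::Instruction and VPInstruction expose
    // getOpcode() and getOperand(unsigned), as the VPlan patch intends.
    #include "llvm/IR/Instruction.h"

    template <typename InstT>
    bool isSelfAdd(const InstT *I) {
      // Compiles against any class offering this shared subset of the
      // Instruction interface; no second copy of the algorithm is needed
      // for the VPlan side.
      return I->getOpcode() == llvm::Instruction::Add &&
             I->getOperand(0) == I->getOperand(1);
    }

The flip side is that only algorithms written against (and limited to) that common subset can be shared, which is part of the duplication concern raised in the quoted message.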
Daniel Berlin via llvm-dev
2017-Oct-14 03:03 UTC
[llvm-dev] [RFC] Polly Status and Integration
FWIW: We hit a subset of this issue with MemorySSA (which subclasses Value for the MemoryAccesses, etc.), and it was discussed as well during PredicateInfo.

NewGVN has a variant of the same issue as well, where it actually creates unattached (i.e., not in a basic block) new instructions just so it can analyze them.

IMHO, some nice way to make virtual forms over the instructions that doesn't require reimplementing the tons of existing functionality that works with Value would be much appreciated.

But maybe the right answer at some point is to just sit down and build out such an infrastructure. It certainly sounds like there are enough clients.
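A minimal sketch of the pattern Daniel describes for NewGVN (illustrative only, not NewGVN's actual code; the helper name is hypothetical, and it assumes an LLVM recent enough to have Value::deleteValue()): an instruction can be created without ever being inserted into a basic block, handed to analyses, and then destroyed, so the function's IR is never modified.

    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/Value.h"

    // Hypothetical helper: build a throwaway add over two existing values
    // (assumed to be integers of the same type), ask questions about it,
    // then throw it away.
    bool addHasEqualOperands(llvm::Value *LHS, llvm::Value *RHS) {
      // No insertion point is given, so Tmp has no parent basic block and
      // nothing in the surrounding function changes.
      llvm::Instruction *Tmp = llvm::BinaryOperator::Create(
          llvm::Instruction::Add, LHS, RHS, "tmp");

      // ... NewGVN-style code would value-number or simplify Tmp here ...
      bool Result = Tmp->getOperand(0) == Tmp->getOperand(1);

      Tmp->deleteValue();  // unattached, so it must be destroyed manually
      return Result;
    }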
Michael Kruse via llvm-dev
2017-Oct-14 21:54 UTC
[llvm-dev] [RFC] Polly Status and Integration
2017-10-14 5:03 GMT+02:00 Daniel Berlin <dberlin at dberlin.org>:
> [...]
> But maybe the right answer at some point is to just sit down and build
> out such an infrastructure. It certainly sounds like there are enough
> clients.

What would be different in such an infrastructure? IMHO the llvm::Value hierarchy is already relatively thin; e.g., removing/inserting an instruction just requires updating a linked list.

Michael
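For reference, the cheap list update Michael is referring to, using the standard llvm::Instruction API (a sketch, not a complete pass):

    #include "llvm/IR/Instruction.h"

    // Moving an instruction is essentially an intrusive-list splice plus
    // use-list bookkeeping; no other instructions need to be visited.
    void hoistBefore(llvm::Instruction *I, llvm::Instruction *InsertPt) {
      I->removeFromParent();      // unlink from its current basic block
      I->insertBefore(InsertPt);  // relink in front of InsertPt
      // Equivalently, in one call: I->moveBefore(InsertPt);
    }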