Hal Finkel via llvm-dev
2016-Apr-18 22:57 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
----- Original Message -----
> From: "Xinliang David Li" <davidxl at google.com>
> To: "Chandler Carruth" <chandlerc at gmail.com>
> Cc: "Hal Finkel" <hfinkel at anl.gov>, "via llvm-dev" <llvm-dev at lists.llvm.org>, "Mehdi Amini" <mehdi.amini at apple.com>
> Sent: Monday, April 18, 2016 5:45:21 PM
> Subject: Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>
> On Mon, Apr 18, 2016 at 3:00 PM, Chandler Carruth <chandlerc at gmail.com> wrote:
>
>> On Mon, Apr 18, 2016 at 2:48 PM Hal Finkel <hfinkel at anl.gov> wrote:
>>
>>> From: "Xinliang David Li" <davidxl at google.com>
>>>
>>>> On Mon, Apr 18, 2016 at 2:33 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>>>>
>>>>> In the current case at stake: the issue is that we can't make the
>>>>> Analysis library using anything from the ProfileData library.
>>>>> Conceptually there is a problem IMO.
>>>>
>>>> Yes -- this is a very good point.
>>>
>>> Independent of anything else, +1.
>>
>> The design of ProfileData and reading profile information in the entire
>> middle end had a really fundamental invariant that folks seem to have
>> lost track of:
>
> Not sure about what you mean by 'lost track of'.
>
>> a) There is exactly *one* way to get at profile information from
>> general analyses and transforms: a dedicated analysis pass that manages
>> access to the profile info.
>
> This is not the case as of today. BPI is a dedicated analysis pass to
> manage branch probability profile information, but this pass is only
> used in limited situations (e.g., for BFI, profile update in
> jump-threading, etc.) -- using it requires more memory as well as
> incremental update interfaces. Many transformation passes simply skip
> it and directly access the metadata in IR.

Really? Which ones? I see a number of passes that know about profiling
metadata so they can preserve it, or transfer it across restructuring, but
nothing that really interprets it on its own in a non-trivial way. I'm not
sure this is desirable regardless.

>> b) There is exactly *one* way for this analysis to compute this
>> information from an *external* profile source: profile metadata
>> attached to the IR.
>
> This is the case already -- all profile data are annotated to the IR via
> an analysis pass (or, in the FE-based instrumentation case, by the FE
> during llvm code gen).
>
>> c) There could be many external profile sources, but all of them should
>> be read and then translated into metadata annotations on the IR so that
>> serialization / deserialization preserve them in a common format and we
>> can reason about how they work.
>
> This should be the case already -- for instance, sample-based and
> instrumentation-based IR share the same annotation for branch
> probability, entry count, and profile summary.
>
>> This layering is why it is only a transform that accesses ProfileData --
>> it is responsible for annotating the IR and nothing else. Then the
>> analysis uses these annotations and never reads the data directly.
>>
>> I think this is a really important separation of concerns as it ensures
>> that we don't get an explosion of different analyses supporting various
>> different subsets of profile sources.
>>
>> Now, the original design only accounted for profile information
>> *within* a function body; clearly it needs to be extended to support
>> interprocedural information.
>
> Not sure what you mean. Profile data in general does not extend to IPA
> (we will reopen discussion on that soon), but profile summary is
> 'invariant'/read-only data, which should be available to IPA already.

IPA-level profiling data might be invariant, but inside the function it
certainly needs to change because the code inside functions is changed
(branches are eliminated, transformed into selects, etc.)

 -Hal

> David
>
>> But I would still expect that to follow a similar layering where we
>> first read the data into IR annotations, then have an analysis pass
>> (this time a module analysis pass in all likelihood) that brokers
>> access to these annotations through an API that can do intelligent
>> things like synthesizing it from the "cold" attribute or whatever when
>> missing.
>>
>> -Chandler

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
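The layering described above -- a single reader that translates an external profile source into IR annotations, and a single analysis broker through which all other passes read them, synthesizing a default (e.g., from a "cold" attribute) when the annotation is missing -- can be illustrated with a small toy sketch. This is not LLVM code; `Instruction`, `annotateFromProfile`, and `getBranchWeight` are hypothetical stand-ins for the real metadata and BPI machinery:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for an IR instruction with attachable metadata
// (plays the role of a branch carrying !prof annotations).
struct Instruction {
  std::map<std::string, int> Metadata;
};

// Step 1: a transform-like reader annotates the IR from an external
// profile source. This is the only place that touches raw profile data,
// so every source (sample, instrumentation, ...) lands in one format.
void annotateFromProfile(Instruction &Br, int TakenWeight) {
  Br.Metadata["branch_weights"] = TakenWeight;
}

// Step 2: an analysis-like broker is the only way other passes read the
// information, synthesizing a neutral default when the annotation is
// missing rather than forcing every client to handle that case.
int getBranchWeight(const Instruction &Br) {
  auto It = Br.Metadata.find("branch_weights");
  return It != Br.Metadata.end() ? It->second : 50; // default: 50/50
}
```

With this split, adding a new profile source only touches the reader, and clients never need to know which source produced the annotation.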
Xinliang David Li via llvm-dev
2016-Apr-18 23:30 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 3:57 PM, Hal Finkel <hfinkel at anl.gov> wrote:

>> This is not the case as of today. BPI is a dedicated analysis pass to
>> manage branch probability profile information, but this pass is only
>> used in limited situations (e.g., for BFI, profile update in
>> jump-threading, etc.) -- using it requires more memory as well as
>> incremental update interfaces. Many transformation passes simply skip
>> it and directly access the metadata in IR.
>
> Really? Which ones? I see a number of passes that know about profiling
> metadata so they can preserve it, or transfer it across restructuring,
> but nothing that really interprets it on its own in a non-trivial way.

In a lot of cases, the client code simply set the metadata, but the user
clients include: SimplifyCFG.cpp, Locals.cpp, CodeGenPrepare.cpp, etc.

David
Hal Finkel via llvm-dev
2016-Apr-18 23:40 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
----- Original Message -----
> From: "Xinliang David Li" <davidxl at google.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: "via llvm-dev" <llvm-dev at lists.llvm.org>, "Mehdi Amini" <mehdi.amini at apple.com>, "Chandler Carruth" <chandlerc at gmail.com>
> Sent: Monday, April 18, 2016 6:30:44 PM
> Subject: Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>
>> Really? Which ones? I see a number of passes that know about profiling
>> metadata so they can preserve it, or transfer it across restructuring,
>> but nothing that really interprets it on its own in a non-trivial way.
>
> In a lot of cases, the client code simply set the metadata, but the user
> clients include: SimplifyCFG.cpp, Locals.cpp, CodeGenPrepare.cpp, etc.

I don't think any of these files contain code that makes decisions based
on profiling data. The code is just preserving existing local information
as the code is restructured. Is there something specific you have in mind?

Given that BPI is non-trivial to compute, as you point out, perhaps this
code should be preserving BPI, but that's another matter.

 -Hal

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
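The intra-function profile updates discussed above -- branch weights carried along as branches are eliminated or turned into selects -- can be sketched with a small toy example. This is not LLVM code; `BranchWeights`, `foldBranchToSelect`, and `mergeBranches` are hypothetical stand-ins for the metadata-preservation logic in passes like SimplifyCFG:

```cpp
#include <cstdint>

// Stand-in for the branch_weights pair attached to a conditional branch.
struct BranchWeights {
  uint64_t True = 0, False = 0;
};

// Folding a conditional branch into a select keeps the same weights:
// the select picks its operands with the probabilities the branch had,
// so the annotation transfers unchanged.
BranchWeights foldBranchToSelect(const BranchWeights &Br) {
  return Br;
}

// Merging two branches that end up with the same targets sums their
// weights, so the combined edge keeps a consistent execution count.
BranchWeights mergeBranches(const BranchWeights &A, const BranchWeights &B) {
  return {A.True + B.True, A.False + B.False};
}
```

Restructuring transforms must do this bookkeeping precisely so that the downstream analysis pass still sees coherent annotations.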
Chandler Carruth via llvm-dev
2016-Apr-18 23:50 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 4:30 PM Xinliang David Li <davidxl at google.com> wrote:

>> This is not the case as of today. BPI is a dedicated analysis pass to
>> manage branch probability profile information, but this pass is only
>> used in limited situations (e.g., for BFI, profile update in
>> jump-threading, etc.) -- using it requires more memory as well as
>> incremental update interfaces. Many transformation passes simply skip
>> it and directly access the metadata in IR.
>>
>> Really? Which ones? I see a number of passes that know about profiling
>> metadata so they can preserve it, or transfer it across restructuring,
>> but nothing that really interprets it on its own in a non-trivial way.
>
> In a lot of cases, the client code simply set the metadata, but the user
> clients include:

I want to reiterate that if this is the case, I believe these are bugs
that we need to fix. As a consequence, I quickly did an audit of the
places you mentioned...

> SimplifyCFG.cpp,

Where? I skimmed the uses, and I only found code that uses it to update
metadata, not to reason about it. If you know of something that does, it
would be really nice to point it out.

> Locals.cpp,

I assume you mean lib/Transforms/Utils/Local.cpp? Same as above, all the
uses look like update only.

> CodeGenPrepare.cpp, etc.

Same story here -- extracted and re-applied but not analyzed.

Anyways, if you do spot clients actually using metadata directly rather
than using the analysis pass to reason about it, we should fix them to
use the analysis passes instead. The scaling issues with the analysis
pass were fixed really nicely, BTW. I forget whose patch that was, but I
think it was Cong?

-Chandler