Chandler Carruth via llvm-dev
2016-Apr-18 22:00 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 2:48 PM Hal Finkel <hfinkel at anl.gov> wrote:
>
> *From: *"Xinliang David Li" <davidxl at google.com>
>
> On Mon, Apr 18, 2016 at 2:33 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>>
>> In the current case at stake: the issue is that we can't make the
>> Analysis library use anything from the ProfileData library.
>> Conceptually there is a problem IMO.
>
> Yes -- this is a very good point.
>
> Independent of anything else, +1.

The design of ProfileData and reading profile information in the entire
middle end had a really fundamental invariant that folks seem to have lost
track of:

a) There is exactly *one* way to get at profile information from general
analyses and transforms: a dedicated analysis pass that manages access to
the profile info.

b) There is exactly *one* way for this analysis to compute this information
from an *external* profile source: profile metadata attached to the IR.

c) There could be many external profile sources, but all of them should be
read and then translated into metadata annotations on the IR, so that
serialization / deserialization preserve them in a common format and we can
reason about how they work.

This layering is why it is only a transform that accesses ProfileData -- it
is responsible for annotating the IR and nothing else. The analysis then
uses these annotations and never reads the data directly.

I think this is a really important separation of concerns, as it ensures
that we don't get an explosion of different analyses supporting various
different subsets of profile sources.

Now, the original design only accounted for profile information *within* a
function body; clearly it needs to be extended to support interprocedural
information. But I would still expect that to follow a similar layering,
where we first read the data into IR annotations, then have an analysis
pass (this time a module analysis pass in all likelihood) that brokers
access to these annotations through an API that can do intelligent things
like synthesizing the information from the "cold" attribute or whatever
when it is missing.

-Chandler
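[Editor's note: the layering in (a)-(c) above can be sketched as a toy C++ model. Every type and name below is an illustrative stand-in, not the real LLVM API -- it only shows the shape of the design: one transform translates the external source into IR annotations, and one analysis answers all queries from those annotations alone.]

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// Toy stand-in for a function carrying !prof-style metadata.
struct Function {
  std::string name;
  std::map<std::string, uint64_t> profMetadata; // models IR annotations
};

// (c) A transform that reads an external profile source and translates it
// into IR annotations. Only this layer ever touches the raw profile data.
void annotateFromExternalProfile(Function &f,
                                 const std::map<std::string, uint64_t> &raw) {
  auto it = raw.find(f.name);
  if (it != raw.end())
    f.profMetadata["entry_count"] = it->second;
}

// (a)/(b) The single analysis that brokers profile access for all other
// passes. It consults only the IR annotations, never the external source,
// and is the one place to synthesize a fallback (e.g. from a "cold"
// attribute) when the annotation is missing.
struct ProfileInfoAnalysis {
  std::optional<uint64_t> getEntryCount(const Function &f) const {
    auto it = f.profMetadata.find("entry_count");
    if (it != f.profMetadata.end())
      return it->second;
    return std::nullopt; // fallback synthesis would go here
  }
};
```

Because every consumer goes through `ProfileInfoAnalysis`, adding a new external profile source only requires a new annotating transform, not changes to any analysis or transform that consumes the data.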
Easwaran Raman via llvm-dev
2016-Apr-18 22:20 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 3:00 PM, Chandler Carruth via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> a) There is exactly *one* way to get at profile information from general
> analyses and transforms: a dedicated analysis pass that manages access to
> the profile info.
>
> [...]
>
> But I would still expect that to follow a similar layering where we first
> read the data into IR annotations, then have an analysis pass (this time
> a module analysis pass in all likelihood) that brokers access to these
> annotations through an API that can do intelligent things like
> synthesizing it from the "cold" attribute or whatever when missing.

Invariants b) and c) above are still true, but a) is not, since InlineCost
directly calls ProfileSummary instead of going through an analysis pass.
I'll add a module-level pass as you suggest, but that still requires
breaking the ProfileData->Analysis dependence chain.

- Easwaran
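[Editor's note: a module-level broker of the kind proposed here might look roughly like the toy sketch below. The names are hypothetical; LLVM's actual analysis in this area (ProfileSummaryInfo) differs in detail. The point is only that consumers ask the analysis "is this count hot/cold?" rather than reading summary data themselves.]

```cpp
#include <cassert>
#include <cstdint>

// Toy module-wide summary, as it might be derived once from
// module-level IR metadata (illustrative, not real LLVM API).
struct ProfileSummary {
  uint64_t hotCountThreshold;  // counts at or above this are "hot"
  uint64_t coldCountThreshold; // counts at or below this are "cold"
};

// A module-level analysis brokering summary queries for passes like the
// inliner, so none of them touch the profile-reading code directly.
struct ProfileSummaryAnalysis {
  ProfileSummary summary;

  bool isHotCount(uint64_t count) const {
    return count >= summary.hotCountThreshold;
  }
  bool isColdCount(uint64_t count) const {
    return count <= summary.coldCountThreshold;
  }
};
```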
Chandler Carruth via llvm-dev
2016-Apr-18 22:25 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 3:20 PM Easwaran Raman <eraman at google.com> wrote:
>
> Invariants b) and c) above are still true, but a) is not since InlineCost
> directly calls ProfileSummary instead of through an analysis pass.

Not quite -- ProfileSummary seems to only exist in the profile *reading*
code, so it isn't *just* an annotation on the IR.

I think the problem here is that we have failed to build a proper separate
abstraction around the interprocedural profile data that is extracted from
IR annotations. That abstraction should have nothing to do with reading
profile data and so shouldn't live in the ProfileData library; it should
(IMO) live in the Analysis library.

> I'll add a module level pass as you suggest, but that still needs
> breaking the ProfileData->Analysis dependence chain.

Well, this will also re-highlight the fundamental pass manager problem, as
you won't have easy access to this analysis pass.

-Chandler
Xinliang David Li via llvm-dev
2016-Apr-18 22:45 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 3:00 PM, Chandler Carruth <chandlerc at gmail.com> wrote:
>
> The design of ProfileData and reading profile information in the entire
> middle end had a really fundamental invariant that folks seem to have
> lost track of:

Not sure about what you mean by 'lost track of'.

> a) There is exactly *one* way to get at profile information from general
> analyses and transforms: a dedicated analysis pass that manages access to
> the profile info.

This is not the case as of today. BPI is a dedicated analysis pass to
manage branch probability profile information, but this pass is only used
in limited situations (e.g., for BFI, profile update in jump-threading,
etc.) -- using it requires more memory as well as incremental update
interfaces. Many transformation passes simply skip it and directly access
the metadata in the IR.

> b) There is exactly *one* way for this analysis to compute this
> information from an *external* profile source: profile metadata attached
> to the IR.

This is the case already -- all profile data is annotated onto the IR via
an analysis pass (or, in the FE-based instrumentation case, by the FE
during LLVM code generation).

> c) There could be many external profile sources, but all of them should
> be read and then translated into metadata annotations on the IR so that
> serialization / deserialization preserve them in a common format and we
> can reason about how they work.

This should be the case already -- for instance, sample-based and
instrumentation-based IR share the same annotations for branch
probability, entry count, and profile summary.

> Now, the original design only accounted for profile information *within*
> a function body, clearly it needs to be extended to support
> interprocedural information.

Not sure what you mean. Profile data in general does not extend to IPA (we
will reopen discussion on that soon), but the profile summary is
'invariant'/read-only data, which should be available to IPA already.

David
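[Editor's note: the two access paths David contrasts -- reading raw branch weights straight off the `!prof` metadata versus asking an analysis like BPI for probabilities -- can be illustrated with a toy computation. This is assumed, simplified code, not LLVM's actual BPI implementation.]

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Raw branch weights are what a transform sees when it reads !prof
// metadata directly. An analysis like BPI instead serves normalized
// probabilities, computed centrally and kept consistent under updates.
std::vector<double>
probabilitiesFromWeights(const std::vector<uint64_t> &weights) {
  uint64_t total = 0;
  for (uint64_t w : weights)
    total += w;
  std::vector<double> probs;
  for (uint64_t w : weights)
    probs.push_back(total ? double(w) / double(total)
                          : 1.0 / double(weights.size())); // uniform fallback
  return probs;
}
```

The memory cost David mentions comes from the analysis materializing and caching such results for every branch, which direct metadata access avoids at the price of each pass re-deriving them.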
Hal Finkel via llvm-dev
2016-Apr-18 22:57 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
----- Original Message -----
> From: "Xinliang David Li" <davidxl at google.com>
> To: "Chandler Carruth" <chandlerc at gmail.com>
> Cc: "Hal Finkel" <hfinkel at anl.gov>, "via llvm-dev"
> <llvm-dev at lists.llvm.org>, "Mehdi Amini" <mehdi.amini at apple.com>
> Sent: Monday, April 18, 2016 5:45:21 PM
> Subject: Re: [llvm-dev] Move InlineCost.cpp out of Analysis?
>
> This is not the case as of today. BPI is a dedicated analysis pass to
> manage branch probability profile information, but this pass is only
> used in limited situations (e.g., for BFI, profile update in
> jump-threading, etc.) -- using it requires more memory as well as
> incremental update interfaces. Many transformation passes simply skip
> it and directly access the metadata in the IR.

Really? Which ones? I see a number of passes that know about profiling
metadata so they can preserve it, or transfer it across restructuring, but
nothing that really interprets it on its own in a non-trivial way. I'm not
sure this is desirable regardless.

> Not sure what you mean. Profile data in general does not extend to IPA
> (we will reopen discussion on that soon), but the profile summary is
> 'invariant'/read-only data, which should be available to IPA already.

IPA-level profiling data might be invariant, but inside the function it
certainly needs to change, because the code inside functions is changed
(branches are eliminated, transformed into selects, etc.).

-Hal

--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
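[Editor's note: Hal's point that intra-function annotations must be rewritten as transforms restructure the code can be made concrete with a toy update rule for one of his examples, folding a two-way branch into a select. The names and the struct below are purely illustrative, not LLVM's actual update logic.]

```cpp
#include <cassert>
#include <cstdint>

// After folding `br %cond, taken, notTaken` into a select, both paths
// reach a single successor block. The profile annotation must be
// rewritten, not dropped: the successor's count is the sum of the two
// edge weights, while the branch bias can survive on the select itself.
struct FoldedProfile {
  uint64_t successorCount;    // merged execution count of the join block
  uint64_t selectTrueWeight;  // bias retained on the select
  uint64_t selectFalseWeight;
};

FoldedProfile foldBranchToSelect(uint64_t takenWeight,
                                 uint64_t notTakenWeight) {
  return {takenWeight + notTakenWeight, takenWeight, notTakenWeight};
}
```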
Chandler Carruth via llvm-dev
2016-Apr-18 22:59 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On Mon, Apr 18, 2016 at 3:45 PM Xinliang David Li <davidxl at google.com> wrote:
>
> Not sure about what you mean by 'lost track of'.

I mean that we used to hold to these invariants, but I think recently the
code has started to not follow them as closely.

> This is not the case as of today.

Again, my whole comment was that these are no longer being correctly
followed.

> BPI is a dedicated analysis pass to manage branch probability profile
> information, but this pass is only used in limited situations (e.g., for
> BFI, profile update in jump-threading, etc.) -- using it requires more
> memory as well as incremental update interfaces. Many transformation
> passes simply skip it and directly access the metadata in the IR.

Can you be more specific? BPI and BFI are used in *many* places, and on an
initial inspection, almost everywhere that accesses MD_prof directly
appears to do so in order to *set* or *update* the profile information
without doing detailed analysis on it. Which seems consistent with my
outline of the invariants?

> Not sure what you mean. Profile data in general does not extend to IPA
> (we will reopen discussion on that soon), but the profile summary is
> 'invariant'/read-only data, which should be available to IPA already.

I don't know what you mean by "invariant" or read-only data here. I think
that whether or not the profile information is mutated shouldn't influence
the design invariants I described above.
Philip Reames via llvm-dev
2016-Apr-18 23:32 UTC
[llvm-dev] Move InlineCost.cpp out of Analysis?
On 04/18/2016 03:00 PM, Chandler Carruth via llvm-dev wrote:
>
> c) There could be many external profile sources, but all of them should
> be read and then translated into metadata annotations on the IR so that
> serialization / deserialization preserve them in a common format and we
> can reason about how they work.
>
> [...]
>
> -Chandler

+1 to this.

p.s. I have my own source of profiling information which I translate into
metadata. The fact that this "just works" is valuable and is definitely a
design goal we should retain.