Hal Finkel
2012-May-14 21:14 UTC
[LLVMdev] SIV tests in LoopDependence Analysis, Sanjoy's patch
On Mon, 14 May 2012 13:54:19 -0700 Preston Briggs <preston.briggs at gmail.com> wrote:

> On Mon, May 14, 2012 at 1:30 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> > Can you explain this comment:
> > > With minor algebra, this test can also be used for things like
> > > [c1 + a1*i + a2*j][c2].
>
> It's really too simple to deserve mention...
> Given a subscript pair, [c1 + a1*i + a2*j] and [c2], we can test for
> dependence using the RDIV test by rewriting it as [c1 + a1*i] and
> [c2 - a2*j].
>
> > I think this looks good, but, as you say below, we also need to work
> > on the top-level infrastructure. It will be hard to validate and
> > evaluate this code without a framework in which to use it.
>
> Yep. I've written a simple framework to do tests on my own, but it
> needs to be extended to be more generally useful.
>
> > One thing that I would like to mention is that 'use' here should
> > also include user feedback. This means being able to pass
> > information back to the frontends about which loops are being
> > effectively analyzed, and for loops that are not, why not.
>
> Absolutely. I've been thinking in terms of passing info back to the
> programmer (see
> https://sites.google.com/site/parallelizationforllvm/feedback). It's a
> very interesting problem and one where I think there are real research
> possibilities.

Do you think that we should do this by adding metadata, or should we
establish some kind of separate feedback channel? Metadata would make
it more useful for writing regression tests (perhaps), but a separate
feedback channel might be more useful for the front ends. Maybe we
should have a separate feedback channel that, lacking any other
consumer, writes out metadata?

 -Hal

> Preston

--
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory
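To make the quoted algebra explicit (this is only a restatement of what Preston describes above, using his c1, c2, a1, a2 and the index variables i and j): a dependence between the two references requires the subscript equation to have a solution with i and j inside their loop bounds, and moving the a2*j term to the other side yields a pair in which each subscript is linear in a single, distinct index variable, which is exactly the form the RDIV (restricted double-index-variable) test handles:

\[
  c_1 + a_1 i + a_2 j = c_2
  \quad\Longleftrightarrow\quad
  a_1 i + c_1 = (-a_2)\, j + c_2 ,
\]

so the rewritten pair [c1 + a1*i] and [c2 - a2*j] can be handed to the RDIV test unchanged, with coefficients a1, c1 on one side and -a2, c2 on the other.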
Preston Briggs
2012-May-14 22:18 UTC
[LLVMdev] SIV tests in LoopDependence Analysis, Sanjoy's patch
On Mon, May 14, 2012 at 2:14 PM, Hal Finkel <hfinkel at anl.gov> wrote:
>> > One thing that I would like to mention is that 'use' here should
>> > also include user feedback. This means being able to pass
>> > information back to the frontends about which loops are being
>> > effectively analyzed, and for loops that are not, why not.
>>
>> Absolutely. I've been thinking in terms of passing info back to the
>> programmer (see
>> https://sites.google.com/site/parallelizationforllvm/feedback). It's a
>> very interesting problem and one where I think there are real research
>> possibilities.
>
> Do you think that we should do this by adding metadata, or should we
> establish some kind of separate feedback channel? Metadata would make
> it more useful for writing regression tests (perhaps), but a separate
> feedback channel might be more useful for the front ends. Maybe we
> should have a separate feedback channel that, lacking any other
> consumer, writes out metadata?

I don't know what's best. Probably different uses merit different
mechanisms.

At Tera, we did regression tests by using command-line flags to provoke
particular passes to dump so-called signature information to stderr.
For example, -trace:PAR_SIG would cause the parallelizer to dump out a
condensed account of what it did for each loop nest. Similarly,
-trace:LS_SIG would cause the loop scheduler (i.e., software pipeliner)
to dump a summary for each inner loop. As part of each night's tests,
we'd compare the signatures against standards and report differences.

Later, we developed a tool called "Canal" that took essentially the
same information and used it to produce an annotated listing, where
each loop nest was marked to indicate what the parallelizer had done to
it, each inner loop was decorated with info about what happened during
software pipelining, etc. Still later, we built a GUI tool to report
the same info in a little more convenient fashion.

The most useful information came from the various transformation
passes. For example, the parallelizer would report that certain
loop-carried dependences prevented parallelization; the software
pipeliner would report the length of inner loops (in ticks), the
balance between memory references and floating-point ops, and so forth.

Preston
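For concreteness, here is a minimal sketch of what a comparable signature dump could look like if expressed with LLVM's usual command-line and output facilities. The -par-sig flag name, the emitLoopSignature helper, and the one-line format are invented for illustration; they do not correspond to anything in Tera's tools or in LLVM.

#include "llvm/ADT/StringRef.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/raw_ostream.h"

// Hypothetical flag, in the spirit of Tera's -trace:PAR_SIG: when set,
// the (hypothetical) parallelizer prints one stable line per loop nest.
static llvm::cl::opt<bool>
    DumpParSig("par-sig", llvm::cl::Hidden,
               llvm::cl::desc("Dump a one-line parallelization signature "
                              "per loop nest (illustrative only)"));

// Called once per analyzed loop nest. Each line is deliberately terse and
// stable so a nightly harness can diff the output against a stored standard.
static void emitLoopSignature(llvm::StringRef Func, unsigned Line,
                              bool Parallelized, llvm::StringRef Reason) {
  if (!DumpParSig)
    return;
  llvm::errs() << "PAR_SIG " << Func << ":" << Line << " "
               << (Parallelized ? "parallel" : "serial");
  if (!Reason.empty())
    llvm::errs() << " reason=" << Reason;
  llvm::errs() << "\n";
}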
Hal Finkel
2012-May-14 22:52 UTC
[LLVMdev] SIV tests in LoopDependence Analysis, Sanjoy's patch
On Mon, 14 May 2012 15:18:12 -0700 Preston Briggs <preston.briggs at gmail.com> wrote:

> On Mon, May 14, 2012 at 2:14 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> >> > One thing that I would like to mention is that 'use' here should
> >> > also include user feedback. This means being able to pass
> >> > information back to the frontends about which loops are being
> >> > effectively analyzed, and for loops that are not, why not.
> >>
> >> Absolutely. I've been thinking in terms of passing info back to the
> >> programmer (see
> >> https://sites.google.com/site/parallelizationforllvm/feedback).
> >> It's a very interesting problem and one where I think there are
> >> real research possibilities.
> >
> > Do you think that we should do this by adding metadata, or should we
> > establish some kind of separate feedback channel? Metadata would
> > make it more useful for writing regression tests (perhaps), but a
> > separate feedback channel might be more useful for the front ends.
> > Maybe we should have a separate feedback channel that, lacking any
> > other consumer, writes out metadata?
>
> I don't know what's best. Probably different uses merit different
> mechanisms.

While this is true, I would prefer that we have a framework to handle
this so that it does not turn into a 'mechanism zoo'.

> At Tera, we did regression tests by using command-line flags to
> provoke particular passes to dump so-called signature information to
> stderr. For example, -trace:PAR_SIG would cause the parallelizer to
> dump out a condensed account of what it did for each loop nest.
> Similarly, -trace:LS_SIG would cause the loop scheduler (i.e.,
> software pipeliner) to dump a summary for each inner loop. As part of
> each night's tests, we'd compare the signatures against standards and
> report differences.
>
> Later, we developed a tool called "Canal" that took essentially the
> same information and used it to produce an annotated listing, where
> each loop nest was marked to indicate what the parallelizer had done
> to it, each inner loop was decorated with info about what happened
> during software pipelining, etc. Still later, we built a GUI tool to
> report the same info in a little more convenient fashion.

This is what I would like to enable, but what I don't want is for this
to involve parsing a bunch of arbitrarily-formatted text strings
produced by different backend passes. Structured text is fine, I think;
we'd just need a way of attaching it to source locations and conveying
it to the frontends. Metadata is fine (as it is also structured),
although it has the disadvantage that later optimization passes might
drop it(?).

> The most useful information came from the various transformation
> passes. For example, the parallelizer would report that certain
> loop-carried dependences prevented parallelization; the software
> pipeliner would report the length of inner loops (in ticks), the
> balance between memory references and floating-point ops, and so
> forth.

This certainly makes sense. FWIW, I think that we should have a system
such that the transformation passes can trigger sending analysis
information back to the frontend as part of this process. Loop-carried
dependences are one good use case; alias-analysis information is, IMHO,
another.

 -Hal

> Preston

--
Hal Finkel
Postdoctoral Appointee
Leadership Computing Facility
Argonne National Laboratory
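As a rough sketch of what the metadata route might look like: a transformation pass could hang a small string node off a loop's branch instruction, so a frontend or listing tool can read it back and pair it with the instruction's debug location to report at the source level. The metadata kind string "llvmdev.loop.feedback" and the helper name below are placeholders invented here, not an existing convention, and the C++ API shown is the modern one, which differs in detail from the 2012-era interfaces.

#include "llvm/ADT/StringRef.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

// Hypothetical helper: record why a loop was (or was not) parallelized by
// attaching a single-operand string node to the loop's branch instruction.
// The kind name "llvmdev.loop.feedback" is a made-up placeholder.
static void attachLoopFeedback(Instruction *LoopBranch, StringRef Message) {
  LLVMContext &Ctx = LoopBranch->getContext();
  MDNode *Note = MDNode::get(Ctx, {MDString::get(Ctx, Message)});
  // A consumer can later read this node and combine it with the branch's
  // debug location to point back at the original source loop.
  LoopBranch->setMetadata("llvmdev.loop.feedback", Note);
}

The appeal is that the note travels with the IR and needs no side channel; the drawback, as noted above, is that nothing obliges later passes to preserve it.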