Sanjoy Das via llvm-dev
2017-Dec-15 07:22 UTC
[llvm-dev] RFC: Exposing TargetTransformInfo factories from TargetMachine
Hi all,

I'd like to be able to use TargetTransformInfo to make architecture
specific code generation decisions from the XLA LLVM IR emitter[0].
However, we don't have a great way to do this today --
TargetTransformInfo is wrapped in a TargetIRAnalysis, and getting the
TargetTransformInfo out of it requires something like:

  FunctionAnalysisManager DummyFAM;
  TTI = TIRA.run(F, DummyFAM);
  return *TTI;

which isn't ideal.

Given that all in-tree backends have a factory function to directly
construct a TargetTransformInfo implementation anyway, what do you
think about exposing said factory function from the TargetMachine
subclasses directly?  Something conceptually like
https://reviews.llvm.org/D41268, but for all backends and with less
std::function?

[0]: XLA is a machine learning focussed linear algebra compiler
(https://www.tensorflow.org/performance/xla/) that uses LLVM for its
CPU and GPU backends.

-- Sanjoy
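To make the "not ideal" extraction above concrete, here is a minimal,
self-contained sketch of the dummy-analysis-manager workaround written
as a standalone helper.  The helper name and parameter types are
assumptions for illustration; the original snippet lives inside XLA's
emitter and is only quoted in fragments above.

  // Sketch of the workaround described in the mail: pull a
  // TargetTransformInfo out of a TargetIRAnalysis by handing it a
  // throwaway FunctionAnalysisManager.
  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/PassManager.h"

  static llvm::TargetTransformInfo
  getTTIViaDummyFAM(llvm::TargetIRAnalysis &TIRA, const llvm::Function &F) {
    // The analysis manager exists only to satisfy the signature of
    // TargetIRAnalysis::run; nothing is ever cached in it.
    llvm::FunctionAnalysisManager DummyFAM;
    return TIRA.run(F, DummyFAM);
  }

The point of the RFC is that a caller outside a pass pipeline (like the
XLA emitter) should not need the DummyFAM at all.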
Hal Finkel via llvm-dev
2017-Dec-15 13:30 UTC
[llvm-dev] RFC: Exposing TargetTransformInfo factories from TargetMachine
On 12/15/2017 01:22 AM, Sanjoy Das via llvm-dev wrote:
> Hi all,
>
> I'd like to be able to use TargetTransformInfo to make architecture
> specific code generation decisions from the XLA LLVM IR emitter[0].
> However, we don't have a great way to do this today --
> TargetTransformInfo is wrapped in a TargetIRAnalysis, and getting the
> TargetTransformInfo out of it requires something like:
>
>   FunctionAnalysisManager DummyFAM;
>   TTI = TIRA.run(F, DummyFAM);
>   return *TTI;
>
> which isn't ideal.
>
> Given that all in-tree backends have a factory function to directly
> construct a TargetTransformInfo implementation anyway, what do you
> think about exposing said factory function from the TargetMachine
> subclasses directly?  Something conceptually like
> https://reviews.llvm.org/D41268, but for all backends and with less
> std::function?

Are there reasons why we might not want to do this?  Other options we
should consider?

 -Hal

> [0]: XLA is a machine learning focussed linear algebra compiler
> (https://www.tensorflow.org/performance/xla/) that uses LLVM for its
> CPU and GPU backends.
>
> -- Sanjoy

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
Sanjoy Das via llvm-dev
2017-Dec-15 18:12 UTC
[llvm-dev] RFC: Exposing TargetTransformInfo factories from TargetMachine
On Fri, Dec 15, 2017 at 5:30 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> Are there reasons why we might not want to do this?  Other options we
> should consider?

It does make the TargetMachine -> TargetIRAnalysis path less abstract,
but given that all targets have the same pattern of instantiating a
TargetIRAnalysis with a Function -> TargetTransformInfo hook, the
abstraction does not seem particularly useful.

I might do an even simpler form of the patch though -- instead of
returning a function pointer from TargetMachine, just add a virtual
function to TargetMachine that creates the TargetTransformInfo
directly from a Function.

-- Sanjoy

> -Hal
>
>> [0]: XLA is a machine learning focussed linear algebra compiler
>> (https://www.tensorflow.org/performance/xla/) that uses LLVM for its
>> CPU and GPU backends.
>>
>> -- Sanjoy
>
> -- 
> Hal Finkel
> Lead, Compiler Technology and Programming Languages
> Leadership Computing Facility
> Argonne National Laboratory
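For illustration, a rough sketch of the "simpler form" floated above:
a virtual hook on TargetMachine itself.  The method name and the
base-class fallback to the target-independent (no-op) TTI are
assumptions made for the sketch, not the contents of any actual patch.

  // Sketch only: a standalone class standing in for llvm::TargetMachine.
  #include "llvm/Analysis/TargetTransformInfo.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"

  namespace sketch {

  class TargetMachine {
  public:
    virtual ~TargetMachine() = default;

    // Base implementation: return the target-independent (no-op) TTI,
    // which only needs the module's DataLayout.
    virtual llvm::TargetTransformInfo
    getTargetTransformInfo(const llvm::Function &F) {
      return llvm::TargetTransformInfo(F.getParent()->getDataLayout());
    }
  };

  } // end namespace sketch

  // Each backend would override the hook to wrap its concrete TTI
  // implementation, e.g. something along the lines of X86TTIImpl in
  // the X86 backend.

With something like this, a frontend that already owns the
TargetMachine could ask it for a per-function TTI directly, with no
analysis manager in the picture.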