On Sat, Feb 29, 2020 at 2:25 PM David Blaikie via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
>
>
> On Sat, Feb 29, 2020 at 2:19 PM Chris Lattner <clattner at nondot.org> wrote:
>
>> On Feb 29, 2020, at 2:08 PM, David Blaikie <dblaikie at gmail.com> wrote:
>>
>>> I'm curious as to how MLIR deals with IPO as that's the problem I was
>>> running into.
>>>
>>
>> FWIW I believe LLVM's new pass manager (NPM) was designed with
>> parallelism and the ability to support this situation (that MLIR doesn't?
>> Or doesn't to the degree/way in which the NPM does). I'll leave it to
>> folks (Chandler probably has the most context here) to provide some more
>> detail there if they can/have time.
>>
>>
>> Historically speaking, all of the LLVM pass managers have been designed
>> to support multithreaded compilation (check out the ancient history of
>> the WritingAnLLVMPass <http://llvm.org/docs/WritingAnLLVMPass.html> doc
>> if curious).
>>
>
> I think the specific thing that might've been a bit different in the NPM
> had to do with analysis invalidation being handled in a way that's more
> parallelism-friendly than in the previous one - but I may be
> misrepresenting/misunderstanding some of it.
>
>
>> The problem is that LLVM has global use-def chains on constants,
>> functions, and globals, etc., so it is impractical to do this. Every
>> “inst->setOperand” would have to be able to take locks or use something
>> like software transactional memory techniques in its implementation.
>> This would be very complicated and very slow.
>>
>
> Oh, yeah - I recall that particular limitation being discussed but not
> yet addressed.
>
>
>> MLIR defines this away from the beginning. This is a result of the core
>> IR design, not the pass manager design itself.
>>
>
> What does MLIR do differently here/how does it define that issue away?
> (doesn't have use-lists built-in?)
>
The major thing is that constants and global-like objects don't produce SSA
values and thus don't have use-lists.
https://mlir.llvm.org/docs/Rationale/#multithreading-the-compiler discusses
this a bit.

For constants, the data is stored as an Attribute (context-uniqued metadata
that has no use-list and is not an SSA value). This attribute can either be
placed directly in an operation's attribute list (if the operand is always
constant, like the value of a switch case), or it must be explicitly
materialized as an SSA value by some operation. For example, the
`std.constant
<https://mlir.llvm.org/docs/Dialects/Standard/#constant-operation>`
operation will materialize an SSA value from some attribute data.
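
As a rough illustrative sketch (the function name below is made up, and the
exact printed syntax may differ between versions, but the ops are from the
std dialect), here is the difference between an attribute carried in an
operation's attribute list and one materialized into an SSA value:

  func @attr_vs_value(%arg0: i32) -> i32 {
    // The "eq" predicate of cmpi is an attribute: always constant, stored
    // in the op's attribute list, with no SSA value or use-list involved.
    %cond = cmpi "eq", %arg0, %arg0 : i32
    // To get a constant *value*, the attribute data is materialized into a
    // local SSA value by std.constant; only this op's result has uses.
    %c42 = constant 42 : i32
    return %c42 : i32
  }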

For references to functions and other global-like objects, we have a
non-SSA mechanism built around `symbols`. This is essentially using a
special attribute to reference the function by name instead of by SSA
value. You can find more information on MLIR symbols here
<https://mlir.llvm.org/docs/SymbolsAndSymbolTables/>.
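
A sketch of what that looks like in the IR (the function names here are made
up for illustration; the ops are std.constant/std.addi/std.call):

  func @increment(%arg0: i32) -> i32 {
    %c1 = constant 1 : i32
    %0 = addi %arg0, %c1 : i32
    return %0 : i32
  }

  func @caller(%arg0: i32) -> i32 {
    // @increment is a symbol reference attribute on the call operation,
    // not an SSA use of some "function value", so the callee never grows
    // a use-list that passes running on other functions could race on.
    %0 = call @increment(%arg0) : (i32) -> i32
    return %0 : i32
  }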

Along with the above, there is a trait that can be attached to operations
called `IsolatedFromAbove
<https://mlir.llvm.org/docs/Traits/#isolatedfromabove>`. This essentially
means that no SSA values defined above a region can be referenced from
within that region. The pass manager only allows scheduling passes on
operations that have this property, meaning that all pipelines are
implicitly multi-threaded.
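
Concretely (again just a sketch with made-up function names), `func` has the
IsolatedFromAbove trait, so nothing inside a function body can reference an
SSA value defined in the enclosing module; a pipeline scheduled on `func`
can therefore process @a and @b on separate threads without touching any
shared use-def state:

  module {
    func @a(%arg0: i32) -> i32 {
      // Only values defined inside @a (block arguments and the results of
      // ops in its body) are visible here.
      return %arg0 : i32
    }
    func @b() -> i32 {
      %c0 = constant 0 : i32
      return %c0 : i32
    }
  }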

The pass manager in MLIR was heavily inspired by the work on the new pass
manager in LLVM, but with specific constraints/requirements that are unique
to the design of MLIR. That being said, there are some usability features
added that would also make great additions to LLVM: instance-specific pass
options and statistics, pipeline crash reproducer generation, etc.

Not sure if any of the above helps clarify, but happy to chat more if you
are interested.
-- River
> - Dave
>
>
>>
>> -Chris
>>