On Tue, Sep 25, 2018 at 7:37 PM Sanjay Patel <spatel at rotateright.com> wrote:
>
>
> On Tue, Sep 25, 2018 at 2:28 PM Cameron McInally <cameron.mcinally at nyu.edu> wrote:
>
>> On Tue, Sep 25, 2018 at 1:39 PM Sanjay Patel <spatel at rotateright.com> wrote:
>>
>>> I have 1 concern about adding an explicit fneg op to IR:
>>>
>>> Currently, fneg qualifies as a binop in IR (since there's no other way
>>> to represent it), and we have IR transforms that are based on matching
>>> that pattern (m_BinOp). With a proper unary fneg instruction, those
>>> transforms are not going to fire anymore. So I think we'll need to
>>> duplicate code to account for that difference (or hack the matcher to
>>> include fneg?).
>>>
>>> The existing regression tests probably aren't going to show optimization
>>> failures for those cases*, so I'm not sure how to detect regressions
>>> without doing an audit of FP transforms in instcombine and other passes.
>>>
>>
>> Implementation details. ¯\_(ツ)_/¯
>>
>> Seriously though, this is interesting and something that I had not
>> considered. Thinking aloud, what if we caught the FNEG pattern in
>> the CreateFSub*(...) IRBuilder functions and generated an FNeg
>> instruction instead? That way we could get rid of the m_FNeg(...) calls
>> altogether. They would just be replaced with something like:
>> (I.getOpcode() == Instruction::FNeg). Any transform that currently uses
>> m_FNeg(...) would fail to build without an update.
>>
>
> We don't want to get rid of the m_FNeg calls. Those are safe because
> they're providing an abstraction to the callers. I.e., anywhere we use
> m_FNeg() or Builder.CreateFNeg() should Just Work regardless of how we
> implement the underlying fneg operation. (We have to fix those functions
> themselves of course to deal with the new opcode.) It's the code that
> doesn't use those abstractions that has the potential to regress.
>
> Here's an example:
> https://reviews.llvm.org/rL343041
>
> The transform in question is in InstCombiner::foldShuffledBinop(). It
> moves a binop ahead of a vector shuffle. It's called for every vector
> binop instruction, so if IR has a unary fneg and we want that to behave
> as it does today, we'd have to duplicate that fold for fneg (or fake it
> by substituting an fsub).
>
> But after rummaging around in instcombine at least, I'm actually not too
> worried by the proposed change. Transforms on generic binops are not
> common.
>
Oh, I see. So you're worried that an FNeg won't match the generic
m_c_BinOp(...) as it does now. That's a reasonable concern.
This is the first time I'm looking at foldShuffledBinop(...), so maybe a
naive question, but why not do similar shuffle canonicalizations on unary
(or ternary) operations? That may be a better fix in the long run.
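To make the concern concrete for anyone following along: today there is no unary fneg opcode, so a negate is encoded as an fsub binop and every generic binop fold sees it. A small IR sketch (mine, not from the patch under review):

```llvm
; Today: fneg is encoded as a binary fsub from -0.0,
; so generic matchers like m_BinOp() fire on it.
%neg = fsub float -0.0, %x

; Proposed: a true unary instruction, which generic binop
; folds such as foldShuffledBinop() would no longer visit.
%neg.unary = fneg float %x
```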
> But glancing at Reassociate.cpp is scarier. It does a lot of stuff like
> this:
> if (BinaryOperator::isNeg(TheOp) || BinaryOperator::isFNeg(TheOp))
> X = BinaryOperator::getNegArgument(TheOp);
>
> I think that's going to fail without a (terrible) hack to treat the
> proposed fneg as a binop. So that's probably a preliminary requirement:
> find all uses of BinaryOperator::isFNeg() and update them to use m_FNeg().
> That should be done ahead of the IR change to limit damage. I don't know
> if that's enough as-is, but we know that code will break without an
> update.
>
Ah, yeah. So this worries me too. Both integer and floating point negate
are syntactic sugar for a subtraction: (0 - x) for integers and (-0.0 - x)
for floating point. I did not know that. It would be ugly to unwrap
floating point negate without doing the same for integer negate. :/