> when we have a mask loaded from an external source (memory, function call
> boundary, etc...) and a short sequence of vector ops

A mask value arriving as a function call parameter is common. An OpenMP declare
simd function does exactly that for the masked cases.
From: Philip Reames [mailto:listmail at philipreames.com]
Sent: Thursday, January 31, 2019 4:05 PM
To: Robin Kruppe <robin.kruppe at gmail.com>
Cc: David Greene <dag at cray.com>; via llvm-dev <llvm-dev at lists.llvm.org>;
Saito, Hideki <hideki.saito at intel.com>; Topper, Craig <craig.topper at intel.com>;
Maslov, Sergey V <sergey.v.maslov at intel.com>
Subject: Re: [llvm-dev] [RFC] Vector Predication
On 1/31/19 1:14 PM, Robin Kruppe wrote:
On Thu, 31 Jan 2019 at 20:17, Philip Reames via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
On 1/31/19 11:03 AM, David Greene wrote:
> Philip Reames <listmail at philipreames.com> writes:
>
>> Question 1 - Why do we need separate mask and lengths? Can't the
>> length be easily folded into the mask operand?
>>
>> e.g. newmask = (<4 x i1>)((i4)%y & ((1 << %L) - 1))
>> and then pattern matched in the backend if needed
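
For concreteness, such a folding might look roughly like the IR below. This is
a minimal sketch; the function name, the step-vector compare idiom, and the
fixed <4 x i32> types are illustrative assumptions, not something proposed in
the thread.

; Sketch: fold an active length %L into an existing mask %y by comparing a
; constant step vector against a splat of %L and and'ing the result with %y.
; A backend with a vector length register would have to pattern-match this
; icmp+and pair back out.
define <4 x i1> @fold_len_into_mask(<4 x i1> %y, i32 %L) {
  %L.ins   = insertelement <4 x i32> undef, i32 %L, i32 0
  %L.splat = shufflevector <4 x i32> %L.ins, <4 x i32> undef, <4 x i32> zeroinitializer
  %lenmask = icmp ult <4 x i32> <i32 0, i32 1, i32 2, i32 3>, %L.splat
  %newmask = and <4 x i1> %lenmask, %y
  ret <4 x i1> %newmask
}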
> I'm a little concerned about how difficult it will be to maintain enough
> information throughout compilation to be able to match this on a machine
> with an explicit vector length value.
Does the hardware *also* have a mask register? If so, this is a likely
minor code quality issue which can be incrementally refined on. If it
doesn't, then I can see your concern.
Masking/predication is supported nearly universally, but I don't think the
code quality issue is minor. It would be on a typical packed-SIMD machine with
128/256/512-bit registers, but processors with a vector length register are
usually built with much larger register files and without a corresponding
increase in the number of functional units. For example, 4096 bits per vector
register is really quite modest for this kind of machine, while the data path
can reasonably be "only" 128 or 256 bits wide.
This changes the calculus quite a bit: vector lengths much shorter than, or only
minimally larger than, one full register are suddenly reasonably common (in
application code, not so much in HPC kernels), and because each vector
instruction is split into many data-path-sized uops, it's trivial and very
rewarding to cut processing short halfway through a vector. The efficiency of
"short vector code" then depends on the ability to finish each operation on
those short vectors relatively quickly rather than padding everything to a full
vector register.
For example, if a loop with a trip count of 20 is vectorized on a machine with
64 elements per vector (that's 64b elements in a 4096b register, so this is
lowballing it!), using only masks and not the vector length register makes your
vector unit do about three times more work than it would have to if you set the
vector length register to 20. That keeps the register file and functional units
busy for no good reason. Some microarchitectures take on the burden of
determining when a whole chunk of the vector is masked out and can then skip
over it quickly, but many others don't. So you're likely burning a whole
bunch of power and quite possibly taking up cycles that could be filled with
useful work from other instructions instead.
Thank you for the explanation.
Do such architectures frequently have arithmetic operations on the mask
registers? (i.e. can I reasonably compute a conservative length given a mask
register value) If I can, then having a mask as the canonical form and
re-deriving the length register from a mask for a sequence of instructions which
share a predicate seems fairly reasonable. Note that I'm assuming this as a
fallback, and that the common case is handled via the equivalent of
ComputeKnownBits on the mask itself at compile time.
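
Such a dynamic mask->length fallback could be as small as "index of the highest
active lane, plus one". A minimal sketch follows; the bitcast idiom, the names,
and the assumption that lane 0 maps to the lowest bit are illustrative only.

declare i32 @llvm.ctlz.i32(i32, i1)

; Sketch: treat the mask as a bit pattern and take (index of highest set
; bit) + 1 as a conservative active length; 0 when no lane is active.
define i32 @conservative_length(<4 x i1> %mask) {
  %bits.i4 = bitcast <4 x i1> %mask to i4
  %bits    = zext i4 %bits.i4 to i32
  %lz      = call i32 @llvm.ctlz.i32(i32 %bits, i1 false) ; 32 when %bits == 0
  %len     = sub i32 32, %lz
  ret i32 %len
}

Whether a target can do this cheaply is exactly the question above about
arithmetic on mask registers.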
The only case that the combination of CKB and a dynamic mask->length fallback
wouldn't handle reliably is when we have a mask loaded from an external source
(memory, function call boundary, etc...) and a short sequence of vector ops. Are
such cases really common enough that this needs to be a first-class element of
the design?
p.s. To make sure my tone is coming across correctly, let me spell out that
I'm not convinced, but I'm not actively objecting. I'm playing devil's
advocate for the purposes of fleshing out a design, but if folks more
knowledgeable than I strongly believe the right design requires both masks and
lengths, I'm happy to defer on that point.
Cheers,
Robin
>> Question 2 - Have you explored using selects instead? What practical
>> problems do you run into which make you believe explicit predication
>> is required?
>>
>> e.g. %sub = fsub <4 x float> %x, %y
>> %result = select <4 x i1> %M, <4 x float> %sub, <4 x float> undef
> That is semantically incorrect. According to IR semantics, the fsub is
> fully evaluated before the select comes along. It could trap for
> elements where %M is 0, whereas a masked intrinsic conveys the proper
> semantics of masking traps for masked-out elements. We need intrinsics
> and eventually (IMHO) fully first-class predication to make this work
> properly.
If you want specific trap behavior, you need to use the constrained
family of intrinsics instead. In IR, fsub is expected not to trap.
We have an existing solution for modeling FP environment aspects such as
rounding and trapping. The proposed signatures for your EVL proposal do
not appear to subsume those, and you've not proposed their retirement.
We definitely don't want *two* ways of describing FP trapping.
In other words, I don't find this reason compelling since my example can
simply be rewritten using the appropriate constrained intrinsic.
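
As a rough sketch of that rewrite (assuming the vector overload of
llvm.experimental.constrained.fsub and glossing over the function/call
attributes a real module would need), the Question 2 example might become:

declare <4 x float> @llvm.experimental.constrained.fsub.v4f32(
    <4 x float>, <4 x float>, metadata, metadata)

; Sketch: the subtraction expressed via the constrained intrinsic, so the
; FP-environment/trapping behaviour is modelled explicitly, followed by the
; same select over the mask.
define <4 x float> @masked_sub(<4 x float> %x, <4 x float> %y, <4 x i1> %M) {
  %sub = call <4 x float> @llvm.experimental.constrained.fsub.v4f32(
             <4 x float> %x, <4 x float> %y,
             metadata !"round.dynamic", metadata !"fpexcept.strict")
  %result = select <4 x i1> %M, <4 x float> %sub, <4 x float> undef
  ret <4 x float> %result
}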
>
>> My context for these questions is that my experience recently w/ the
>> existing masked intrinsics shows us missing fairly basic
>> optimizations, precisely because they weren't able to reuse all of the
>> existing infrastructure. (I've been working on
>> SimplifyDemandedVectorElts recently for exactly this reason.) My
>> concern is that your EVL proposal will end up in the same state.
> I think that's just the nature of the beast. We need IR-level support
> for masking and we have to teach LLVM about it.
I'm solidly of the opinion that we already *have* IR support for
explicit masking in the form of gather/scatter/etc... Until someone has
taken the effort to make masking in this context *actually work well*,
I'm unconvinced that we should greatly expand the usage in the
IR.
>
> -David
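
For reference, the existing masked memory intrinsics referred to above
("gather/scatter/etc...") already take an explicit mask operand. A minimal
sketch using the typed-pointer-era llvm.masked.load; the alignment and passthru
values here are arbitrary choices for illustration:

declare <4 x float> @llvm.masked.load.v4f32.p0v4f32(
    <4 x float>*, i32, <4 x i1>, <4 x float>)

; Sketch: a masked load that only touches the lanes enabled in %M; masked-off
; lanes take their value from the passthru operand (zero here).
define <4 x float> @load_active(<4 x float>* %p, <4 x i1> %M) {
  %v = call <4 x float> @llvm.masked.load.v4f32.p0v4f32(
           <4 x float>* %p, i32 16, <4 x i1> %M, <4 x float> zeroinitializer)
  ret <4 x float> %v
}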