Hi Ayal,

Let me start by commenting on this:

> A dedicated intrinsic that freezes the compare instruction, for no apparent reason, may potentially cripple subsequent passes from further optimizing the vectorized loop.

The point is that we have a very good reason: it passes the right information on to the backend, enabling optimisations as opposed to crippling them. The compare that we are talking about is the one that compares the induction step against the backedge-taken count, and it feeds the masked loads/stores. Thus, for example, we are not talking about the compare controlling the backedge, and it does not affect loop control. While it is undoubtedly true that there could be optimisations that can't handle this particular icmp instruction, I find it difficult to imagine at this point that being unable to analyse this icmp would cripple things.

> Could you elaborate on these more complicated cases and the difficulty they entail?

The problem that we are solving is that we need the backedge-taken count (BTC), or just the iteration count, of the original scalar loop for a given vector loop. Just to be clear: we do not only need the vector iteration count, but also the scalar loop iteration count (IC). We need this for a certain form of predication. This information, the scalar loop IC, is produced by the vectoriser, and is materialised in the form of the instructions that generate the predicates for the masked loads/stores: the icmp of the induction step with the scalar IC. Our current approach works for simple cases, because we pattern match the IR and look for the scalar IC in these icmps that feed the masked loads/stores. To make sure we don't accidentally pattern match a random icmp, we cross-check this against SCEV information. Thus, we have to match up a SCEV expression with pattern-matched IR.
I could give IR examples, but hopefully it's easy to imagine that this pattern matching, and matching it up with SCEV info, becomes a bit horrible for doubly nested loops or reductions. This icmp materialised as @llvm.get.active.lane.mask(%IV, %BTC) avoids all of this, as we can just pick up %BTC in the backend. As we are looking for the scalar loop iteration count, not the VIV, I don't think SCEV for vector loops is going to be helpful.

Please let me know if I can elaborate further, or if things are not clear.

Cheers,
Sjoerd.

________________________________
From: Zaks, Ayal (Mobileye) <ayal.zaks at intel.com>
Sent: 20 May 2020 20:39
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; Eli Friedman <efriedma at quicinc.com>
Cc: llvm-dev at lists.llvm.org <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] LV: predication

I realize this discussion and D79100 have progressed, sorry, but could we revisit the "simplest path" of deriving the desired number?

> This is what we are currently doing and it works excellently for simpler cases. For the more complicated cases that we now want to handle as well, the pattern matching just becomes a bit too horrible, and it is fragile too.

Could you elaborate on these more complicated cases and the difficulty they entail? Presumably a vector compare of a "Vector Induction Variable" with a broadcasted invariant value is sought, to be RAUW'd by a hardware-configured mask. Is it the recognition of VIVs that's becoming horrible and fragile? It may be generally useful to have a robust utility and/or analysis that identifies such VIVs, effectively extending SCEV to reason about vector values, rather than complicating any backend pass. Middle-end passes may find this information useful too, operating after LV, or on vector IR produced elsewhere. This is somewhat analogous to the argument about relying on a canonical induction variable versus employing SCEV to derive it: http://lists.llvm.org/pipermail/llvm-dev/2020-April/140572.html.
A dedicated intrinsic that freezes the compare instruction, for no apparent reason, may potentially cripple subsequent passes from further optimizing the vectorized loop.

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Sjoerd Meijer via llvm-dev
Sent: Friday, May 01, 2020 21:54
To: Eli Friedman <efriedma at quicinc.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] LV: predication

Hi Eli,

> The problem with your proposal, as written, is that the vectorizer is producing the intrinsic. Because we don't impose any ordering on optimizations before codegen, every optimization pass in LLVM would have to be taught to preserve any @llvm.set.loop.elements.i32 whenever it makes any change. This is completely impractical because the intrinsic isn't related to anything optimizations would normally look for: it's a random intrinsic in the middle of nowhere.

I do see that point. But is that also not the beauty of it? It just sits in the preheader; if it gets removed, then so be it. And if it is not recognised, then also no harm done?

> Probably the simplest path to get this working is to derive the number of elements in the backend (in HardwareLoops, or your tail predication pass). You should be able to figure it out from the masks used in the llvm.masked.load/store instructions in the loop.

This is what we are currently doing and it works excellently for simpler cases. For the more complicated cases that we now want to handle as well, the pattern matching just becomes a bit too horrible, and it is fragile too. All we need is the information that the vectoriser already has, passed on somehow. As I am really keen to simplify our backend pass, would there be another way to pass this information on? If emitting an intrinsic is a blocker, could this be done with a loop annotation?

Cheers,
Sjoerd.
________________________________
From: Eli Friedman <efriedma at quicinc.com>
Sent: 01 May 2020 19:30
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] LV: predication

The problem with your proposal, as written, is that the vectorizer is producing the intrinsic. Because we don't impose any ordering on optimizations before codegen, every optimization pass in LLVM would have to be taught to preserve any @llvm.set.loop.elements.i32 whenever it makes any change. This is completely impractical because the intrinsic isn't related to anything optimizations would normally look for: it's a random intrinsic in the middle of nowhere.

Probably the simplest path to get this working is to derive the number of elements in the backend (in HardwareLoops, or your tail predication pass). You should be able to figure it out from the masks used in the llvm.masked.load/store instructions in the loop.

-Eli

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Sjoerd Meijer via llvm-dev
Sent: Friday, May 1, 2020 3:50 AM
To: llvm-dev at lists.llvm.org
Subject: [EXT] [llvm-dev] LV: predication

Hello,

We are working on predication for our vector extension (MVE). Since quite a few people are working on predication and different forms of it (e.g. SVE, RISC-V, NEC), I thought I would share what we would like to add to the loop vectoriser. Hopefully it's just a minor change and not intrusive, but it could be interesting and useful for others, and feedback on this is welcome of course.
TL;DR: we would like the loop vectoriser to emit a new IR intrinsic for certain loops:

void @llvm.set.loop.elements.i32(i32)

This represents the number of data elements processed by a vector loop, and will be emitted in the preheader block of the vector loop after querying TTI to check that the backend understands this intrinsic and that it should be emitted for that loop. The vectoriser patch is available in D79100, and we pick this intrinsic up in the ARM backend in D79175.

Context: we are working on a predication form that we call tail-predication: a vector hardware loop has an implicit form of predication that sets active/inactive lanes for the last iteration of the vector loop. Thus, the scalar epilogue loop (if there is one) is tail-folded and tail-predicated into the main vector body. To support this, we need to know the number of data elements processed by the loop, which is used in the set-up of a tail-predicated vector loop. This new intrinsic communicates this information from the vectoriser to the codegen passes where we further lower these loops. In our case, we essentially let @llvm.set.loop.elements.i32 carry the trip count of the scalar loop, which represents the number of data elements processed. Thus, we let the vectoriser emit both the scalar and vector loop trip counts. Although at a different stage in the optimisation pipeline, this is exactly what the generic HardwareLoop pass does to communicate its information to target-specific codegen passes; it emits a few intrinsics to mark a hardware loop. To illustrate this and the new intrinsic, here is the flow and life of a tail-predicated vector loop, using some heavily edited/reduced examples.
First, the vectoriser emits the number of elements processed, and the loads/stores are masked because tail-folding is applied:

vector.ph:
  call void @llvm.set.loop.elements.i32(i32 %N)
  br label %vector.body

vector.body:
  call <4 x i32> @llvm.masked.load
  call <4 x i32> @llvm.masked.load
  call void @llvm.masked.store
  br i1 %12, label %.*, label %vector.body

After the HardwareLoop pass this is transformed into the following, which adds the hardware loop intrinsics:

vector.ph:
  call void @llvm.set.loop.elements.i32(i32 %N)
  call void @llvm.set.loop.iterations.i32(i32 %5)
  br label %vector.body

vector.body:
  call <4 x i32> @llvm.masked.load
  call <4 x i32> @llvm.masked.load
  call void @llvm.masked.store
  call i32 @llvm.loop.decrement.reg
  br i1 %12, label %.*, label %vector.body

We then pick this up in our tail-predication pass, remove the @llvm.set.loop.elements intrinsic, and add @vctp, which is our intrinsic that generates the mask of active/inactive lanes:

vector.ph:
  call void @llvm.set.loop.iterations.i32(i32 %5)
  br label %vector.body

vector.body:
  call <4 x i1> @llvm.arm.mve.vctp32
  call <4 x i32> @llvm.masked.load
  call <4 x i32> @llvm.masked.load
  call void @llvm.masked.store
  call i32 @llvm.loop.decrement.reg
  br i1 %12, label %.*, label %vector.body

This is then further lowered to a tail-predicated loop, or reverted to a 'normal' vector loop if some restrictions are not met.

Cheers,
Sjoerd.

---------------------------------------------------------------------
Intel Israel (74) Limited
This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20200520/e432e135/attachment.html>
________________________________
From: Zaks, Ayal (Mobileye) <ayal.zaks at intel.com>
Sent: 21 May 2020 18:44
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; Eli Friedman <efriedma at quicinc.com>
Cc: llvm-dev at lists.llvm.org
Subject: RE: [llvm-dev] LV: predication

> The compare that we are talking about is the one that compares the induction step against the backedge-taken count, and it feeds the masked loads/stores.

The compare of interest is clear, I think. It compares a Vector Induction Variable with a broadcasted loop-invariant value, aka the BTC. Obtaining the latter operand is the goal, clearly, but to do so, the former operand needs to be recognized as a VIV. What if this compare is not generated by LV's fold-tail-by-masking transformation? For example, if a programmer folds the tail manually, by writing

for (int i = 0; i < 1000; i++) {
  if (i < 998) {
    ...
  }
}

which LV then vectorizes, generating the desired compare, treating it like any other compare, w/o recognizing that it's the desired one? Or what if a vectorized loop with the desired compare originates from some front-end, such as OpenCL? Presumably, in all such cases this compare is still/equally of interest.

If you can elaborate on the complicated cases and the difficulty they entail, possibly with concrete IR examples, a general robust way of recognizing the desired compare could perhaps be found. How are doubly nested loops or reductions related here?
________________________________

> The compare of interest is clear, I think. It compares a Vector Induction Variable with a broadcasted loop-invariant value, aka the BTC. Obtaining the latter operand is the goal, clearly, but to do so, the former operand needs to be recognized as a VIV.

Yep, exactly that.

> What if this compare is not generated by LV's fold-tail-by-masking transformation?

I am not sure I completely follow this, because the whole point is that @llvm.get.active.lane.mask() will be generated by the LV and its tail-folding and masking. If a programmer interferes and does a manual fold, it is okay if the LV tail-folding doesn't get triggered (bad luck), but of course we shouldn't be emitting it incorrectly. I feel that this is a slightly different discussion than yesterday's. I.e., we don't need to discuss correctness, because that should always hold; if we generate incorrect code then that is a problem, but it is not a problem related to the proposed @llvm.get.active.lane.mask() intrinsic. But quickly checking the example we have:

%vec.ind = phi <4 x i32> [ <i32 0, i32 1, i32 2, i32 3>, %vector.ph ], [ %vec.ind.next, %vector.body ]
%4 = icmp ult <4 x i32> %vec.ind, <i32 998, i32 998, i32 998, i32 998>

everything looks good here. Using the change in D79100, we don't emit @llvm.get.active.lane.mask(), because we don't exactly have the case VIV <= BTC: here the VIV is a vector phi, and it looks like this follows a different code path in the vectoriser. This is definitely a case we want to support too, but that's a different story. A previous concern was this inhibiting other things, but I don't see that.
What we are changing is this original icmp:

%active.lane.mask = icmp ult <4 x i32> %induction, %broadcast.splat
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> undef)

into this:

%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i32(<4 x i32> %induction, <4 x i32> %broadcast.splat)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> undef)

This @llvm.get.active.lane.mask() has very straightforward semantics: it is just a simple translation of the icmp, and, also importantly, it feeds another intrinsic: the masked load/store. As a side note, after Simon's feedback we will change/simplify @llvm.get.active.lane.mask() and let it operate on 2 scalar operands (the first induction value and the scalar BTC, not a splat), because that would help other cases too; none of this changes anything I mentioned before, and it could even simplify things further.

> If you can elaborate on the complicated cases and the difficulty they entail, possibly with concrete IR examples, a general robust way of recognizing the desired compare could perhaps be found. How are doubly nested loops or reductions related here?

Our pattern-match based approach breaks down for the following kind of examples. They are real-life filters and codes that we are looking at; this one is a slightly simplified version:

for (i = 0; i < N; i++) {
  Sum = 0;
  M = Size - i;
  for (j = 0; j < M; j++)
    Sum += Input[j] * Input[j+i];
  Output[i] = Sum;
}

We are vectorising the inner loop and we need to know its BTC. Its loop upper bound M depends on the outer loop induction variable i, which results in a recursive SCEV expression.
%trip.count.minus.1 = sub i32 %1, 1
%broadcast.splatinsert = insertelement <4 x i32> undef, i32 %trip.count.minus.1, i32 0
%broadcast.splat = shufflevector <4 x i32> %broadcast.splatinsert, <4 x i32> undef, <4 x i32> zeroinitializer
br label %vector.body

vector.body:
  ..
  %4 = icmp ule <4 x i32> %induction, %broadcast.splat
  call <4 x i16> @llvm.masked.load.v4i16.p0v4i16(<4 x i16>* %6, i32 2, <4 x i1> %4,

Starting the pattern match from the icmp and %broadcast.splat, which is the BTC, we find %trip.count.minus.1 as the inner loop trip count, but we need to be certain about this. This is where we use SCEV: we check whether the pattern-matched IR definition matches the SCEV information. For simple 1-d loops, the SCEV expression for the trip count has this general form:

(1 + ((-4 + (4 * ((3 + %Elems) /u 4))<nuw>) /u 4))<nuw><nsw>

where %Elems is the scalar trip count. For the example above, that means that if %Elems == %trip.count.minus.1, we know we have found the right trip count, and that's what we will use. But for the 2-d example we get:

(1 + ((-4 + (4 * ({(3 + %S),+,-1}<nw> /u 4))<nuw>) /u 4))<nuw><nsw>

First of all, this is a recursive SCEV, but the real problem is that %S simply doesn't correspond to our inner loop trip count. We could still get it, but that has all the disadvantages I mentioned earlier, and it is just a bit horrible, so we need something robust: a simple way of passing this information on from the vectoriser to the backend. I might still have skipped a few details here, but this is what it boils down to, and hopefully you've got a good impression of the problem.

Cheers,
Sjoerd.
________________________________ From: Zaks, Ayal (Mobileye) <ayal.zaks at intel.com> Sent: 21 May 2020 18:44 To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; Eli Friedman <efriedma at quicinc.com> Cc: llvm-dev at lists.llvm.org <llvm-dev at lists.llvm.org> Subject: RE: [llvm-dev] LV: predication> The compare that we are talking is the compare that compares the induction step and the backedge taken count, and this feeds the masked loads/stores.The compare of interest is clear, I think. It compares a Vector Induction Variable with a broadcasted loop invariant value, aka the BTC. Obtaining the latter operand is the goal, clearly, but to do so, the former operand needs to be recognized as a VIV. What if this compare is not generated by LV’s fold-tail-by-masking transformation? For example, if a programmer folds the tail manually, by doing for (int i=0; i<1000; i++) { if (i<998) { … } } which LV then vectorizes, generating the desired compare, treating it like any other compare, w/o recognizing that it’s the desired one? Or what if a vectorized loop with the desired compare originates from some front-end, such as OpenCL. Presumably, in all such cases this compare is still/equally of interest. If you can elaborate on the complicated cases and the difficulty they entail, possibly with concrete IR examples, a general robust way of recognizing the desired compare could perhaps be found. How are doubly nested loops or reductions related here? 
From: Sjoerd Meijer <Sjoerd.Meijer at arm.com> Sent: Thursday, May 21, 2020 01:17 To: Zaks, Ayal (Mobileye) <ayal.zaks at intel.com>; Eli Friedman <efriedma at quicinc.com> Cc: llvm-dev at lists.llvm.org Subject: Re: [llvm-dev] LV: predication Hi Ayal, Let me start with commenting on this:> A dedicated intrinsic that freezes the compare instruction, for no apparent reason, may potentially cripple subsequent passes from further optimizing the vectorized loop.The point is we have a very good reason, which is that it passes on the right information on the backend, enabling opimisations as opposed to crippling them. The compare that we are talking is the compare that compares the induction step and the backedge taken count, and this feeds the masked loads/stores. Thus, for example, we are not talking about the compare controlling the backedge, and it is not affecting loop control. While it is undoubtedly true that there could optimisation that can't handle this particular icmp instruction, it is difficult to imagine for me at this point that being unable to analyse this icmp would cripple things.> Could you elaborate on these more complicated cases and the difficulty they entail?The problem that we are solving is that we need the scalar loop backedge taken count (BTC), or just the iteration count, of the original scalar loop for a given vector loop. Just to be clear, we do not only need the vector iteration count, but again also the scalar loop Iteration Count (IC). We need this for a certain form of predication. This information, the scalar loop IC is produced by vectoriser, and is materialised in the form of the instructions that generate the predicates for the masked loads/stores: this icmp with induction step and the scalar IC. Our current approach works for simple cases, because we pattern match the IR, and look for the scalar IC in these icmps that feed masked loads/stores. 
To make sure we don't, say, accidentally pattern match a random icmp, we compare this with SCEV information. Thus, we have to match up a SCEV expression with pattern-matched IR. I could give IR examples, but hopefully it's easy to imagine that this pattern matching and matching up with SCEV info becomes a bit horrible for doubly nested loops or reductions. This icmp materialised as @llvm.get.active.lane.mask(%IV, %BTC) avoids all of this, as we can just pick up %BTC in the backend. As we are looking for the scalar loop iteration count, not the VIV, I don't think SCEV for vector loops is going to be helpful.

Please let me know if I can elaborate further, or if things are not clear.

Cheers,
Sjoerd.

________________________________
From: Zaks, Ayal (Mobileye) <ayal.zaks at intel.com>
Sent: 20 May 2020 20:39
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; Eli Friedman <efriedma at quicinc.com>
Cc: llvm-dev at lists.llvm.org <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] LV: predication

I realize this discussion and D79100 have progressed, sorry, but could we revisit the “simplest path” of deriving the desired number?

> This is what we are currently doing, and it works excellently for simpler cases. For the more complicated cases that we now want to handle as well, the pattern matching just becomes a bit too horrible, and it is fragile too.

Could you elaborate on these more complicated cases and the difficulty they entail? Presumably a vector compare of a “Vector Induction Variable” with a broadcasted invariant value is sought, to be RAUW’d by a hardware-configured mask. Is it the recognition of VIVs that’s becoming horrible and fragile?
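[Editor's note: for comparison, a sketch of what the predicate looks like once expressed with the intrinsic, so the backend can read %BTC straight off the second operand instead of pattern matching the icmp and re-deriving it via SCEV. Value names are illustrative; the intrinsic's exact name, overloads, and operand semantics are as defined in D79100.]

```llvm
; Sketch only: the whole splat + icmp sequence collapses into one call.
vector.body:
  %index = phi i32 [ 0, %vector.ph ], [ %index.next, %vector.body ]
  %mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32 %index, i32 %BTC)
  %wide.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptr, i32 4, <4 x i1> %mask, <4 x i32> undef)
```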
It may be generally useful to have a robust utility and/or analysis that identifies such VIVs, effectively extending SCEV to reason about vector values, rather than complicating any backend pass. Middle-end passes may find this information useful too, operating after LV, or on vector IR produced elsewhere. This is somewhat analogous to the argument about relying on a canonical induction variable versus employing SCEV to derive it, http://lists.llvm.org/pipermail/llvm-dev/2020-April/140572.html. A dedicated intrinsic that freezes the compare instruction, for no apparent reason, may potentially cripple subsequent passes from further optimizing the vectorized loop.

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Sjoerd Meijer via llvm-dev
Sent: Friday, May 01, 2020 21:54
To: Eli Friedman <efriedma at quicinc.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] LV: predication

Hi Eli,

> The problem with your proposal, as written, is that the vectorizer is producing the intrinsic. Because we don’t impose any ordering on optimizations before codegen, every optimization pass in LLVM would have to be taught to preserve any @llvm.set.loop.elements.i32 whenever it makes any change. This is completely impractical because the intrinsic isn’t related to anything optimizations would normally look for: it’s a random intrinsic in the middle of nowhere.

I do see that point. But is that also not the beauty of it? It just sits in the preheader; if it gets removed, then so be it. And if it is not recognised, then also no harm done?

> Probably the simplest path to get this working is to derive the number of elements in the backend (in HardwareLoops, or your tail predication pass).
> You should be able to figure it from the masks used in the llvm.masked.load/store instructions in the loop.

This is what we are currently doing, and it works excellently for simpler cases. For the more complicated cases that we now want to handle as well, the pattern matching just becomes a bit too horrible, and it is fragile too. All we need is the information that the vectoriser already has, and to pass this on somehow. As I am really keen to simplify our backend pass, would there be another way to pass this information on? If emitting an intrinsic is a blocker, could this be done with a loop annotation?

Cheers,
Sjoerd.

________________________________
From: Eli Friedman <efriedma at quicinc.com>
Sent: 01 May 2020 19:30
To: Sjoerd Meijer <Sjoerd.Meijer at arm.com>; llvm-dev <llvm-dev at lists.llvm.org>
Subject: RE: [llvm-dev] LV: predication

The problem with your proposal, as written, is that the vectorizer is producing the intrinsic. Because we don’t impose any ordering on optimizations before codegen, every optimization pass in LLVM would have to be taught to preserve any @llvm.set.loop.elements.i32 whenever it makes any change. This is completely impractical because the intrinsic isn’t related to anything optimizations would normally look for: it’s a random intrinsic in the middle of nowhere.

Probably the simplest path to get this working is to derive the number of elements in the backend (in HardwareLoops, or your tail predication pass). You should be able to figure it from the masks used in the llvm.masked.load/store instructions in the loop.
-Eli

From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Sjoerd Meijer via llvm-dev
Sent: Friday, May 1, 2020 3:50 AM
To: llvm-dev at lists.llvm.org
Subject: [EXT] [llvm-dev] LV: predication

Hello,

We are working on predication for our vector extension (MVE). Since quite a few people are working on predication and different forms of it (e.g. SVE, RISC-V, NEC), I thought I would share what we would like to add to the loop vectoriser. Hopefully it's just a minor one and not intrusive, but it could be interesting and useful for others, and feedback on this is welcome of course.

TL;DR: We would like the loop vectoriser to emit a new IR intrinsic for certain loops:

  void @llvm.set.loop.elements.i32(i32)

This represents the number of data elements processed by a vector loop, and will be emitted in the preheader block of the vector loop after querying TTI that the backend understands this intrinsic and that it should be emitted for that loop. The vectoriser patch is available in D79100, and we pick this intrinsic up in the ARM backend here in D79175.

Context: We are working on a form of predication that we call tail-predication: a vector hardware loop has an implicit form of predication that sets active/inactive lanes for the last iteration of the vector loop. Thus, the scalar epilogue loop (if there is one) is tail-folded and tail-predicated in the main vector body. And to support this, we need to know the number of data elements processed by the loop, which is used in the set-up of a tail-predicated vector loop. This new intrinsic communicates this information from the vectoriser to the codegen passes where we further lower these loops. In our case, we essentially let @llvm.set.loop.elements.i32 emit the trip count of the scalar loop, which represents the number of data elements processed. Thus, we let the vectoriser emit both the scalar and vector loop trip counts.
Although at a different stage in the optimisation pipeline, this is exactly what the generic HardwareLoop pass is doing to communicate its information to target-specific codegen passes; it emits a few intrinsics to mark a hardware loop. To illustrate this and also the new intrinsic, this is the flow and life of a tail-predicated vector loop, using some heavily edited/reduced examples. First, the vectoriser emits the number of elements processed, and the loads/stores are masked because tail-folding is applied:

  vector.ph:
    call void @llvm.set.loop.elements.i32(i32 %N)
    br label %vector.body

  vector.body:
    call <4 x i32> @llvm.masked.load
    call <4 x i32> @llvm.masked.load
    call void @llvm.masked.store
    br i1 %12, label %.*, label %vector.body

After the HardwareLoop pass this is transformed into this, which adds the hardware loop intrinsics:

  vector.ph:
    call void @llvm.set.loop.elements.i32(i32 %N)
    call void @llvm.set.loop.iterations.i32(i32 %5)
    br label %vector.body

  vector.body:
    call <4 x i32> @llvm.masked.load
    call <4 x i32> @llvm.masked.load
    call void @llvm.masked.store
    call i32 @llvm.loop.decrement.reg
    br i1 %12, label %.*, label %vector.body

We then pick this up in our tail-predication pass, remove the @llvm.set.loop.elements intrinsic, and add @vctp, which is our intrinsic that generates the mask of active/inactive lanes:

  vector.ph:
    call void @llvm.set.loop.iterations.i32(i32 %5)
    br label %vector.body

  vector.body:
    call <4 x i1> @llvm.arm.mve.vctp32
    call <4 x i32> @llvm.masked.load
    call <4 x i32> @llvm.masked.load
    call void @llvm.masked.store
    call i32 @llvm.loop.decrement.reg
    br i1 %12, label %.*, label %vector.body

And this is then further lowered to a tail-predicated loop, or reverted to a 'normal' vector loop if some restrictions are not met.

Cheers,
Sjoerd.