Roman Lebedev via llvm-dev
2019-Dec-01 22:52 UTC
[llvm-dev] ConstantRange modelling precision?
Hello.

This question has come up in https://reviews.llvm.org/D70043
There, I'm teaching ConstantRange how no-wrap flags affect
the range of a `mul` instruction, with the end goal of exploiting
this in LVI/CVP.

There are certain combinations of ranges and no-wrap flags
that result in an always-overflowing `mul`. For example,
`mul nuw nsw i4 [2,0), [4,0)` always overflows:
https://rise4fun.com/Alive/1aK
so for such ranges the ideal answer is the empty set,
although it wouldn't be incorrect to return a more pessimistic
range (e.g. full-set) that contains more than the ideal result.

The problem is that, unlike the case of `add`, where the intersection
between the plain `add` range and the saturating (un)signed `add`
range already returns the empty set in similar cases, here we 'need'
to model it explicitly. (As seen in the patch, the modelling
is reasonably straightforward.)

As was pointed out in the review, LVI currently does not
make use of empty sets, and maps them to `overdefined`:
https://godbolt.org/z/N3KggA

So the question is: given that LVI would not make use of such
empty-set knowledge, does that mean we shouldn't bother doing
that extra analysis in ConstantRange, thus avoiding the
compile-time cost of said modelling?

Right now I'm thinking we *should* be doing it, because:
* Wouldn't the `overdefined` lattice result in LVI giving up
  on the users of said value, as opposed to continuing to
  propagate an imprecise (but still correct) range, and thus
  perhaps incurring more compile-time cost than we would have
  spent proving that the result is the empty set?
* LVI will likely make use of this knowledge later on.
* Are there other users of ConstantRange
  that may want this precision?

Roman.
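To make the two claims in this first message concrete, here is a minimal C++ sketch. The ranges and helper names below are chosen for illustration and are not taken from D70043; it assumes the ConstantRange/APInt helpers (contains, add, intersectWith, uadd_sat, umul_ov, smul_ov) that were available around the time of this thread.

    // Sketch only: exhaustively verify that `mul nuw nsw i4 [2,0), [4,0)`
    // always violates at least one of the two no-wrap flags, and show how
    // an always-overflowing `add nuw` already collapses to the empty set
    // by intersecting the plain-add range with the saturating-add range.
    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    #include <cassert>
    using namespace llvm;

    static void checkMulNuwNswAlwaysOverflows() {
      ConstantRange A(APInt(4, 2), APInt(4, 0)); // i4 [2,0), i.e. {2..15}
      ConstantRange B(APInt(4, 4), APInt(4, 0)); // i4 [4,0), i.e. {4..15}
      for (unsigned I = 0; I < 16; ++I) {
        for (unsigned J = 0; J < 16; ++J) {
          APInt X(4, I), Y(4, J);
          if (!A.contains(X) || !B.contains(Y))
            continue;
          bool UOv = false, SOv = false;
          (void)X.umul_ov(Y, UOv); // unsigned (nuw) overflow?
          (void)X.smul_ov(Y, SOv); // signed (nsw) overflow?
          assert((UOv || SOv) && "every in-range pair violates nuw or nsw");
        }
      }
    }

    static void checkAddIntersectionAlreadyGivesEmptySet() {
      // `add nuw i4 [8,12), [8,12)` always overflows: the wrapped plain-add
      // range is [0,7) while the unsigned-saturating-add range is {15}, so
      // their intersection is already the empty set with existing methods.
      ConstantRange C(APInt(4, 8), APInt(4, 12));
      ConstantRange R = C.add(C).intersectWith(C.uadd_sat(C));
      assert(R.isEmptySet());
    }

The second helper is exactly the `add` observation above: no dedicated modelling is needed there, whereas the `mul nuw nsw` case is only caught if the flags are modelled explicitly.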
Philip Reames via llvm-dev
2019-Dec-03 01:30 UTC
[llvm-dev] ConstantRange modelling precision?
On 12/1/19 2:52 PM, Roman Lebedev wrote:
> Hello.
>
> This question has come up in https://reviews.llvm.org/D70043
> There, I'm teaching ConstantRange how no-wrap flags affect
> the range of a `mul` instruction, with the end goal of exploiting
> this in LVI/CVP.
>
> There are certain combinations of ranges and no-wrap flags
> that result in an always-overflowing `mul`. For example,
> `mul nuw nsw i4 [2,0), [4,0)` always overflows:
> https://rise4fun.com/Alive/1aK
> so for such ranges the ideal answer is the empty set,
> although it wouldn't be incorrect to return a more pessimistic
> range (e.g. full-set) that contains more than the ideal result.
>
> The problem is that, unlike the case of `add`, where the intersection
> between the plain `add` range and the saturating (un)signed `add`
> range already returns the empty set in similar cases, here we 'need'
> to model it explicitly. (As seen in the patch, the modelling
> is reasonably straightforward.)
>
> As was pointed out in the review, LVI currently does not
> make use of empty sets, and maps them to `overdefined`:
> https://godbolt.org/z/N3KggA
>
> So the question is: given that LVI would not make use of such
> empty-set knowledge, does that mean we shouldn't bother doing
> that extra analysis in ConstantRange, thus avoiding the
> compile-time cost of said modelling?

I wouldn't support that conclusion. I think it should always be fine
for LVI to discard precision, without that making any statement about
whether the precision is valuable in the underlying analysis. For
instance, are there any other transforms (say, instcombine) which
could use the empty-set information?

It would also seem reasonable to extend LVI with a notion of "empty
set" - which on the surface seems to be the same as poison. I'm not
necessarily saying we need to, or that I'm in a hurry to write the
patches; I'm just stating that it would be reasonable.

One guess here is that it's the similarity to poison which causes us
to be conservative in LVI. Given how vague our poison/undef rules are,
being aggressive here might expose miscompiles. (Just a guess.)

> Right now I'm thinking we *should* be doing it, because:
> * Wouldn't the `overdefined` lattice result in LVI giving up
>   on the users of said value, as opposed to continuing to
>   propagate an imprecise (but still correct) range, and thus
>   perhaps incurring more compile-time cost than we would have
>   spent proving that the result is the empty set?
> * LVI will likely make use of this knowledge later on.
> * Are there other users of ConstantRange
>   that may want this precision?
>
> Roman.
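As a purely hypothetical sketch of the lattice design point raised here: the enum and struct names below are invented for illustration and are not LVI's actual ValueLatticeElement; they only contrast the current behaviour described in the thread (empty range widened to overdefined) with the suggested dedicated, poison-like state.

    #include "llvm/IR/ConstantRange.h"
    using namespace llvm;

    enum class SketchKind { Undefined, Range, EmptySet, Overdefined };

    struct SketchLatticeVal {
      SketchKind Kind;
      ConstantRange Range;
    };

    SketchLatticeVal fromRangeToday(const ConstantRange &CR) {
      // What the thread says LVI effectively does today: the empty set
      // is widened to "overdefined", discarding the information.
      if (CR.isEmptySet())
        return {SketchKind::Overdefined, CR};
      return {SketchKind::Range, CR};
    }

    SketchLatticeVal fromRangeWithEmptySet(const ConstantRange &CR) {
      // The suggested extension: a dedicated state, which consumers of
      // LVI results could later treat much like poison.
      if (CR.isEmptySet())
        return {SketchKind::EmptySet, CR};
      return {SketchKind::Range, CR};
    }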
Roman Lebedev via llvm-dev
2019-Dec-03 10:52 UTC
[llvm-dev] ConstantRange modelling precision?
On Tue, Dec 3, 2019 at 4:30 AM Philip Reames <listmail at philipreames.com> wrote:
>
> On 12/1/19 2:52 PM, Roman Lebedev wrote:
> > Hello.
> >
> > This question has come up in https://reviews.llvm.org/D70043
> > There, I'm teaching ConstantRange how no-wrap flags affect
> > the range of a `mul` instruction, with the end goal of exploiting
> > this in LVI/CVP.

(Said patch is still eagerly awaiting review..)

> > There are certain combinations of ranges and no-wrap flags
> > that result in an always-overflowing `mul`. For example,
> > `mul nuw nsw i4 [2,0), [4,0)` always overflows:
> > https://rise4fun.com/Alive/1aK
> > so for such ranges the ideal answer is the empty set,
> > although it wouldn't be incorrect to return a more pessimistic
> > range (e.g. full-set) that contains more than the ideal result.
> >
> > The problem is that, unlike the case of `add`, where the intersection
> > between the plain `add` range and the saturating (un)signed `add`
> > range already returns the empty set in similar cases, here we 'need'
> > to model it explicitly. (As seen in the patch, the modelling
> > is reasonably straightforward.)
> >
> > As was pointed out in the review, LVI currently does not
> > make use of empty sets, and maps them to `overdefined`:
> > https://godbolt.org/z/N3KggA
> >
> > So the question is: given that LVI would not make use of such
> > empty-set knowledge, does that mean we shouldn't bother doing
> > that extra analysis in ConstantRange, thus avoiding the
> > compile-time cost of said modelling?
>
> I wouldn't support that conclusion. I think it should always be fine
> for LVI to discard precision, without that making any statement about
> whether the precision is valuable in the underlying analysis.

That is my personal view, too.

> For instance, are there any other transforms (say, instcombine) which
> could use the empty-set information?

Presently, only LVI uses the `ConstantRange::overflowingBinaryOp()`
interface, and SCEV uses `ConstantRange::addWithNoWrap()` specifically.
So currently - no, nothing else would immediately make use of that info.

> It would also seem reasonable to extend LVI with a notion of "empty
> set" - which on the surface seems to be the same as poison. I'm not
> necessarily saying we need to, or that I'm in a hurry to write the
> patches; I'm just stating that it would be reasonable.
>
> One guess here is that it's the similarity to poison which causes us
> to be conservative in LVI. Given how vague our poison/undef rules are,
> being aggressive here might expose miscompiles. (Just a guess.)
>
> > Right now I'm thinking we *should* be doing it, because:
> > * Wouldn't the `overdefined` lattice result in LVI giving up
> >   on the users of said value, as opposed to continuing to
> >   propagate an imprecise (but still correct) range, and thus
> >   perhaps incurring more compile-time cost than we would have
> >   spent proving that the result is the empty set?
> > * LVI will likely make use of this knowledge later on.
> > * Are there other users of ConstantRange
> >   that may want this precision?
> >
> > Roman.

Roman
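For reference, a hedged sketch of how a caller such as LVI would use the interface mentioned above. The signature matches `ConstantRange::overflowingBinaryOp()` as it existed around this time; whether the `mul nuw nsw` example actually comes back as the empty set depends on the modelling proposed in D70043, so the comment below is a goal, not a guarantee of the pre-patch behaviour.

    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/Operator.h"
    using namespace llvm;

    ConstantRange mulRangeWithFlags(const ConstantRange &L,
                                    const ConstantRange &R,
                                    bool HasNUW, bool HasNSW) {
      unsigned NoWrapKind = 0;
      if (HasNUW)
        NoWrapKind |= OverflowingBinaryOperator::NoUnsignedWrap;
      if (HasNSW)
        NoWrapKind |= OverflowingBinaryOperator::NoSignedWrap;
      // For `mul nuw nsw i4 [2,0), [4,0)` the ideal answer is the empty
      // set; a conservative superset is also a legal return value.
      return L.overflowingBinaryOp(Instruction::Mul, R, NoWrapKind);
    }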
Nikita Popov via llvm-dev
2019-Dec-04 15:27 UTC
[llvm-dev] ConstantRange modelling precision?
On Sun, Dec 1, 2019 at 11:52 PM Roman Lebedev <lebedev.ri at gmail.com> wrote:
> Hello.
>
> This question has come up in https://reviews.llvm.org/D70043
> There, I'm teaching ConstantRange how no-wrap flags affect
> the range of a `mul` instruction, with the end goal of exploiting
> this in LVI/CVP.
>
> There are certain combinations of ranges and no-wrap flags
> that result in an always-overflowing `mul`. For example,
> `mul nuw nsw i4 [2,0), [4,0)` always overflows:
> https://rise4fun.com/Alive/1aK
> so for such ranges the ideal answer is the empty set,
> although it wouldn't be incorrect to return a more pessimistic
> range (e.g. full-set) that contains more than the ideal result.
>
> The problem is that, unlike the case of `add`, where the intersection
> between the plain `add` range and the saturating (un)signed `add`
> range already returns the empty set in similar cases, here we 'need'
> to model it explicitly. (As seen in the patch, the modelling
> is reasonably straightforward.)
>
> As was pointed out in the review, LVI currently does not
> make use of empty sets, and maps them to `overdefined`:
> https://godbolt.org/z/N3KggA
>
> So the question is: given that LVI would not make use of such
> empty-set knowledge, does that mean we shouldn't bother doing
> that extra analysis in ConstantRange, thus avoiding the
> compile-time cost of said modelling?
>
> Right now I'm thinking we *should* be doing it, because:
> * Wouldn't the `overdefined` lattice result in LVI giving up
>   on the users of said value, as opposed to continuing to
>   propagate an imprecise (but still correct) range, and thus
>   perhaps incurring more compile-time cost than we would have
>   spent proving that the result is the empty set?
> * LVI will likely make use of this knowledge later on.
> * Are there other users of ConstantRange
>   that may want this precision?
>
> Roman.

To be clear: My objection here is less about computing information that
we don't presently need, and more about being precise about one very
particular thing while much more common cases remain imprecise. The
entire "xxx with nowrap" code only produces precise ranges if the input
ranges are non-wrapping as well, because producing precise ranges for
the wrapping cases is significantly more complicated and not very
common in practice.

It does not make sense to me to single out the case of an empty result
set as something that we always want to compute precisely, while still
only computing an approximation for the more common and more useful
cases where the result is non-empty. If we happen to get this property
for free (as is the case with the existing methods), that's fine. But
going out of our way to establish it even when that requires
significantly more complicated and/or slower code doesn't seem
reasonable.

Nikita
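A small illustration of the non-wrapping-input caveat described above. The example ranges are chosen here for illustration (they are not from the patch), and the only property the sketch asserts is the conservative-superset guarantee that ConstantRange operations always provide; for a wrapped input the result may be considerably coarser than the exact set.

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/Operator.h"
    #include <cassert>
    #include <initializer_list>
    using namespace llvm;

    void wrappedInputExample() {
      ConstantRange Wrapped(APInt(4, 12), APInt(4, 4)); // i4 {12..15, 0..3}
      ConstantRange One(APInt(4, 1), APInt(4, 2));      // i4 {1}
      ConstantRange R = Wrapped.addWithNoWrap(
          One, OverflowingBinaryOperator::NoUnsignedWrap);
      // The exact `add nuw` result set is {1,2,3,4,13,14,15}: the input 15
      // would wrap when incremented and is therefore excluded. R is only
      // guaranteed to contain that set, not to equal it.
      for (unsigned V : {1u, 2u, 3u, 4u, 13u, 14u, 15u})
        assert(R.contains(APInt(4, V)));
    }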
Philip Reames via llvm-dev
2019-Dec-04 17:23 UTC
[llvm-dev] ConstantRange modelling precision?
On 12/4/19 7:27 AM, Nikita Popov wrote:
> On Sun, Dec 1, 2019 at 11:52 PM Roman Lebedev <lebedev.ri at gmail.com> wrote:
> > Hello.
> >
> > This question has come up in https://reviews.llvm.org/D70043
> > There, I'm teaching ConstantRange how no-wrap flags affect
> > the range of a `mul` instruction, with the end goal of exploiting
> > this in LVI/CVP.
> >
> > There are certain combinations of ranges and no-wrap flags
> > that result in an always-overflowing `mul`. For example,
> > `mul nuw nsw i4 [2,0), [4,0)` always overflows:
> > https://rise4fun.com/Alive/1aK
> > so for such ranges the ideal answer is the empty set,
> > although it wouldn't be incorrect to return a more pessimistic
> > range (e.g. full-set) that contains more than the ideal result.
> >
> > The problem is that, unlike the case of `add`, where the intersection
> > between the plain `add` range and the saturating (un)signed `add`
> > range already returns the empty set in similar cases, here we 'need'
> > to model it explicitly. (As seen in the patch, the modelling
> > is reasonably straightforward.)
> >
> > As was pointed out in the review, LVI currently does not
> > make use of empty sets, and maps them to `overdefined`:
> > https://godbolt.org/z/N3KggA
> >
> > So the question is: given that LVI would not make use of such
> > empty-set knowledge, does that mean we shouldn't bother doing
> > that extra analysis in ConstantRange, thus avoiding the
> > compile-time cost of said modelling?
> >
> > Right now I'm thinking we *should* be doing it, because:
> > * Wouldn't the `overdefined` lattice result in LVI giving up
> >   on the users of said value, as opposed to continuing to
> >   propagate an imprecise (but still correct) range, and thus
> >   perhaps incurring more compile-time cost than we would have
> >   spent proving that the result is the empty set?
> > * LVI will likely make use of this knowledge later on.
> > * Are there other users of ConstantRange
> >   that may want this precision?
> >
> > Roman.
>
> To be clear: My objection here is less about computing information that
> we don't presently need, and more about being precise about one very
> particular thing while much more common cases remain imprecise. The
> entire "xxx with nowrap" code only produces precise ranges if the input
> ranges are non-wrapping as well, because producing precise ranges for
> the wrapping cases is significantly more complicated and not very
> common in practice.
>
> It does not make sense to me to single out the case of an empty result
> set as something that we always want to compute precisely, while still
> only computing an approximation for the more common and more useful
> cases where the result is non-empty. If we happen to get this property
> for free (as is the case with the existing methods), that's fine. But
> going out of our way to establish it even when that requires
> significantly more complicated and/or slower code doesn't seem
> reasonable.

I haven't looked at the patch, but to me this whole question comes down
to whether the code in ConstantRange is "significantly more complicated
and/or slower". Complexity always needs to be justified.

I would not use the fact that we miss other common cases as a reason to
reject a corner case, though. We'll never be in full agreement on what's
common - my workloads don't look like yours. As such, that's a very
slippery slope towards paralysis.

Slightly OT - Isn't the empty-set result here useful for proving
overflow (and thus removing x.with.overflow calls)?

Philip
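A hypothetical sketch of the fold alluded to in that last question. The helper name below is invented for illustration and this is not existing CVP code; it only shows how a precise empty-set result could be turned into an "always overflows" fact for the matching @llvm.umul.with.overflow (nuw) or @llvm.smul.with.overflow (nsw) call.

    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    // Pass the no-wrap kind that corresponds to the intrinsic being folded:
    // NoUnsignedWrap for umul.with.overflow, NoSignedWrap for
    // smul.with.overflow.
    bool mulAlwaysOverflows(const ConstantRange &L, const ConstantRange &R,
                            unsigned NoWrapKind) {
      // Relies on the precise empty-set result this thread is about; if the
      // flag-aware range is empty, the overflow bit can be folded to true.
      return L.overflowingBinaryOp(Instruction::Mul, R, NoWrapKind).isEmptySet();
    }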