Nicolai Hähnle via llvm-dev
2018-Nov-30 14:24 UTC
[llvm-dev] Question on fast-math optimizations
On 30.11.18 11:49, Heiko Becker via llvm-dev wrote:
> -- Resending my last mail, as it might have gotten lost --
>
> Thanks Nicolai and Steve for the initial replies.
>
> So if I understand correctly there are 2 places you can pinpoint at
> where distributivity is used:
>
> - simplification of infinity/NaN expressions
>
> - combination with FMA introduction

Well no, my comment also applied to the FMA introduction. Stephen was a
bit hesitant about what to call the x * (y + 1) --> x * y + x
FMA-introducing transform on the grounds that it superficially only
seems to improve the precision at which the expression is evaluated.

My point was that this very same transform can introduce very
significant, qualitative differences in the result when inf is involved.

Cheers,
Nicolai

> @Steve: You mentioned "fast-math flags characterizing when it would be
> allowed", so is there a point of reference where it is exactly
> specified what fast-math flags allow and what not, beyond the LLVM
> documentation that gives the high-level explanation?
>
> Thanks again,
>
> Heiko
>
> On 11/22/18 11:16 AM, Heiko Becker via llvm-dev wrote:
>> On 11/21/18 12:41 PM, Nicolai Hähnle wrote:
>>
>>> On 20.11.18 16:38, Stephen Canon via llvm-dev wrote:
>>>> Distribution doesn’t seem to be used by many transforms at present.
>>>> My vague recollection is that the fast math flags didn’t do a great
>>>> job of characterizing when it would be allowed, and using it
>>>> aggressively broke a lot of code in practice (code which was
>>>> numerically unstable already, but depended on getting the same
>>>> unstable results), so people have been gun-shy about using it. Owen
>>>> might remember more of the gory details.
>>>>
>>>> Arguably, it is implicitly used when FMA formation is combined with
>>>> fast-math, e.g.:
>>>>
>>>> float foo(float x, float y) {
>>>>     return x*(y + 1);
>>>> }
>>>>
>>>> Compiled with -mfma -ffast-math, this generates fma(x, y, x). Even
>>>> though this transform superficially appears to use distributivity,
>>>> that’s somewhat debatable because the fma computes the whole result
>>>> without any intermediate rounding, so it’s pretty wishy-washy to say
>>>> that it’s been used here.
>>>
>>> It most definitely has been used here, because of inf/nan behavior.
>>>
>>> inf*(0 + 1) == inf
>>> inf*0 + inf == nan
>>>
>>> (I actually fixed this bug in the past because it occurred in practice.)
>>>
>>> Cheers,
>>> Nicolai
>>>
>>>> – Steve
>>>>
>>>>> On Nov 20, 2018, at 9:21 AM, Heiko Becker via llvm-dev
>>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>>
>>>>> Dear LLVM developers,
>>>>>
>>>>> I have a question on the fast-math floating-point optimizations
>>>>> applied by LLVM:
>>>>> Judging by the documentation at
>>>>> https://llvm.org/docs/LangRef.html#fast-math-flags I understood
>>>>> that rewriting with associativity and using reciprocal computations
>>>>> are possible optimizations. As the folklore description of
>>>>> fast-math is that it "applies real-valued identities", I was
>>>>> wondering whether LLVM also rewrites with distributivity.
>>>>>
>>>>> If this is the case, could you point me to some specification of
>>>>> when it is applied? If not, is there any particular reason against
>>>>> applying distributivity, or has this just not been looked into so
>>>>> far?
>>>>>
>>>>> Thank you and best regards,
>>>>>
>>>>> Heiko

--
Learn how the world really is,
but never forget how it ought to be.
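For concreteness, the following is a minimal C sketch, not part of the
original thread, of the inf behavior Nicolai describes: with x = inf and
y = 0, the source expression x*(y + 1) evaluates to inf, while both the
textually distributed x*y + x and fma(x, y, x) evaluate to nan, because
inf*0 is invalid. Compile it without -ffast-math (and link with -lm) so
the compiler does not rewrite the expressions itself.

#include <math.h>
#include <stdio.h>

int main(void) {
    float x = INFINITY;
    float y = 0.0f;

    float source      = x * (y + 1.0f);   /* inf * (0 + 1) == inf                      */
    float distributed = x * y + x;        /* inf * 0 == nan, and nan + inf == nan      */
    float fused       = fmaf(x, y, x);    /* inf * 0 is invalid inside the fma, so nan */

    printf("x*(y+1)      = %f\n", source);       /* inf */
    printf("x*y + x      = %f\n", distributed);  /* nan */
    printf("fma(x, y, x) = %f\n", fused);        /* nan */
    return 0;
}

This is the qualitative change referred to above: the rewrite is
presumably only justifiable under fast-math assumptions that infinities
and NaNs do not occur.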
Stephen Canon via llvm-dev
2018-Nov-30 14:34 UTC
[llvm-dev] Question on fast-math optimizations
> On Nov 30, 2018, at 9:24 AM, Nicolai Hähnle via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>
> Stephen was a bit hesitant about what to call the x * (y + 1) --> x * y + x
> FMA-introducing transform on the grounds that it superficially only
> seems to improve the precision at which the expression is evaluated.

It’s a little bit more subtle than that; because FMA is computed without
internal rounding, under an as-if model, you can’t differentiate between
fma(x, y, x) and a hypothetical correctly-rounded x*(y + 1), so it
doesn’t even make sense to talk about “distributivity” in this context ...

> My point was that this very same transform can introduce very
> significant, qualitative differences in the result when inf is involved.

… except with regard to inf/nan edge cases, as you correctly pointed out. =)

– Steve
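To illustrate Stephen's point about intermediate rounding with finite
values, here is a small C sketch, again not from the original thread;
the constants are chosen purely for illustration. The product x*y
rounds down and the following sum lands on a tie, so the textually
distributed x*y + x loses its last bit, while fmaf(x, y, x), which
rounds the exact x*y + x only once, agrees with the straightforwardly
evaluated x*(y + 1). Compile with -ffp-contract=off and without
-ffast-math so the compiler does not fuse or rewrite the expressions
itself.

#include <math.h>
#include <stdio.h>

int main(void) {
    float x = 0x1.000002p+0f;  /* 1 + 2^-23 */
    float y = 0x1.fffffep-1f;  /* 1 - 2^-24 */

    float source      = x * (y + 1.0f);  /* y + 1 rounds to 2.0f; 2*x == 2 + 2^-22      */
    float fused       = fmaf(x, y, x);   /* exact x*y + x rounded once: 2 + 2^-22       */
    float distributed = x * y + x;       /* x*y rounds to 1.0f; 1 + x ties down to 2.0f */

    printf("x*(y+1)      = %.9g\n", source);       /* 2.00000024 */
    printf("fma(x, y, x) = %.9g\n", fused);        /* 2.00000024 */
    printf("x*y + x      = %.9g\n", distributed);  /* 2          */
    return 0;
}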
Heiko Becker via llvm-dev
2018-Nov-30 15:23 UTC
[llvm-dev] Question on fast-math optimizations
Thank you once again for the further clarifications. I still have one
more question: What is the canonical source for getting a definitive
answer on which optimizations are applied, and when, once fast-math
optimizations are allowed in LLVM? A pointer to a source file would also
be fine. I tried searching on
http://releases.llvm.org/7.0.0/docs/Passes.html and did not find any
information there, so I am feeling a bit lost.

Thank you,
Heiko

On 11/30/18 3:34 PM, Stephen Canon via llvm-dev wrote:
>> On Nov 30, 2018, at 9:24 AM, Nicolai Hähnle via llvm-dev
>> <llvm-dev at lists.llvm.org> wrote:
>>
>> Stephen was a bit hesitant about what to call the x * (y + 1) --> x * y + x
>> FMA-introducing transform on the grounds that it superficially only
>> seems to improve the precision at which the expression is evaluated.
>
> It’s a little bit more subtle than that; because FMA is computed
> without internal rounding, under an as-if model, you can’t
> differentiate between fma(x, y, x) and a hypothetical
> correctly-rounded x*(y + 1), so it doesn’t even make sense to talk
> about “distributivity” in this context ...
>
>> My point was that this very same transform can introduce very
>> significant, qualitative differences in the result when inf is involved.
>
> … except with regard to inf/nan edge cases, as you correctly pointed
> out. =)
>
> – Steve