+1

I strongly question the wisdom of trying to invent our own floating point representation. IEEE 754 is, honestly, pretty damn well designed. Unless we all go spend a few years studying numerical analysis, we're not going to design something here that is better, and are likely to get caught in numerical traps that using IEEE floating point would have protected us from.

That said, I'm sympathetic to the desire for a good, convenient soft-float library. I think the correct solution is to *make APFloat not suck*, rather than inventing something new.

—Owen

On Jun 18, 2014, at 9:59 AM, Philip Reames <listmail at philipreames.com> wrote:

> In concept, I have no problems with having a soft float implementation in tree. I agree with your points regarding platform independence and incrementalism. If you wanted to use an IEEE-compliant implementation (possibly with a few restrictions, e.g. rounding mode, trap on bounds violations, etc.), I'd be fine with that.
>
> I'm concerned by the proposed semantics mentioned so far. Without very strong justification and a very carefully written specification, I would resist any alternate semantics. My experience has been that floating point is already confusing enough and that getting any floating point implementation "right" (both implementation *and* usable semantics) is a very hard problem. I don't believe that we have either the resources or the interest in solving that problem, and that a partial solution is worse than nothing at all.
>
> Purely out of curiosity, for prototyping do we actually want floating point? Or would a rational implementation be better? I'm not familiar with these areas, so I can't really judge.
>
> Philip
>
> On 06/18/2014 08:34 AM, Duncan P. N. Exon Smith wrote:
>>> On 2014 Jun 17, at 21:59, Owen Anderson <resistor at mac.com> wrote:
>>>
>>> Hi Duncan,
>>>
>>> Some of these don't make a lot of sense:
>>
>> Sorry -- I think I was assuming too much knowledge about what I committed as
>> part of the BlockFrequencyInfo rewrite.
>>
>> What's committed there is a class called UnsignedFloat that wraps the
>> following with a bunch of API:
>>
>>     template <class UIntT> struct UnsignedFloat {
>>       UIntT Digits;
>>       uint16_t Exponent;
>>     };
>>
>> There are some static asserts to restrict UIntT to either `uint32_t` or
>> `uint64_t`. I have tests that are out of tree. The `uint32_t` version uses
>> 64-bit math for divide and multiply while the `uint64_t` version uses long
>> division etc. -- otherwise they share implementation. They both defer to
>> APFloat for non-trivial string conversion.
>>
>> I don't think it will be much work to clean this up and create a SignedFloat
>> variant that adds a `bool` for the sign (and shares most of the impl).
>>
>> I'll respond to your specific points inline.
>>
>>>> - Easy to use and reason about (unlike `APFloat`).
>>>> - Uses operators.
>>>> - No special numbers.
>>>
>>> What semantics do you propose to use instead? At some point, people will hit the boundaries of their range, and you need to do something sensible there.
>>
>> Simple saturation.
>>
>>>> - Every operation well-defined (even divide-by-zero).
>>>
>>> Divide-by-zero is actually well-defined in IEEE 754:
>>>   x / +0.0 == +Inf
>>>   x / -0.0 == -Inf
>>>   (-)0.0 / (-)0.0 == NaN
>>
>> Point taken!
>>
>>>> - No rounding modes.
>>>
>>> You can't implement a finite-precision floating point representation without any rounding. I assume what you mean here is only one rounding mode, i.e. round-nearest-ties-to-even.
>>
>> Yes. I was emphasizing that rounding modes aren't part of the API.
>>
>>>> - Digits represented simply as a 32-bit or 64-bit integer.
>>>
>>> Isn't this the same as the significand of an IEEE float? If you go with 64-bit, it sounds like you're defining something very close to Intel's FP80.
>>
>> Yup. The `uint64_t` version is similar to a non-conforming and slow FP80
>> that's always in denormal mode, but the *same* non-conforming on every
>> platform.
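A minimal sketch of the digits-plus-exponent representation and the saturating, single-rounding-mode semantics Duncan describes, covering only the 32-bit-digits case. This is not the in-tree UnsignedFloat: the name SimpleUnsignedFloat, the getLargest helper, and the renormalization details are assumptions for illustration.

  #include <cstdint>
  #include <limits>
  #include <type_traits>

  // Value is interpreted as Digits * 2^Exponent, matching the quoted struct.
  template <class UIntT> struct SimpleUnsignedFloat {
    static_assert(std::is_same<UIntT, uint32_t>::value ||
                      std::is_same<UIntT, uint64_t>::value,
                  "digits must be uint32_t or uint64_t");

    UIntT Digits;      // unsigned significand
    uint16_t Exponent; // base-2 exponent, as in the quoted struct

    // Largest representable value, used for saturation instead of +Inf.
    static SimpleUnsignedFloat getLargest() {
      return {std::numeric_limits<UIntT>::max(),
              std::numeric_limits<uint16_t>::max()};
    }

    // 32-bit digits: multiply in 64 bits, then shift the product back into
    // 32 bits (truncating here for brevity; the class described in the
    // thread uses round-nearest-ties-to-even).
    SimpleUnsignedFloat multiply(SimpleUnsignedFloat RHS) const {
      static_assert(std::is_same<UIntT, uint32_t>::value,
                    "this sketch only implements the 32-bit-digits case");
      uint64_t Product = uint64_t(Digits) * uint64_t(RHS.Digits);
      uint32_t NewExponent = uint32_t(Exponent) + uint32_t(RHS.Exponent);
      while (Product > std::numeric_limits<uint32_t>::max()) {
        Product >>= 1;
        ++NewExponent;
      }
      if (NewExponent > std::numeric_limits<uint16_t>::max())
        return getLargest(); // saturate on overflow: no infinities or NaNs
      return {UIntT(Product), uint16_t(NewExponent)};
    }
  };

The last branch is the design point under debate: on exponent overflow this saturates to the largest finite value rather than producing an IEEE-style +Inf or NaN.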
Duncan P. N. Exon Smith
2014-Jun-18 20:20 UTC
[LLVMdev] [RFC] Add a simple soft-float class
> On 2014 Jun 18, at 10:09, Owen Anderson <resistor at mac.com> wrote:
>
> +1
>
> I strongly question the wisdom of trying to invent our own floating point representation. IEEE 754 is, honestly, pretty damn well designed. Unless we all go spend a few years studying numerical analysis, we're not going to design something here that is better, and are likely to get caught in numerical traps that using IEEE floating point would have protected us from.
>
> That said, I'm sympathetic to the desire for a good, convenient soft-float library. I think the correct solution is to *make APFloat not suck*, rather than inventing something new.
>
> —Owen

I'm certainly not suggesting this would be better in general than IEEE 754.

But I think it's suitable for the sorts of places we currently use
hard-floats. I guess you (and Philip) are saying there are dragons here?

Although "making APFloat not suck" might be ideal, we don't have the code
written for that.
On Jun 18, 2014, at 1:20 PM, Duncan P. N. Exon Smith <dexonsmith at apple.com> wrote:

> I'm certainly not suggesting this would be better in general than IEEE 754.
>
> But I think it's suitable for the sorts of places we currently use
> hard-floats. I guess you (and Philip) are saying there are dragons here?

Numerical analysis is hard. Every numerics expert I have ever worked with considers trying to re-invent floating point a cardinal sin of numerical analysis. Just don't do it. You will miss important considerations, and you will pay the price for it later. Plus, anyone in the future who wants to modify your code has to learn a new set of non-standard floating point numerics, and without a well-defined specification it's not always clear what the correct semantics in a particular case are.

> Although "making APFloat not suck" might be ideal, we don't have the code
> written for that.

I don't think we should let expedience get in the way of Doing The Right Thing. IMO, there are two real issues with APFloat today:

1) The interface is really clunky. We can fix this by fixing/extending the interface, or by adding a convenience wrapper.

2) It *may* be too slow for this use case. Assuming this is actually true, there's a lot of room for improvement in APFloat's performance by special-casing common paths (default rounding mode, normal formats). We could even conceivably detect if we're compiled for a platform that has sane hard float support and fall back to that transparently.

None of these seem particularly difficult to me, and it saves us from a future world of pain.

I know Michael Gottesman has some WIP code for cleaning up APFloat. Perhaps he could share it with you?

—Owen
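As one illustration of option (1), here is a hypothetical convenience wrapper over APFloat. The ConvenientFloat class and its member names are assumptions made for this sketch; only the llvm::APFloat calls themselves are existing LLVM API.

  #include "llvm/ADT/APFloat.h"

  // Hides APFloat's explicit rounding-mode and opStatus plumbing behind
  // operators, always using round-nearest-ties-to-even.
  class ConvenientFloat {
    llvm::APFloat Val;

  public:
    explicit ConvenientFloat(double D) : Val(D) {}

    ConvenientFloat operator+(const ConvenientFloat &RHS) const {
      ConvenientFloat Result = *this;
      // Ignore the returned opStatus; a real wrapper would decide how (or
      // whether) to surface inexact/overflow results.
      (void)Result.Val.add(RHS.Val, llvm::APFloat::rmNearestTiesToEven);
      return Result;
    }

    ConvenientFloat operator*(const ConvenientFloat &RHS) const {
      ConvenientFloat Result = *this;
      (void)Result.Val.multiply(RHS.Val, llvm::APFloat::rmNearestTiesToEven);
      return Result;
    }

    bool operator<(const ConvenientFloat &RHS) const {
      return Val.compare(RHS.Val) == llvm::APFloat::cmpLessThan;
    }

    double toDouble() const { return Val.convertToDouble(); }
  };

A wrapper along these lines only addresses the interface point; the performance ideas in (2) would involve work inside APFloat itself and are orthogonal to it.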