Hi John.

I don't think the 80 bit format was part of IEEE 754; I think it was an Intel invention for the 8087 chip (which I believe preceded that standard), and didn't make it into the standard.

The standard does talk about 64 bit and 128 bit floating point formats, but not 80 bit.

Duncan Murdoch

On 04/02/2024 4:47 p.m., J C Nash wrote:
> Slightly tangential: I had some woes with some vignettes in my
> optimx and nlsr packages (actually in examples comparing to OTHER
> packages) because the M? processors don't have the 80 bit registers of
> the old IEEE 754 arithmetic, so some existing "tolerances" are too
> small when checking whether a quantity is small enough to "converge",
> and one gets "did not converge" type errors. There are workarounds,
> but the discussion is beyond this post. However, it is worth being
> aware that the code may be mostly correct apart from tests of
> smallness that are not appropriate for these processors.
>
> JN
>
> On 2024-02-04 11:51, Dirk Eddelbuettel wrote:
>>
>> On 4 February 2024 at 20:41, Holger Hoefling wrote:
>> | I wanted to ask if people have good advice on how to debug M1Mac package
>> | check errors when you don't have a Mac? Is a cloud machine the best option
>> | or is there something else?
>>
>> a) Use the 'mac builder' CRAN offers:
>>    https://mac.r-project.org/macbuilder/submit.html
>>
>> b) Use the newly added M1 runners at GitHub Actions,
>>    https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
>>
>> Option a) is pretty good as the machine is set up for CRAN and builds
>> fast. Option b) gives you more control should you need it.
>>
>> Dirk
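[A minimal sketch of the kind of test JN describes above (illustrative only, not code from optimx or nlsr): scale the "small enough" check by .Machine$double.eps for the doubles actually in use, rather than hard-coding a constant that only passes when 80 bit extended registers tighten the arithmetic. The names `value`, `scale` and `factor` are hypothetical, not arguments of the real packages.

    ## Sketch: an epsilon-scaled smallness test instead of a fixed tolerance.
    converged <- function(value, scale = 1, factor = 100) {
      abs(value) < factor * .Machine$double.eps * max(1, abs(scale))
    }
    converged(1e-14)   # TRUE with IEEE 754 binary64 doubles
    converged(1e-9)    # FALSE: would need a looser, problem-specific tolerance

The point is only that the threshold follows the precision of the platform's doubles, whatever the register width.]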
The 80 bit registers (I don't have my original docs with me here in Victoria) seem to have been part of the 1985 standard, to which I was one of the 31 named contributors. See https://stackoverflow.com/questions/612507/what-are-the-applications-benefits-of-an-80-bit-extended-precision-data-type or the Wikipedia item on IEEE 754.

The extended format appears to have been omitted from the 2008 and 2020 versions, but is still (I believe) part of many processors. It is an internal precision for handling multiplications and accumulation, and not one of the storage modes. Most of the time this makes very little difference in results for R, since the extended precision only gets activated in some operations. If we store quantities, we get the regular precision. Thus very few situations using the M? chips give differences, but when they do, it is a nuisance.

There is plenty of scope for debating the pros and cons of extended precision internally. Not having it likely contributes to the speed / bang for the buck of the M? chips. But we do now have occasional differences in outcomes which will lead to confusion and extra work.

JN

On 2024-02-04 15:26, Duncan Murdoch wrote:
> Hi John.
>
> I don't think the 80 bit format was part of IEEE 754; I think it was an Intel invention for the 8087 chip
> (which I believe preceded that standard), and didn't make it into the standard.
>
> The standard does talk about 64 bit and 128 bit floating point formats, but not 80 bit.
>
> Duncan Murdoch
>
> On 04/02/2024 4:47 p.m., J C Nash wrote:
>> Slightly tangential: I had some woes with some vignettes in my
>> optimx and nlsr packages (actually in examples comparing to OTHER
>> packages) because the M? processors don't have the 80 bit registers of
>> the old IEEE 754 arithmetic, so some existing "tolerances" are too
>> small when checking whether a quantity is small enough to "converge",
>> and one gets "did not converge" type errors. There are workarounds,
>> but the discussion is beyond this post. However, it is worth being
>> aware that the code may be mostly correct apart from tests of
>> smallness that are not appropriate for these processors.
>>
>> JN
>>
>> On 2024-02-04 11:51, Dirk Eddelbuettel wrote:
>>>
>>> On 4 February 2024 at 20:41, Holger Hoefling wrote:
>>> | I wanted to ask if people have good advice on how to debug M1Mac package
>>> | check errors when you don't have a Mac? Is a cloud machine the best option
>>> | or is there something else?
>>>
>>> a) Use the 'mac builder' CRAN offers:
>>>    https://mac.r-project.org/macbuilder/submit.html
>>>
>>> b) Use the newly added M1 runners at GitHub Actions,
>>>    https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
>>>
>>> Option a) is pretty good as the machine is set up for CRAN and builds
>>> fast. Option b) gives you more control should you need it.
>>>
>>> Dirk
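[One way to see the point that extended precision only affects intermediate accumulation, while stored quantities are always regular doubles: R's sum() accumulates in long double where the platform provides it, whereas a plain fold stores every partial sum as a 64-bit double. A sketch; whether the two results actually differ depends on the platform and build.

    x  <- rep(1/3, 1e5)
    s1 <- sum(x)                  # may accumulate in long double (80 bit on x86)
    s2 <- Reduce(`+`, x)          # each partial sum stored as a 64-bit double
    capabilities("long.double")   # does this build/CPU use long doubles?
    s1 - s2                       # may be non-zero on x86_64, zero on arm64

Either way, s1 and s2 are ordinary 64-bit doubles once stored.]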
> On Feb 5, 2024, at 12:26 PM, Duncan Murdoch <murdoch.duncan at gmail.com> wrote:
>
> Hi John.
>
> I don't think the 80 bit format was part of IEEE 754; I think it was an Intel invention for the 8087 chip
> (which I believe preceded that standard), and didn't make it into the standard.
>
> The standard does talk about 64 bit and 128 bit floating point formats, but not 80 bit.
>

Yes, the 80 bit format was Intel-specific (motivated by internal operations, not as an external format), but as it used to be the most popular architecture, people didn't quite realize that tests relying on Intel results would be Intel-specific (PowerPC Macs had 128-bit floating point, but they were not popular enough to cause trouble in the same way). The IEEE standard allows "extended precision" formats, but doesn't prescribe their format or precision - and they are optional.

Arm64 CPUs only support 64-bit double precision in hardware (true both on macOS and Windows), so only what is in the basic standard. There are 128-bit floating point solutions in software, but, obviously, they are a lot slower (several orders of magnitude). Apple has been asking the scientific community for its priorities, and 128-bit floating point support was not high on people's list. It is far from trivial, because there is a long list of operations to cover (all variants of the math functions), so I wouldn't expect this to change anytime soon - in fact, once Microsoft's glacial move is complete, we will likely be seeing only 64-bit everywhere.

That said, even if you don't have an arm64 CPU, you can build R with --disable-long-double to get closer to the arm64 results if that is your worry.

Cheers,
Simon

> On 04/02/2024 4:47 p.m., J C Nash wrote:
>> Slightly tangential: I had some woes with some vignettes in my
>> optimx and nlsr packages (actually in examples comparing to OTHER
>> packages) because the M? processors don't have the 80 bit registers of
>> the old IEEE 754 arithmetic, so some existing "tolerances" are too
>> small when checking whether a quantity is small enough to "converge",
>> and one gets "did not converge" type errors. There are workarounds,
>> but the discussion is beyond this post. However, it is worth being
>> aware that the code may be mostly correct apart from tests of
>> smallness that are not appropriate for these processors.
>>
>> JN
>>
>> On 2024-02-04 11:51, Dirk Eddelbuettel wrote:
>>>
>>> On 4 February 2024 at 20:41, Holger Hoefling wrote:
>>> | I wanted to ask if people have good advice on how to debug M1Mac package
>>> | check errors when you don't have a Mac? Is a cloud machine the best option
>>> | or is there something else?
>>>
>>> a) Use the 'mac builder' CRAN offers:
>>>    https://mac.r-project.org/macbuilder/submit.html
>>>
>>> b) Use the newly added M1 runners at GitHub Actions,
>>>    https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
>>>
>>> Option a) is pretty good as the machine is set up for CRAN and builds
>>> fast. Option b) gives you more control should you need it.
>>>
>>> Dirk
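[For anyone checking their own setup, the relevant switches are easy to query at runtime; the configure line below is the option Simon mentions, and the exact invocation depends on your source tree and toolchain.

    ## Does this R build / CPU combination use long doubles at all?
    capabilities("long.double")
    .Machine$double.eps          # 2.220446e-16 for IEEE 754 binary64
    .Machine$longdouble.eps      # present only when long doubles are in use (NULL otherwise)

    ## To mimic arm64 behaviour on x86_64, rebuild R without long doubles
    ## (run in a shell from the R source directory; illustrative only):
    ##   ./configure --disable-long-double && make

Running package tests under such a build is one way to reproduce M1-style tolerance failures without access to a Mac.]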