similar to: [LLVMdev] Representing -ffast-math at the IR level

Displaying 20 results from an estimated 40000 matches similar to: "[LLVMdev] Representing -ffast-math at the IR level"

2012 Apr 17 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Kevin, > 1. Most compiler and back-end control of floating point behavior appears to be > motivated by controlling the loss or gain of a few low bits of precision on > a whole module scale. In fact, these concerns are usually insignificant for > programmers of floating-point intensive applications. The inputs to most > floating point computations have far lower
2012 Apr 14 · 9 replies · [LLVMdev] Representing -ffast-math at the IR level
The attached patch is a first attempt at representing "-ffast-math" at the IR level, in fact on individual floating point instructions (fadd, fsub etc). It is done using metadata. We already have a "fpmath" metadata type which can be used to signal that reduced precision is OK for a floating point operation, eg %z = fmul float %x, %y, !fpmath !0 ... !0 = metadata
2012 Apr 14 · 2 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Dmitry, > I'm not an expert in fp accuracy questions, but I have quite a > lot of experience dealing with fp accuracy problems during compiler transformations. I agree that it's a minefield, which is why I intend to proceed conservatively. > I think you have a step in the right direction, walking away from ULPs, which > are pretty useless for the purpose of describing allowed
2012 Apr 14 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
On Sat, Apr 14, 2012 at 11:44 PM, Duncan Sands <baldrick at free.fr> wrote: > > I think you have a step in the right direction, walking away from ULPs, >> which >> are pretty useless for the purpose of describing allowed fp optimizations >> IMHO. >> But using just "fast" keyword (or whatever else will be added in the >> future) is >> not
2012 Apr 14 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Duncan, I'm not an expert in fp accuracy questions, but I have quite a lot of experience dealing with fp accuracy problems during compiler transformations. I think you have a step in the right direction, walking away from ULPs, which are pretty useless for the purpose of describing allowed fp optimizations IMHO. But using just the "fast" keyword (or whatever else will be added in the
2012 Apr 16 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
Duncan, I have some issues with representing this as a single "fast" mode flag, which mostly boil down to the fact that this is a very C-centric view of the world. And, since C compilers are not generally known for their awesomeness on issues of numerics, I'm not sure that's a good idea. Having something called a "fast" or "relaxed" mode implies that it is
2012 Apr 16 · 1 reply · [LLVMdev] Representing -ffast-math at the IR level
Hi Owen, > I have some issues with representing this as a single "fast" mode flag, it isn't a single flag, that's the whole point of using metadata. OK, right now there is only one option (the "accuracy"), true, but the intent is that others will be added, and the meaning of accuracy tightened, later. MDBuilder has a createFastFPMath method which is intended to
2012 Apr 15 · 3 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Dmitry, > That's possible (I already discussed this with Chandler), but in my opinion is > only worth doing if we see unreasonable increases in bitcode size in real code. > > > What is reasonable or not is defined not only by absolute numbers (0.8% or any > other number). Does it make sense to increase bitcode size by 1% if it's used > only by math library
2012 Apr 14 · 4 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Dmitry, > The kinds of transforms I think can reasonably be done with the current > information are things like: x + 0.0 -> x; x / constant -> x * (1 / constant) if > constant and 1 / constant are normal (and not denormal) numbers. > > > The particular definition is not that important, as the fact that this > definition exists :) I.e. I think we need a
2012 Apr 15 · 1 reply · [LLVMdev] Representing -ffast-math at the IR level
On Sun, Apr 15, 2012 at 1:20 PM, Renato Golin <rengolin at systemcall.org>wrote: > On 15 April 2012 09:07, Duncan Sands <baldrick at free.fr> wrote: > > Link-time optimization will sometimes result in "fast-math" functions > being > > inlined into non-fast math functions and vice-versa. This pretty much > > inevitably means that per-instruction
2012 Apr 14 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
On Sun, Apr 15, 2012 at 1:02 AM, Duncan Sands <baldrick at free.fr> wrote: > Hi Dmitry, > > > The kinds of transforms I think can reasonably be done with the current >> information are things like: x + 0.0 -> x; x / constant -> x * (1 / >> constant) if >> constant and 1 / constant are normal (and not denormal) numbers. >> >> The
2012 Apr 15 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
On 15 April 2012 09:07, Duncan Sands <baldrick at free.fr> wrote: > Link-time optimization will sometimes result in "fast-math" functions being > inlined into non-fast math functions and vice-versa.  This pretty much > inevitably means that per-instruction fpmath options are required. I guess it would be user error if a strict function used the results of a non-strict
2012 Apr 14 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
On 14 April 2012 20:34, Duncan Sands <baldrick at free.fr> wrote: > the verifier checks that the accuracy operand is either a floating point > number (ConstantFP) or the keyword "fast".  If "Accuracy" is zero here > then that means it wasn't ConstantFP.  Thus it must have been the keyword > "fast". I think it's assuming too much. If I write
2012 Apr 14 · 2 replies · [LLVMdev] Representing -ffast-math at the IR level
Hi Renato, > I'm not sure about this: > > + if (!Accuracy) > + // If it's not a floating point number then it must be 'fast'. > + return getFastAccuracy(); > > Since you allow accuracies bigger than 1 in setFPAccuracy(), integers > should be treated as float. Or at least assert. the verifier checks that the accuracy operand is either a floating
2012 Apr 16 · 2 replies · [LLVMdev] Representing -ffast-math at the IR level
Thanks for the updates! Minor comments: + if (!Accuracy) + // If it's not a floating point number then it must be 'fast'. + return HUGE_VALF; Can we add an assert instead of a comment? It's just as documenting and will catch any goofs. + // If it's not a floating point number then it must be 'fast'. + return !isa<ConstantFP>(MD->getOperand(0));
2012 Apr 16 · 0 replies · [LLVMdev] Representing -ffast-math at the IR level
Here's a revised patch, plus patches showing how fpmath metadata could be turned on in clang and dragonegg (it seemed safest for the moment to condition on -ffast-math rather than on one of the flags implied by -ffast-math). Major changes: - The FPMathOperator class can no longer be used to change math settings, only to read them. Currently it can be queried for accuracy info. I split the
2012 Nov 15 · 2 replies · [LLVMdev] X86 rsqrt instruction generated
Hi, We have implemented rsqrt instruction generation for the X86 target architecture. We have introduced a flag, -fp-rsqrt, which controls generation of the X86 rsqrt instruction. We have observed minor effects on precision due to rsqrt and hence have put these transformations under the mentioned flag. Note that -fp-rsqrt is only enabled with the -enable-unsafe-fp-math flag presently.
2002 Oct 17 · 1 reply · underflow handling in besselK (PR#2179)
The besselK() function knows about overflows/underflows internally; there is a constant xmax_BESS_K in src/nmath/bessel.h (and referred to only in bessel_k.c), equal to 705.342, which is checked if expon.scaled is FALSE. (The equivalent number for bessel_i.c is 709, defined as exparg_BESS in bessel.h.) However, besselK(x) silently returns +Inf if x>705.342. This behavior is reasonable for
2009 Jun 19 · 3 replies · Floating point precision / guard digits? (PR#13771)
Full_Name: D Kreil Version: 2.8.1 and 2.9.0 OS: Debian Linux Submission from: (NULL) (141.244.140.179) Group: Accuracy I understand that most floating point numbers are approximated due to their binary storage. On the other hand, I thought that modern math CPUs used guard digits to protect against trivial underflows. Not true? # integers, no problem > 1+1+1==3 [1] TRUE # binary floating