> One issue I found was the cpu behaviour when converting from
> float/double to int32 when the source float is outside the range
> of values that can be represented by the int32.
>
> For instance:
>
>      float             int32_ppc      int32_x86
>      2147483649.0      2147483647    -2147483648
>      2147483648.0      2147483647    -2147483648
>      2147483647.0      2147483647     2147483647  (0x7fffffff)
>      ....
>     -2147483648.0     -2147483648    -2147483648
>     -2147483649.0     -2147483648    -2147483648  (0x80000000)
>     -2147483650.0     -2147483648    -2147483648
>
> As you can see, out-of-range floats are correctly clipped on PPC,
> but only correctly clipped for negative floats on x86.

Well, AFAIK C89 says the behaviour is undefined, so I wouldn't rely on
that too much. Even on PPC, it's quite possible someday the gcc folks
will be able to 1) determine at compile time that a conversion somewhere
will overflow and 2) optimise it out (since it's undefined anyway) or do
other funny things with it. That's why I wouldn't depend on the right
thing happening, even on PPC.

> libsndfile has code a bit like this:
>
>     if (CPU_CLIPS_POSITIVE == 0 && scaled_value >= 1.0 * 0x7FFFFFFF)
>         int_value = 0x7fffffff ;
>
>     if (CPU_CLIPS_NEGATIVE == 0 && scaled_value <= (-8.0 * 0x10000000))
>         int_value = 0x80000000 ;
>
> On PPC, the above two lines get optimised out. On Intel x86, only
> the second gets optimised out.

Sounds a bit dangerous to me, considering what I wrote above. I tend to
go by the following rule: "If your code depends on non-specified
behaviour, sooner or later a gcc developer will come up with a clever
way of breaking it."

>> Do they affect all code or just highly-optimized code?
>
> The float -> int issue affects all code where the source float
> is greater than 0x7fffffff. Yes, it is a rather obscure corner
> case.

Well, how obscure it is really depends on the range of your samples.
I'd tend to say that if you expect something that fits in 16 bits, then
you don't really care what happens if the input is 2^16 times too big,
because it's catastrophic anyway.

> AFAIAC, all code should have a test suite. For universal binary
> builds (and any cross-compile builds), the test suite must be run
> on both platforms.

I'm afraid I haven't been Doing It Your Way so far :-( But I'm quite
willing to integrate contributed test cases :-)

	Jean-Marc
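For reference, a minimal sketch of clamping in software before the cast,
so the result no longer depends on the CPU's out-of-range conversion
behaviour. This is an illustration only, not libsndfile's actual code;
the function name is invented here.

    #include <stdint.h>

    /* Illustrative only (not libsndfile's actual code): clamp the
    ** scaled value to the int32 range before converting, so the
    ** result does not depend on what the CPU does with out-of-range
    ** float->int conversions (undefined in C89).
    */
    static int32_t clamped_to_int32 (double scaled_value)
    {
        if (scaled_value >= 1.0 * 0x7FFFFFFF)       /*  2147483647.0 */
            return 0x7FFFFFFF ;
        if (scaled_value <= -8.0 * 0x10000000)      /* -2147483648.0 */
            return INT32_MIN ;
        return (int32_t) scaled_value ;
    }

Whether the two comparisons can then be compiled out on a given CPU is
exactly the configure-time question discussed below.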
Jean-Marc Valin wrote:

> Well, AFAIK C89 says the behaviour is undefined, so I wouldn't rely on
> that too much. Even on PPC, it's quite possible someday the gcc folks
> will be able to 1) determine at compile time that a conversion somewhere
> will overflow and 2) optimise it out (since it's undefined anyway) or do
> other funny things with it. That's why I wouldn't depend on the right
> thing happening, even on PPC.

Except for one thing. I have an M4 macro which tests for the behaviour
at configure time. The result of that test is then used at compile
time.

I think it's reasonable to expect that the same version of gcc is used
at configure and compile times :-).

> I'm afraid I haven't been Doing It Your Way so far

Never too late to start :-).

> :-( But I'm quite willing to integrate contributed test cases :-)

I'll put it on my todo list. I should get around to it some time
after hell freezes over :-).

Erik
--
-----------------------------------------------------------------
Erik de Castro Lopo
-----------------------------------------------------------------
"To me C++ seems to be a language that has sacrificed orthogonality
and elegance for random expediency." -- Meilir Page-Jones
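A rough sketch of the kind of test program such a configure-time macro
might compile and run; the shape of the real M4 macro is an assumption
here, only the idea (detect whether positive overflow is clipped) comes
from the thread.

    /* Assumed shape of a configure-time test program, not the actual
    ** libsndfile macro. Exit status 0 means positive out-of-range
    ** float->int conversions are clipped to 0x7FFFFFFF, so the
    ** explicit clipping code can be compiled out.
    */
    #include <stdlib.h>

    int main (void)
    {
        volatile double value = 2147483649.0 ;  /* just above INT32 max */
        int converted = (int) value ;           /* volatile stops constant folding */

        return (converted == 0x7FFFFFFF) ? EXIT_SUCCESS : EXIT_FAILURE ;
    }

The exit status would then presumably be turned into something like the
CPU_CLIPS_POSITIVE define used in the snippet quoted earlier.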
>> Well, AFAIK C89 says the behaviour is undefined, so I wouldn't rely on
>> that too much. Even on PPC, it's quite possible someday the gcc folks
>> will be able to 1) determine at compile time that a conversion somewhere
>> will overflow and 2) optimise it out (since it's undefined anyway) or do
>> other funny things with it. That's why I wouldn't depend on the right
>> thing happening, even on PPC.
>
> Except for one thing. I have an M4 macro which tests for the behaviour
> at configure time. The result of that test is then used at compile
> time.
>
> I think it's reasonable to expect that the same version of gcc is used
> at configure and compile times :-).

Not quite. What I'd be worried about is that in some very particular
circumstances, gcc could figure out that "OK, this float->int conversion
at line X is guaranteed to overflow, so I'll just ignore it (or actively
do something stupid)". This could even happen in cases where gcc
compiles N versions of your code (to optimise for different input
patterns), and it could cause one of the versions to have a predictable
overflow.

As long as the input is not predictable at compile time, gcc should be
doing the right thing, which is to use the conversion instruction
blindly. However, as soon as an overflow becomes predictable at compile
time (due to some constraints on the input), it could be dangerous.

It's basically the same thing for integer overflows. If an add or a sub
is guaranteed to overflow, gcc is allowed to optimise it away (not sure
whether it actually does that at the moment, though).

>> I'm afraid I haven't been Doing It Your Way so far
>
> Never too late to start :-).
>
>> :-( But I'm quite willing to integrate contributed test cases :-)
>
> I'll put it on my todo list. I should get around to it some time
> after hell freezes over :-).

OK, let me know when hell freezes over so I can prepare :-)

	Jean-Marc
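To illustrate the integer-overflow point (this example is not from the
thread): because signed overflow is undefined behaviour, the compiler is
allowed to assume it never happens, and may fold an overflow check like
the one below to a constant at higher optimisation levels.

    #include <limits.h>
    #include <stdio.h>

    /* Illustration only: the optimiser may assume x + 1 never wraps
    ** for signed x, and reduce this check to "return 0".
    */
    static int add_one_overflows (int x)
    {
        return x + 1 < x ;
    }

    int main (void)
    {
        printf ("%d\n", add_one_overflows (INT_MAX)) ;
        return 0 ;
    }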