Paul E. McKenney via llvm-dev
2016-Feb-27 23:10 UTC
[llvm-dev] [isocpp-parallel] Proposal for new memory_order_consume definition
On Sat, Feb 27, 2016 at 11:16:51AM -0800, Linus Torvalds wrote:
> On Feb 27, 2016 09:06, "Paul E. McKenney" <paulmck at linux.vnet.ibm.com> wrote:
> >
> > But we do already have something very similar with signed integer
> > overflow. If the compiler can see a way to generate faster code that
> > does not handle the overflow case, then the semantics suddenly change
> > from twos-complement arithmetic to something very strange. The standard
> > does not specify all the ways that the implementation might deduce that
> > faster code can be generated by ignoring the overflow case, it instead
> > simply says that signed integer overflow invokes undefined behavior.
> >
> > And if that is a problem, you use unsigned integers instead of signed
> > integers.
>
> Actually, in the case of the Linux kernel we just tell the compiler
> not to be an ass. We use
>
>   -fno-strict-overflow

That is the one!

> or something. I forget the exact compiler flag needed for "the standard is
> a broken piece of shit and made things undefined for very bad reasons".
>
> See also the idiotic standard C alias rules. Same deal.

For which we use -fno-strict-aliasing.

> So no, standards aren't that important. When the standards screw up, the
> right answer is not to turn the other cheek.

Agreed, hence my current (perhaps quixotic and insane) attempt to get
the standard to do something useful for dependency ordering.

But if that doesn't work, yes, a fallback position is to get the relevant
compilers to provide flags to avoid problematic behavior, similar to
-fno-strict-overflow.

							Thanx, Paul

> And undefined behavior is pretty much *always* a sign of "the standard is
> wrong".
>
>           Linus
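As a minimal sketch of the overflow-check deletion being discussed (illustrative only, not taken from the thread): because signed overflow is undefined, a compiler may assume "x + 1" never wraps and fold the first test below to "return 0"; building with -fno-strict-overflow (or -fwrapv) restores the wrapping behavior the test relies on.

#include <limits.h>

/* Illustrative only: under strict overflow rules the compiler may
 * assume x + 1 never wraps, so this test can be optimized away. */
int add_would_overflow(int x)
{
        return x + 1 < x;       /* UB when x == INT_MAX */
}

/* The standard-conforming alternative mentioned above: test without
 * relying on wrapping arithmetic at all. */
int add_would_overflow_checked(int x)
{
        return x == INT_MAX;
}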
Markus Trippelsdorf via llvm-dev
2016-Feb-28 08:27 UTC
[llvm-dev] [isocpp-parallel] Proposal for new memory_order_consume definition
On 2016.02.27 at 15:10 -0800, Paul E. McKenney via llvm-dev wrote:
> On Sat, Feb 27, 2016 at 11:16:51AM -0800, Linus Torvalds wrote:
> > Actually, in the case of the Linux kernel we just tell the compiler
> > not to be an ass. We use
> >
> >   -fno-strict-overflow
>
> That is the one!
>
> > or something. I forget the exact compiler flag needed for "the standard is
> > a broken piece of shit and made things undefined for very bad reasons".
> >
> > See also the idiotic standard C alias rules. Same deal.
>
> For which we use -fno-strict-aliasing.

Do not forget -fno-delete-null-pointer-checks.

So the kernel obviously is already using its own C dialect, that is
pretty far from standard C. All these options also have a negative
impact on the performance of the generated code.

-- 
Markus
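A rough illustration of the hazard behind -fno-delete-null-pointer-checks (hypothetical structure and function names, not from the thread): because the dereference happens before the check, the compiler may conclude the pointer cannot be NULL and delete the check, which in a kernel context can turn a NULL-dereference bug into an exploitable hole.

struct device {
        int stats;
};

int get_stats(struct device *dev)
{
        int s = dev->stats;     /* dereference first... */

        if (!dev)               /* ...so this check may be deleted */
                return -1;

        return s;
}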
Linus Torvalds via llvm-dev
2016-Feb-28 16:13 UTC
[llvm-dev] [isocpp-parallel] Proposal for new memory_order_consume definition
On Sun, Feb 28, 2016 at 12:27 AM, Markus Trippelsdorf
<markus at trippelsdorf.de> wrote:
>> >
>> > -fno-strict-overflow
>>
>> -fno-strict-aliasing.
>
> Do not forget -fno-delete-null-pointer-checks.
>
> So the kernel obviously is already using its own C dialect, that is
> pretty far from standard C.
> All these options also have a negative impact on the performance of the
> generated code.

They really don't.

Have you ever seen code that cared about signed integer overflow?

Yeah, getting it right can make the compiler generate an extra ALU
instruction once in a blue moon, but trust me - you'll never notice.
You *will* notice when you suddenly have a crash or a security issue
due to bad code generation, though.

The idiotic C alias rules aren't even worth discussing. They were a
mistake. The kernel doesn't use some "C dialect pretty far from
standard C". Yeah, let's just say that the original C designers were
better at their job than a gaggle of standards people who were making
bad crap up to make some Fortran-style programs go faster.

They don't speed up normal code either, they just introduce undefined
behavior in a lot of code.

And deleting NULL pointer checks because somebody made a mistake, and
then turning that small mistake into a real and exploitable security
hole? Not so smart either.

The fact is, undefined compiler behavior is never a good idea. Not for
serious projects.

Performance doesn't come from occasional small and odd
micro-optimizations. I care about performance a lot, and I actually
look at generated code and do profiling etc. None of those three
options have *ever* shown up as issues. But the incorrect code they
generate? It has.

              Linus
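For completeness, a minimal sketch of the aliasing rule in question (illustrative only): under strict aliasing the compiler may assume an int * and a float * never refer to the same storage, so it can cache *i across the store through f; -fno-strict-aliasing disables that assumption, and a union or memcpy() is the strictly conforming way to type-pun.

int alias_example(int *i, float *f)
{
        *i = 1;
        *f = 2.0f;              /* assumed not to modify *i under strict aliasing */
        return *i;              /* may be folded to "return 1" */
}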