Given variables a, b, c, d of compatible scalar arithmetic types, consider the expression:

  a + b + c + d

There are multiple implementation orders for computing this sum, and the one you want can depend on the source language specification. In particular, the + operation must not be treated as associative if these are floating point values and we care about error ranges.

It is not obvious to me from the IR specification how the front end can specify prescriptively that a particular order of operation is required.

I'm probably missing something very obvious here.

shap
On Mar 25, 2008, at 8:32 PM, Jonathan S. Shapiro wrote:

> Given variables a, b, c, d of compatible scalar arithmetic types,
> consider the expression:
>
>   a + b + c + d
>
> There are multiple implementation orders for computing this sum, and
> the one you want can depend on the source language specification. In
> particular, the + operation must not be treated as associative if
> these are floating point values and we care about error ranges.
>
> It is not obvious to me from the IR specification how the front end
> can specify prescriptively that a particular order of operation is
> required.
>
> I'm probably missing something very obvious here.

LLVM IR is three address code, not a tree form. This requires the front-end to pick an ordering that works for it explicitly as it lowers to LLVM IR.

-Chris
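For concreteness, a left-to-right lowering of a + b + c + d into LLVM IR three-address code might look like the sketch below. This is not part of the original messages; the function and value names are invented, and the syntax is that of a recent LLVM release. The order of evaluation is fixed simply by which operands each fadd instruction names:

    ; ((a + b) + c) + d, evaluated strictly left to right
    define double @sum(double %a, double %b, double %c, double %d) {
    entry:
      %t1 = fadd double %a, %b    ; a + b
      %t2 = fadd double %t1, %c   ; (a + b) + c
      %t3 = fadd double %t2, %d   ; ((a + b) + c) + d
      ret double %t3
    }

A front end that wanted a different association would emit the fadd instructions in a different shape; nothing more needs to be said in the IR to obtain that initial order.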
On Tue, 2008-03-25 at 20:42 -0700, Chris Lattner wrote:

> LLVM IR is three address code, not a tree form. This requires the
> front-end to pick an ordering that works for it explicitly as it
> lowers to LLVM IR.

I got that much. But I assume that optimization passes, if used, are entitled to rewrite the IR. For example: ANSI C requires that certain types of parenthesization be honored rigorously, while other operations can legally be combined or reordered. How does the front end specify in its IR emission which kinds are which, so that the optimizer knows which parts it is not permitted to rearrange?
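To make the question concrete, a hypothetical lowering of the explicitly parenthesized expression (a + b) + (c + d) is sketched below (again, the names are invented). The parentheses survive only as the shape of the instruction sequence; nothing in the instructions shown distinguishes an association the source language requires from one the front end happened to choose, which is exactly what the question is asking about:

    ; (a + b) + (c + d), with the parenthesization encoded only as
    ; the structure of the three-address instructions
    define double @paren_sum(double %a, double %b, double %c, double %d) {
    entry:
      %u1 = fadd double %a, %b    ; (a + b)
      %u2 = fadd double %c, %d    ; (c + d)
      %u3 = fadd double %u1, %u2  ; (a + b) + (c + d)
      ret double %u3
    }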