On 26 April 2013 19:49, Tim Northover <t.p.northover at gmail.com> wrote:
>> To me, the expanding of regular IR will achieve nearly the same result
>> as building a lower level IR.
>
> Remember that we basically already have a lower level IR consisting of
> basic blocks of MachineInstrs at the moment. To an extent this has
> already proven itself capable of modelling targets, and making it a
> first-class IR might be a reasonable amount of work (certainly easier
> than SelectionDAG).

This is the point I was going to make, and I think Tim hit the core of it.

My (weak) opinion is that adding lowering information to the IR is NOT the
same as building another, lower-level IR. It would open doors to places we
don't want to go: intermixing different levels, allowing physical registers
to be named in IR, forcing many optimizations to worry about lower-level
IR, and so on. I see it with the same disgust as I see inline assembly in
C code.

MachineInstrs is a lower-level description that is clearly separated from
the LLVM IR and has been converted from it for a long time. As Chris said,
a stable high-level IR is very important for front-end and optimization
developers, but back-end developers need to tweak it to make it work on
their architectures.

My conclusion is that we might need to formalize a low-level IR, based on
MIs, and allow it a very loose leash.

Why formalize if we already have it working, you ask? I think that even a
feeble formalization will improve how code is shared among different
back-ends. It would also be an easy route to a new legalization framework
without having to deprecate much code, and without leaving too much old
code dangling in less active back-ends. Each step of stronger formalization
can be taken in its own time, implemented on most back-ends, iteratively.

As Evan said, whatever we do, this move will take years to complete, much
like MC.
So we had better plan for something that will be stable throughout the
years, rather than try for something quick and drastic and have hundreds
of new bugs dangling for years with no good solution.

My tuppence.

cheers,
--renato
On Fri, Apr 26, 2013 at 2:09 PM, Renato Golin <renato.golin at linaro.org> wrote:
> On 26 April 2013 19:49, Tim Northover <t.p.northover at gmail.com> wrote:
>>> To me, the expanding of regular IR will achieve nearly the same result
>>> as building a lower level IR.
>>
>> Remember that we basically already have a lower level IR consisting of
>> basic blocks of MachineInstrs at the moment. To an extent this has
>> already proven itself capable of modelling targets, and making it a
>> first-class IR might be a reasonable amount of work (certainly easier
>> than SelectionDAG).
>
> This is the point I was going to make and I think Tim hit the core of it.
>
> My (weak) opinion is that:
>
> Adding lowering information to the IR is NOT the same as building
> another, lower-level IR. It'll open doors to places we don't want to go,
> like intermixing different levels, allowing for physical registers to be
> named in IR, changing many optimizations to worry about lower level IR,
> etc. I see it with the same disgust as I see inline assembly in C code.

To all, I'm moving on and accepting what appears to be the consensus of
the list, for now.

That said, I believe it would be easy to have levels and prohibit mixing.
Just have the Verifier pass reject the new intrinsics. A new
CodeGenVerifier pass could be added which accepts them, and tools would
run the verifier for the kind of input they expect. There'd be no need to
change any existing optimizers, and no need even to add new text to
LangRef; the new intrinsics would be documented elsewhere.

Also, there's no proposal here for physical registers, non-SSA registers,
or anything else like that. I think people are making slippery-slope
arguments here, but I also think that a change which requires modifying
the optimizers would be a point where the slippery slope could be
practically bounded.

Dan
On Fri, Apr 26, 2013 at 11:33 PM, Dan Gohman <dan433584 at gmail.com> wrote:
> To all, I'm moving on and accepting what appears to be the consensus of
> the list, for now.

I want to point out something about this direction that hasn't really come
up but deserves some better discussion. I don't think it should be the
basis of a decision one way or the other; it's more a consequence of the
decision.

At the IR level, we have some great infrastructure that doesn't exist at
the MI level:

- The pass management tools.
- A verifier that can be run before and after any pass to check the basic
  invariants.
- The ability to serialize and deserialize to/from a human-understandable
  (and authorable) form.

I think before we invest in *significantly* more complexity and logic in
the MI layer of the optimizer, we will need it to have these three things.
Without them, the work will be considerably harder, and we will continue
to be unable to do fine-grained testing during the development of new
features. We might not need all of the capabilities we have in the IR, but
I think we'll need at least those used to orchestrate fine-grained testing
and validation.

Of course, adding these to MI would be of great benefit to any number of
other aspects of LLVM's development. I am *not* arguing we should eschew
MI because it lacks these things. I just want people to understand that
part of the cost of deciding that MI is the right layer for this is
needing to invest in these pieces of the MI layer.