Hi Chris,

> I just meant -O3 as an example. I'd expect all -O levels to have the
> same behavior. -O3 may run passes which are more "lossy" than -O1
> does though, and I'd expect us to put the most effort into making
> passes run at -O1 update debug info.

I'm not really sure that you could divide passes into "lossy" and "not so
lossy" that easily.

For example, SimplifyCFG will be run at every -O level. This would imply
that it must be a "not so lossy" pass, since we don't want to completely
thrash debugging info at -O1. However, being "not so lossy" would probably
mean that SimplifyCFG will have to skip a few simplifications. So, you will
have to choose between two goals:
 1) -g should not affect the generated code
 2) Transformations should preserve as much debug info as possible

I can't think of any way to properly combine these goals. It seems that
goal 1) is more important to you, so giving 1) more priority than 2) could
work out, at least for the llvm-gcc code which defines optimization levels
in this way.

However, I can imagine that having a -preserve-debugging flag, in addition
to the -O levels, would be very much welcome for developers (which would
then make goal 2) more important than 1)). Perhaps not so much as an option
to llvm-gcc, but even more so when using llvm as a library to create a
custom compiler.

Do you agree that goal 2) should be possible (even in the long term), or do
you think that llvm should never need it? In the latter case, I'll stop
discussing this, because for our project we don't really need it (though I
would very much like it myself, as an individual developer).

Say we do want goal 2) to be possible (of course not at the same time as
goal 1)), some kind of debug info preservation level is required AFAICS (I
can't think of any other solution anyway; a rough sketch of what such a
level could look like follows at the end of this mail). Now, even if we
think that goal 1) is more important in the short term, I would still
suggest implementing this level right now. Even though support for goal 2)
will not be complete right away (we can focus on 1) first), easy cases
could be caught immediately. I'm afraid that only focusing on 1) now and
adding 2) later might cause a lot of extra work and missed corner cases.
However, I might be completely miscalculating this. If you think that this
will not be a problem, or not a significant problem, I'll stop discussing
it as well, and just commit my changes to get us to goal 1).

> These three levels are actually a completely different approach, on an
> orthogonal axis (reducing the size of debug info).

I'm not really sure what you mean by this. The idea behind the levels is to
find the balance in the optimization vs debug info completeness tradeoff.

I totally agree with keeping debug info consistent in all cases. Problems
occur when an optimization can't keep the debug info fully consistent: it
must then either remove debug info or refrain from performing the
optimization. These levels will determine the balance between those two
options. Throwing away more debug info will obviously reduce the size of
the debug info, but that's in no way a goal, only a side effect.

> I actually disagree strongly with these three levels, as the assumption is
> that we are willing to allow different codegen to get better debug info.

Yes, this is indeed a tradeoff that I want to be able to make (see above).
This seems to be the fundamental point in this discussion :-)

> I think that codegen should be controlled with -O (and friends) and that
> -g[123] should affect the size of debug info (e.g. whether macros are
> included, etc).
> If the default "-g" option corresponded to "-g2", then I think it would
> make sense for "-g1" to never emit location lists for example, just to
> shrink debug info.

I think that having the multiple -g options you describe is yet another
axis, one that is related to which debug info is generated in the first
place.

> 3. On an orthogonal axis (related to -g[123]), if an optimization is
> capable of updating information, but doing so would generate large
> debug info and the user doesn't want it - then it might choose to just
> discard the debug info instead of doing the update.

I'm not sure when an optimization would be generating "large debug info",
but I'm not talking about any such thing.

> Does this seem reasonable?

I think we're at least getting closer to making our points of view clear
:-)

Gr.

Matthijs
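[To make the proposed preservation level concrete, here is a minimal
sketch. Nothing below exists in LLVM: the flag name is taken from the
-preserve-debugging idea in the mail above, and the enum, option values,
and guard are hypothetical, illustrative names only.]

    // Hypothetical sketch of the proposed "debug info preservation level"
    // as a command-line option that passes could consult before a lossy
    // transformation. None of these names exist in LLVM.
    #include "llvm/Support/CommandLine.h"

    enum class DebugPreservation {
      None,    // goal 1: -g never changes the generated code
      Partial, // middle ground: skip a transform only when the loss is large
      Full     // goal 2: never perform a transform that loses debug info
    };

    static llvm::cl::opt<DebugPreservation> PreserveDebug(
        "preserve-debugging", // hypothetical flag name from the mail above
        llvm::cl::desc("How hard optimizations try to keep debug info"),
        llvm::cl::values(
            clEnumValN(DebugPreservation::None, "none", "favor codegen"),
            clEnumValN(DebugPreservation::Partial, "partial", "balance"),
            clEnumValN(DebugPreservation::Full, "full", "favor debug info")),
        llvm::cl::init(DebugPreservation::None));

    // A pass would then guard its lossy simplifications roughly like:
    //   if (PreserveDebug == DebugPreservation::Full && losesDebugInfo(X))
    //     return false; // leave the code unchanged

The default of "none" matches goal 1), so the level changes nothing
unless a user explicitly asks for more preservation.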
On Jul 23, 2008, at 8:08 AM, Matthijs Kooijman wrote:

> Hi Chris,
>
>> I just meant -O3 as an example. I'd expect all -O levels to have the
>> same behavior. -O3 may run passes which are more "lossy" than -O1
>> does though, and I'd expect us to put the most effort into making
>> passes run at -O1 update debug info.
> I'm not really sure that you could divide passes into "lossy" and
> "not so lossy" that easily.
>
> For example, SimplifyCFG will be run at every -O level. This would
> imply that it must be a "not so lossy" pass, since we don't want to
> completely thrash debugging info at -O1.

Totally agreed,

> However, being "not so lossy" would probably mean that SimplifyCFG
> will have to skip a few simplifications. So, you will have to choose
> between two goals:
> 1) -g should not affect the generated code
> 2) Transformations should preserve as much debug info as possible
>
> I can't think of any way to properly combine these goals. It seems
> that goal 1) is more important to you, so giving 1) more priority
> than 2) could work out, at least for the llvm-gcc code which defines
> optimization levels in this way.

I don't see how choosing between the two goals is necessary, you can
have both. Take a concrete example, turning:

    if (c) {
      x = a;
    } else {
      x = b;
    }

into:

    x = c ? a : b

This is a case where our debug info won't be able to represent the
xform correctly: if the select instruction is later expanded back out
to a diamond in the code generator, we lost the line # info for the two
assignments to x and the user won't be able to step into it. If the
code generator doesn't expand it, you still have the same experience
and there is no way to represent (in machine code) the original
behavior.

That said, it doesn't really matter. This is an example where
simplifycfg can just discard the line # info and we accept the loss of
debug info. Even when run at -O1, I consider this to be acceptable. (A
sketch of what such a discard looks like follows at the end of this
mail.)

My point is that the presence of debug info should not affect what
xforms get done, and that (as a Quality of Implementation issue) xforms
should ideally update as much debug info as they can. If they can't (or
it is too much work to) update the debug info, they can just discard
it.

> However, I can imagine that having a -preserve-debugging flag, in
> addition to the -O levels, would be very much welcome for developers
> (which would then make goal 2) more important than 1)). Perhaps not
> so much as an option to llvm-gcc, but even more so when using llvm as
> a library to create a custom compiler.

Why? Is this an "optimize as hard as you can without breaking debug
info" flag? Who would use it (what use case)?

> Do you agree that goal 2) should be possible (even in the long term),
> or do you think that llvm should never need it? In the latter case,
> I'll stop discussing this, because for our project we don't really
> need it (though I would very much like it myself, as an individual
> developer).

I won't block such progress from being implemented, but I can't imagine
llvm-gcc using it. I can see how it would make sense in the context of
a JVM, when debugging hooks are enabled. Assuming that running at -O0
is not acceptable, this is a potential use case.

>> These three levels are actually a completely different approach, on
>> an orthogonal axis (reducing the size of debug info).
> I'm not really sure what you mean by this.
> The idea behind the levels is to find the balance in the optimization
> vs debug info completeness tradeoff.

There is no balance here, the two options are:

1) debug info never changes generated code.
2) optimization never breaks debug info.

The two are contradictory (unless all optimizations can perfectly
update debug info, which they can't), so it is hard to balance them :).
My perspective follows from use cases I imagine for the C family of
languages: I'll admit that other languages may certainly want #2. Can
you talk about why you want this?

> I totally agree with keeping debug info consistent in all cases.
> Problems occur when an optimization can't keep the debug info fully
> consistent: it must then either remove debug info or refrain from
> performing the optimization.

In my proposal, the answer is to just remove the debug info, as above
with the simplifycfg case.

>> I think that codegen should be controlled with -O (and friends) and
>> that -g[123] should affect the size of debug info (e.g. whether
>> macros are included, etc). If the default "-g" option corresponded
>> to "-g2", then I think it would make sense for "-g1" to never emit
>> location lists for example, just to shrink debug info.
> I think that having the multiple -g options you describe is yet
> another axis, one that is related to which debug info is generated in
> the first place.

Fair enough.

>> Does this seem reasonable?
> I think we're at least getting closer to making our points of view
> clear :-)

:)

-Chris
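[As a minimal sketch of the diamond-to-select fold Chris describes, and
of simply discarding the line info when it cannot be represented: this
is not the actual SimplifyCFG code, and it is written against today's
LLVM C++ API, which postdates this thread; the function name is
illustrative only. Real code would also rewire the CFG and erase the
phi, which is omitted here.]

    // Sketch: fold "if (c) x = a; else x = b;" into a select, dropping
    // the per-assignment line info that a single select cannot carry.
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    Value *foldDiamondToSelect(PHINode *Phi, Value *Cond,
                               Instruction *InsertPt) {
      Value *TrueV = Phi->getIncomingValue(0);   // the "x = a" value
      Value *FalseV = Phi->getIncomingValue(1);  // the "x = b" value
      IRBuilder<> B(InsertPt);
      Value *Sel = B.CreateSelect(Cond, TrueV, FalseV, "x");
      // The two assignments came from different source lines; no single
      // line fits the select, so the location is deliberately dropped:
      // goal 1 (same codegen with or without -g) wins over goal 2.
      if (auto *I = dyn_cast<Instruction>(Sel))
        I->setDebugLoc(DebugLoc()); // explicitly no location
      return Sel;
    }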
Hi guys,

>> However, I can imagine that having a -preserve-debugging flag, in
>> addition to the -O levels, would be very much welcome for developers
>> (which would then make goal 2) more important than 1)). Perhaps not
>> so much as an option to llvm-gcc, but even more so when using llvm
>> as a library to create a custom compiler.
>
> Why? Is this an "optimize as hard as you can without breaking debug
> info" flag? Who would use it (what use case)?

For those of us writing parallel and concurrent code this would be
useful. Races may only manifest themselves under certain conditions
that are triggered by optimized code, and tracking them down is really
hard in the absence of debug information. If one of the debug info
preserving optimizations is the one triggering the race, then having
this option would help out. Of course, if it's a destructive
optimization then we are out of luck, but one can always hope.

As a side note, I'm talking about both races introduced by reordering
optimizations due to missing fences, and races that always exist but
occur much more frequently because of the increased speed of the
optimized code (a tiny example of the first kind follows below).

Luke
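[An illustrative example, not taken from the thread, of the first kind
of race Luke mentions. `ready` is shared with another thread with no
fence or atomic, so the data race is undefined behavior and the
compiler may legally hoist the load out of the loop; the bug then shows
up only in optimized builds, exactly where debug info is weakest.]

    // Racy busy-wait that misbehaves only when optimized.
    bool ready = false; // written by another thread, unsynchronized

    void wait_for_ready() {
      // At -O0 this re-reads `ready` each iteration and eventually exits.
      // At -O2 the load can be hoisted, turning the loop into `for (;;);`.
      while (!ready) {
      }
    }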
Hi Chris,

>> 1) -g should not affect the generated code
>> 2) Transformations should preserve as much debug info as possible
>
> I don't see how choosing between the two goals is necessary, you can
> have both. Take a concrete example, turning:
>
>   if (c) {
>     x = a;
>   } else {
>     x = b;
>   }
>
> into:
>
>   x = c ? a : b
>
> This is a case where our debug info won't be able to represent the
> xform correctly

So that directly means choosing between two goals: you can either do
the transformation, but change debug info, or you can keep debug info
(and thus single stepping capabilities, for example) intact, but that
changes the resulting code output.

> Why? Is this an "optimize as hard as you can without breaking debug
> info" flag? Who would use it (what use case)?

The use case I see is when a bug is introduced or triggered by a
transformation. I.e., I observe a bug in my program. I compile with -g
-O0 (since I want full stepping capabilities), but now the bug is gone.
So, I compile with -g -O2 and the bug is back, but my debugging info is
severely crippled, making debugging a lot less fun. In this case,
having the option of making the compiler try a bit harder to preserve
debugging info is useful to ease debugging. As pointed out by Luke,
there are areas in which this is particularly important (he names
parallel programming and synchronization). I do think that the weak
point of this argument is that the best it gets you is that debugging
might get easier, if you're lucky, but it might also make the bug
vanish again.

To make this more specific, however, say that I have two nested loops:

    do {
      do {
        foo();
      } while (a());
      bar();
    } while (b());

When compiled, the loop header of the inner loop is in a lot of cases
an empty block, containing only phi nodes and an unconditional branch
instruction. (Not sure if the above example does this, I don't have
clang or llvm-gcc at hand atm.) There is code in simplifycfg to remove
such a block, which is possible in a lot of cases. However, when
debugging info is enabled, a stoppoint will be generated inside such a
block. This stoppoint represents the start of the inner loop (i.e.,
just before the inner loop is executed for the first time, not the
beginning of every iteration). By default (and in your approach,
always) the basic block is removed and the stoppoint thrown away. This
means that a fairly useful stoppoint is removed, even at -O1 (since
simplifycfg will run then).

I can see that in most cases, debugging at -O0 is probably sufficient.
However, I can't help thinking that even in a debugging build, perfect
debugging info should be combinable with (partial) optimization. I'm no
longer sure that it is as important as I initially thought, though it
still feels like a shame if we would have no way whatsoever to be a bit
more conservative about throwing away debug info.

> There is no balance here, the two options are:
>
> 1) debug info never changes generated code.
> 2) optimization never breaks debug info.
>
> The two are contradictory (unless all optimizations can perfectly
> update debug info, which they can't), so it is hard to balance
> them :).

Especially because these two options are so contradictory, I can see a
third option in the middle. The above two options, corresponding to the
outer levels, are easy: if you can't update debug information through a
transformation, you either ignore that (option 1) or leave the code
unchanged (option 2). An extra middle level would try to find a
balance.
If the loss of debug info is "small", you go ahead with the
transformation, but if you lose "a lot" of debug information, you leave
the code alone. The tricky part here is to define where the border
between "small" and "a lot" is, but that could be left a little vague
(a sketch of such a guard follows at the end of this mail).

> My perspective follows from use cases I imagine for the C family of
> languages: I'll admit that other languages may certainly want #2.
> Can you talk about why you want this?

As stated above, I don't have a particularly solid reason, other than a
decent hunch of usefulness. Since Devang originally proposed the
three-level scheme (I originally thought of having two levels only),
perhaps he has some particular motivation to add to this discussion?
:-)

How would it be to add the proposed debugging levels, update some of
the passes and see how it turns out? I'm not sure I can invest enough
time to fully see this one through, though, since I'm going from
fulltime to one day per week after next week...

If we would add such a level, would you agree that the PassManager is a
good place to store it?

Gr.

Matthijs
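[A hypothetical sketch of the closing question and of the "small" vs "a
lot" guard: none of these members exist on LLVM's real PassManager, and
all names here are illustrative stand-ins. It reuses the
DebugPreservation enum from the sketch earlier in the thread.]

    // Storing the proposed level on the pass manager so passes can query it.
    enum class DebugPreservation { None, Partial, Full };

    class PassManagerWithDebugLevel { // stand-in for llvm::PassManager
      DebugPreservation Level = DebugPreservation::None;
    public:
      void setDebugPreservation(DebugPreservation L) { Level = L; }
      DebugPreservation getDebugPreservation() const { return Level; }
    };

    // A pass such as simplifycfg could then guard the empty-loop-header
    // removal from the mail above. Dropping a single "start of inner
    // loop" stoppoint is treated here as a "small" loss, so only the
    // strictest level refuses the transformation; where exactly that
    // border lies is the deliberately vague part.
    bool mayRemoveEmptyLoopHeader(const PassManagerWithDebugLevel &PM,
                                  bool BlockCarriesStoppoint) {
      if (!BlockCarriesStoppoint)
        return true; // nothing to lose: always remove the block
      return PM.getDebugPreservation() != DebugPreservation::Full;
    }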