On Thu, Apr 21, 2011 at 4:56 PM, Michael Clagett <mclagett at hotmail.com>
wrote:
> It strikes
> me that the more optimizations applied to code (whether at the source code,
> byte code, intermediate language, or assembly level, the farther from the
> original source is the resulting optimized code base likely to drift. This
> would, I'm pretty sure, complicate whatever language debugging capabilities
> one puts in place and make it more difficult to keep code execution
> aligned with a source code view in a step-debugging context.
>
> Does anyone know of any good sources for getting a handle on this issue and
> understanding strategies that IDE writers adopt to allow people to step
> through code that has been optimized?
The most important thing, obviously, is to make sure the compiler
generates debug information[1] and then to use a debugger that
understands the format the backend emits (DWARF on most non-Windows
platforms -- I'm not sure whether LLVM supports any other debug
format at the moment, actually).
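
In practice that means building with both -g and your optimization
flags, then running the result under a DWARF-aware debugger. A
minimal sketch (the file name and debugger session are just
illustrative):

    /* example.c
     * build: clang -g -O2 example.c -o example
     * debug: gdb ./example   (then e.g. "break main", "step")
     */
    #include <stdio.h>

    static int square(int x) {
        return x * x;  /* likely inlined at -O2; the debug info
                          should still map it back to this line */
    }

    int main(void) {
        int total = 0;
        for (int i = 0; i < 10; ++i)
            total += square(i);
        printf("total = %d\n", total);
        return 0;
    }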
Many of the optimizers (maybe all of them by now, I'm not sure) try
to update the debug information to the best of their ability, but
it's unavoidable that the debugging experience will deteriorate for
some optimized code. (For instance, stepping may jump back and forth
between consecutive source lines if the compiler decides it's best to
execute them in an interleaved manner.)
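
To make that last point concrete, here's a sketch (whether the
reordering actually happens depends on the compiler and target) of
two independent source lines whose instructions the scheduler is free
to interleave, so single-stepping at -O2 can bounce between them:

    /* interleave.c
     * build: clang -g -O2 interleave.c -o interleave
     */
    #include <stdio.h>

    int main(void) {
        volatile int a = 3, b = 4;
        int x = a * a + a;  /* line A: these instructions ...      */
        int y = b * b + b;  /* line B: ... may be mixed with these */
        printf("x=%d y=%d\n", x, y);
        return 0;
    }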
[1]: Documentation at http://llvm.org/docs/SourceLevelDebugging.html