James Henderson via llvm-dev
2020-Jan-28 09:59 UTC
[llvm-dev] DWARF debug line error handling changes
On Mon, 27 Jan 2020 at 23:22, David Blaikie <dblaikie at gmail.com> wrote:

> Thanks for all the work & context, James!

Thanks for the reviews!

> On Mon, Jan 27, 2020 at 5:51 AM James Henderson <jh7370.2008 at my.bristol.ac.uk> wrote:
>
>> Hi all,
>>
>> Since December, I've made several changes to the DWARF debug line parser to improve its error handling. Some examples include removing redundant error checks[1] and making address size mismatches non-fatal to parsing[2], with several more about to land or being planned.
>>
>> David Blaikie recommended I post something to give a bit more context to these changes. I am a member of Sony's Linker and Binutils team, and as part of the linking process, we make updates to the debug line section in order to handle GC-section processing and similar modifications. For these operations to be safe, we need the input line program to be in a good state. As a result, in addition to the existing LLVM error checks, we added several downstream checks. These were added downstream at the time for speed, but we have always intended to bring them to the wider community. We now have the time to do this.
>>
>> There are, broadly speaking, two kinds of changes we are making:
>>
>> 1) Adding new checks to make sure the table is valid. Where possible, these are made as late as is reasonable. That way, we don't emit errors or warnings until necessary, and potentially not at all in some cases.
>>
>> 2) Making existing error checks less fatal, i.e. changing the code to allow the parser to continue even though something doesn't look right. This allows consumers to gather more information, whether for display or otherwise. In several cases, the change is to assume that the length recorded in a field is correct, even if it doesn't match what the standard says the length should be.
>
> If the standard conflicts with the length when we read the length (without reading further) - yeah, I think we should fail/warn/something there (as I think we do - if you have a length that's not long enough for the header we're expecting to read, for instance). And if that means this contribution can't be parsed (e.g. the header's truncated), we treat it that way (this contribution is broken) and can still go on to the next contribution and read that.

I think the code now uses the error callbacks for all potential length conflicts (or will do after I land some more warnings - I haven't checked the current state). By default, this results in warnings, but clients can do extra things like ignoring the table and continuing from the next one. With my latest changes, the parser continues reading the header until the actual end for the file format, even if that goes beyond the claimed header end, reporting a warning if it does so, and filling in zeroes if it runs out of data, as per the DataExtractor API. The parser then starts parsing the body from the claimed header end until the unit length is reached. This provides the maximum possible information (useful for dumpers), but still with the ability to stop (useful for cases where correctness is more important).

>> This won't always be the right thing to do (it might be that just the length field is correct), but providing more information, together with a warning indicating that something may not be quite right, is surely more useful than refusing to give the extra information.
>> I picked the recorded length because other areas of the code already assume it to be correct, and use it to iterate onto other parts of the line table section.
>
> Hmm, I thought I saw somewhere (in the many active reviews at the moment, some from Sony, some from other folks - I haven't got the full picture myself) that someone, maybe Paul, said something about stopping/not continuing when we found an error in a previous contribution - which doesn't sound right (and is different from what you've just said here). Ah well, maybe I'm hallucinating.

Paul actually said in D71255 that we try to continue from the next contribution, although I did previously make an incorrect statement about length errors preventing this from happening. That being said, it is entirely up to the client whether or not to continue to the next contribution if they see something bad.

> It might be interesting to have some harder errors if contributions ever overlap (if two CUs point to two different parts of debug_line, for instance, and one of those pointers + parsed length overlaps with the other pointer). That's a bad place to be and may mean we should give up on parsing any of debug_line, or at least any unit that has such an overlap - but it's harder to detect.

This sounds like it's a little outside the scope of the low-level debug line parser, and more something that belongs in DWARFContext or the verifier. That being said, some sort of warning would make sense (it would take some pretty crafty line table building to get two valid line tables that overlap).

> The alternative, I suppose, is to go the other way (and that's something that might be good/important to do at some point) and not parse any contributions that aren't pointed to by a CU (well, debug_line in DWARFv5 is designed to be parsed standalone too - and maybe everything in DWARFv5 is designed to be parsed standalone, but we don't know whether a section contains only v5 contributions; debug_line maybe should've changed name so we'd know it only contained v5 contributions - like loclists and rnglists - ah well...).

I think in some contexts it may be worth only parsing contributions pointed to by CUs, but again, this would be a higher-level decision. Indeed, Sony's linker modifications don't parse .debug_info at all, only .debug_line, so this wouldn't work for us.

> Just rambling at this point.
>
> - Dave

>> I hope this provides greater context for these changes!
>>
>> Regards,
>>
>> James
>>
>> [1] https://github.com/llvm/llvm-project/commit/418cd8216b41f4c08e0e1b22feda381d9b2345da
>> [2] https://github.com/llvm/llvm-project/commit/07804f75a6cc506fada40c474f1e60840ce737d8
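[Editor's note: the iteration James refers to - trusting the recorded unit_length to step from one .debug_line contribution to the next, with the client deciding whether to carry on after a warning - can be sketched roughly as follows. This is only an illustration under assumed names (parseAllTables, TableParser and WarningHandler are invented for this sketch and are not LLVM API), and it assumes DWARF32's 4-byte length field.]

#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

// Hypothetical driver loop (not the real LLVM API) sketching how the
// recorded unit_length is trusted to advance from one .debug_line
// contribution to the next, even after a warning about the current one.
using WarningHandler = std::function<void(const std::string &)>;
// Stand-in for "parse one line table at Offset and return its unit_length".
using TableParser = std::function<uint64_t(uint64_t Offset, WarningHandler)>;

void parseAllTables(uint64_t SectionSize, TableParser ParseOneTable,
                    WarningHandler Warn) {
  uint64_t Offset = 0;
  while (Offset < SectionSize) {
    uint64_t UnitLength = ParseOneTable(Offset, Warn);
    // 4 == size of a DWARF32 unit_length field; DWARF64 would differ.
    if (UnitLength == 0 || Offset + 4 + UnitLength > SectionSize) {
      Warn("cannot advance past malformed line table contribution");
      break; // a client could instead stop at the first warning it sees
    }
    // Trust the recorded length to find the next contribution, as other
    // parts of the code already do.
    Offset += 4 + UnitLength;
  }
}

int main() {
  // Toy usage: three "tables" of 20 payload bytes each in a 72-byte section.
  parseAllTables(
      72, [](uint64_t, WarningHandler) -> uint64_t { return 20; },
      [](const std::string &Msg) { std::cerr << "warning: " << Msg << "\n"; });
}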
David Blaikie via llvm-dev
2020-Jan-28 19:37 UTC
[llvm-dev] DWARF debug line error handling changes
On Tue, Jan 28, 2020 at 1:59 AM James Henderson <jh7370.2008 at my.bristol.ac.uk> wrote:

> [...]
>
> I think the code now uses the error callbacks for all potential length conflicts (or will do after I land some more warnings - I haven't checked the current state). By default, this results in warnings, but clients can do extra things like ignoring the table and continuing from the next one. With my latest changes, the parser continues reading the header until the actual end for the file format, even if that goes beyond the claimed header end, reporting a warning if it does so,

Oh... which change caused that to happen? Because I don't think that should happen. I'd really much prefer that once the length is read and it's checked to be in-bounds of the underlying section, only those bytes within that specified length are read.
I was/am actually looking at tidying up the repeated complications of parsing the DWARF32/DWARF64 headers - and one idea I had was to have the DWARFDataExtractor return a new DWARFDataExtractor that was specifically bounded to the parsed length, for this reason.

> and filling in zeroes if it runs out of data, as per the DataExtractor API. The parser then starts parsing the body from the claimed header end until the unit length is reached. This provides the maximum possible information (useful for dumpers), but still with the ability to stop (useful for cases where correctness is more important).
>
> [...]
>
> I think in some contexts it may be worth only parsing contributions pointed to by CUs, but again, this would be a higher-level decision. Indeed, Sony's linker modifications don't parse .debug_info at all, only .debug_line, so this wouldn't work for us.

*nod* figured that might be the case - good to know!

>> Just rambling at this point.
>>
>> - Dave
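[Editor's note: the bounded-extractor idea David describes can be sketched as follows. BoundedExtractor is a toy class invented for illustration - it is not LLVM's DWARFDataExtractor, and the real DataExtractor semantics differ in detail - but it shows the shape of the interface: validate a length once, then hand out an extractor that cannot see past it.]

#include <algorithm>
#include <cstdint>
#include <cstring>

// Toy extractor, purely to illustrate the bounded-slice idea; not the LLVM
// DWARFDataExtractor API.
class BoundedExtractor {
  const uint8_t *Data;
  uint64_t Size;

public:
  BoundedExtractor(const uint8_t *Data, uint64_t Size)
      : Data(Data), Size(Size) {}

  uint64_t size() const { return Size; }

  // A u32 read that zero-fills past the end, echoing the "return 0 when
  // running off the end" behaviour mentioned in the thread.
  uint32_t getU32(uint64_t &Offset) const {
    uint32_t Result = 0;
    if (Offset + sizeof(Result) <= Size) {
      std::memcpy(&Result, Data + Offset, sizeof(Result));
      Offset += sizeof(Result);
    }
    return Result;
  }

  // The suggestion from the thread: after unit_length (or header_length)
  // has been read and checked against the section, hand out an extractor
  // that can only see that many bytes, so later reads cannot stray into
  // the next contribution.
  BoundedExtractor slice(uint64_t Offset, uint64_t Length) const {
    uint64_t Avail = Offset <= Size ? Size - Offset : 0;
    return BoundedExtractor(Data + std::min(Offset, Size),
                            std::min(Length, Avail));
  }
};

[A parser built on something like this would read unit_length at the section level, call slice(OffsetAfterLength, UnitLength), and pass only the slice to the prologue and body parsing, so the question of how far past the claimed end it may read cannot arise.]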
Pavel Labath via llvm-dev
2020-Jan-29 10:28 UTC
[llvm-dev] DWARF debug line error handling changes
On 28/01/2020 20:37, David Blaikie via llvm-dev wrote:

> and one idea I had was to have the DWARFDataExtractor return a new
> DWARFDataExtractor that was specifically bounded to the parsed length
> for this reason.

I think this would be great. In fact, I was getting ready to propose something like that myself. :) FWIW, lldb's DataExtractors already support this kind of slicing.

pl
James Henderson via llvm-dev
2020-Jan-29 12:34 UTC
[llvm-dev] DWARF debug line error handling changes
On Tue, 28 Jan 2020 at 19:37, David Blaikie <dblaikie at gmail.com> wrote:

> [...]
>
> Oh... which change caused that to happen? Because I don't think that should happen. I'd really much prefer that once the length is read and it's checked to be in-bounds of the underlying section, only those bytes within that specified length are read.
> I was/am actually looking at tidying up the repeated complications of parsing the DWARF32/DWARF64 headers - and one idea I had was to have the DWARFDataExtractor return a new DWARFDataExtractor that was specifically bounded to the parsed length, for this reason.

Actually, I had a slightly faulty memory. To a large extent, the behaviour has for a while been to parse until the end of the header as per the standard format. There is then a check, once parsing is finished, to make sure the offset reached matches the expected end. Recently, I removed[1] a few extra checks in the V5 file/directory table parsing that checked that we hadn't run off the end, as the later check makes them unnecessary. However, those were the only other checks against the prologue length.

I think your suggestion to use a data extractor that only knows about the bytes we expect is a good idea. Combined with the changes that made it return 0 when running off the end, this will fit nicely into the rest of the code without worrying about reading bytes in other tables. The reading would continue as far as needed, the error check would still happen, but no spurious bytes would be read.

[1] https://github.com/llvm/llvm-project/commit/418cd8216b41f4c08e0e1b22feda381d9b2345da
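[Editor's note: the end-of-prologue check James mentions could look roughly like the sketch below. The names are hypothetical stand-ins rather than the real DWARFDebugLine code: with an extractor bounded to the prologue and returning zeroes once it runs out of data, the field-by-field parse simply runs to completion, this one check reports any mismatch, and no bytes outside the prologue are touched.]

#include <cstdint>
#include <functional>
#include <string>

// Hypothetical sketch of a single end-of-prologue check, run after all
// header fields have been read through a prologue-bounded extractor.
bool checkPrologueEnd(uint64_t OffsetReached, uint64_t ClaimedPrologueEnd,
                      const std::function<void(const std::string &)> &Warn) {
  if (OffsetReached == ClaimedPrologueEnd)
    return true;
  if (OffsetReached < ClaimedPrologueEnd)
    Warn("line table prologue is longer than the header fields it contains; "
         "unparsed bytes will be skipped before the line program");
  else
    Warn("header fields run past the claimed end of the line table prologue; "
         "some values may have been read as zero");
  return false; // body parsing still starts at ClaimedPrologueEnd
}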