Eric Christopher via llvm-dev
2015-Aug-31 18:41 UTC
[llvm-dev] RFC: LTO should use -disable-llvm-verifier
On Mon, Aug 31, 2015 at 11:40 AM Duncan P. N. Exon Smith via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> On 2015-Aug-31, at 10:42, Reid Kleckner <rnk at google.com> wrote:
>
>> On Mon, Aug 31, 2015 at 10:12 AM, Rafael Espíndola <llvm-dev at lists.llvm.org> wrote:
>>
>>>> Not sure I follow? Generally LTO inputs are going to be "user provided" (in the sense that they're not produced immediately prior by the same process - or you'd have just produced a single Module in the first place, I would imagine) so changing the default still seems problematic in the sense of allowing 'unbounded' input without verification...
>>>
>>> The common case is for the bitcode to be generated by a paired clang. Even when it is an old bitcode compiled module, the Module itself is created by the bitcode reader.
>>
>> Sure, but it is not uncommon to LTO with old bitcode. We all know it's pretty easy to crash LLVM with bad bitcode or bad IR. These interfaces are not thoroughly tested.
>>
>> I think verifying the result of the bitcode reader by default during LTO is probably the right thing for the foreseeable future. It's the only thing that has any hope of telling the user something useful when things go wrong.
>>
>> I'd like it if we spent a little effort understanding why it's slow before flipping it off. Maybe the verifier is running multiple times, instead of after deserialization. We shouldn't need that in release builds.
>
> LTO runs the verifier three times:
>
> 1. On each input module after it's parsed from bitcode.
> 2. At the beginning of the optimization pipeline (post lib/Linker).
> 3. At the end of the optimization pipeline.
>
> If we're worried about user input, then I agree we should still run it at (1). But I don't think we need to run it at (2) and (3). Maybe we agree on this?

I agree. The last two in debug mode perhaps?

-eric

> Someone asked elsewhere in the thread about numbers: I had a look at a CPU profile (for linking verify-uselistorder with debug info). For this (not-necessarily-representative) sample, (1) takes 1.6% of ld64 runtime, and (2) and (3) combined take 3.2% of ld64 runtime. Total of 4.8%.
Duncan P. N. Exon Smith via llvm-dev
2015-Aug-31 18:44 UTC
[llvm-dev] RFC: LTO should use -disable-llvm-verifier
> On 2015-Aug-31, at 11:41, Eric Christopher <echristo at gmail.com> wrote:
>
> On Mon, Aug 31, 2015 at 11:40 AM Duncan P. N. Exon Smith via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> [...]
>>
>> LTO runs the verifier three times:
>>
>> 1. On each input module after it's parsed from bitcode.
>> 2. At the beginning of the optimization pipeline (post lib/Linker).
>> 3. At the end of the optimization pipeline.
>>
>> If we're worried about user input, then I agree we should still run it at (1). But I don't think we need to run it at (2) and (3). Maybe we agree on this?
>
> I agree. The last two in debug mode perhaps?

I figure it'd still be nice to be able to control it explicitly, so have the clang driver pass -disable-llvm-verifier in no-asserts builds, but *always* verify (1) regardless of that option. IOW, that makes sense to me :).

> -eric
>
>> Someone asked elsewhere in the thread about numbers: I had a look at a CPU profile (for linking verify-uselistorder with debug info). For this (not-necessarily-representative) sample, (1) takes 1.6% of ld64 runtime, and (2) and (3) combined take 3.2% of ld64 runtime. Total of 4.8%.
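[Editorial note: a minimal C++ sketch of the split being proposed above. The helper names (verifyInputModule, populateLTOPasses) and the wiring are hypothetical, not the actual LTOCodeGenerator code: input modules are always verified right after parsing (run (1)), while the verifier passes that bracket the optimization pipeline (runs (2) and (3)) are only added when IR verification is enabled, e.g. when the driver did not pass -disable-llvm-verifier.]

```cpp
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// (1) Always run on each input module right after it is parsed from
// bitcode; this is the check that guards against stale or malformed
// user-provided bitcode. Returns false if the module is broken.
static bool verifyInputModule(const Module &M) {
  return !verifyModule(M, &errs());
}

// (2) and (3): only bracket the LTO optimization pipeline with verifier
// passes when VerifyIR is set (asserts builds, or an explicit opt-in).
static void populateLTOPasses(legacy::PassManager &PM, bool VerifyIR) {
  if (VerifyIR)
    PM.add(createVerifierPass()); // (2) start of pipeline, post lib/Linker
  // ... the usual LTO optimization passes would be added here ...
  if (VerifyIR)
    PM.add(createVerifierPass()); // (3) end of pipeline
}
```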
David Blaikie via llvm-dev
2015-Aug-31 18:49 UTC
[llvm-dev] RFC: LTO should use -disable-llvm-verifier
On Mon, Aug 31, 2015 at 11:44 AM, Duncan P. N. Exon Smith via llvm-dev <llvm-dev at lists.llvm.org> wrote:

>> On 2015-Aug-31, at 11:41, Eric Christopher <echristo at gmail.com> wrote:
>>
>> [...]
>>
>>> LTO runs the verifier three times:
>>>
>>> 1. On each input module after it's parsed from bitcode.
>>> 2. At the beginning of the optimization pipeline (post lib/Linker).
>>> 3. At the end of the optimization pipeline.
>>>
>>> If we're worried about user input, then I agree we should still run it at (1). But I don't think we need to run it at (2) and (3). Maybe we agree on this?
>>
>> I agree. The last two in debug mode perhaps?
>
> I figure it'd still be nice to be able to control it explicitly, so have the clang driver pass -disable-llvm-verifier in no-asserts builds, but *always* verify (1) regardless of that option. IOW, that makes sense to me :).

Sure, something like that could be reasonable. Essentially, anything that parallels clang's behavior when given bitcode/IR as direct input is, I think, the right model. They should be consistent, and perhaps there's stuff we need to discuss about exactly what the consistent behavior should be, but something like "sanitize inputs, assume intermediate results are valid (if they aren't, it's a bug), and verify intermediate results in +Asserts builds", maybe... *shrug*
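[Editorial note: a sketch of the "verify intermediate results only in +Asserts builds" half of that model, again with a hypothetical hook name (checkMergedModule) rather than real LLVM code. After lib/Linker has merged the inputs, a broken module points at a bug in LLVM itself rather than bad user input, so the check is compiled in only for asserts-enabled builds.]

```cpp
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Hypothetical hook run on the merged module, after lib/Linker and between
// optimization passes: intermediate IR is only re-verified in +Asserts
// builds, where a failure indicates an LLVM bug rather than bad input.
static void checkMergedModule(const Module &M) {
#ifndef NDEBUG
  if (verifyModule(M, &errs()))
    report_fatal_error("IR linking/optimization produced an invalid module");
#else
  (void)M; // Release builds trust intermediate results.
#endif
}
```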