On Wed, Feb 1, 2017 at 9:19 AM, Michael Kruse <llvmdev at meinersbur.de> wrote:
> 2017-02-01 18:07 GMT+01:00 Kostya Serebryany <kcc at google.com>:
> > Yes, I used to run clang-fuzzer and clang-format-fuzzer on this bot, but not any more.
> > The reason is simple -- the bot was always red (well, orange) and the bugs were never fixed.
> >
> > Currently we run clang-fuzzer (but not clang-format-fuzzer) on our internal fuzzing infra,
> > and Richard has fixed at least one bug found this way:
> > http://llvm.org/viewvc/llvm-project?view=revision&revision=291030
> >
> > My llvm fuzzing bot was pretty naive and simple.
> > If we want proper continuous fuzzing for parts of LLVM, we either need to build a separate "real" continuous fuzzing process, or use an existing one. Luckily, there is one :)
> > As a pilot I've recently added the cxa_demangler_fuzzer to OSS-Fuzz:
> > https://github.com/google/oss-fuzz/tree/master/projects/llvm_libcxxabi
> > It even found one bug, which Mehdi already fixed:
> > http://llvm.org/viewvc/llvm-project?view=revision&revision=293330
> > The bug report itself will become public in ~4 days:
> > https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=370
>
> Thanks for the explanation.
>
> >> > Another (obvious?) fuzzing candidate would be LLVM's bitcode
> >> > reader. I ran afl-fuzz on it and it found lots of failed assertions
> >> > within seconds. Isn't fuzzing done on a regular basis, as [1] suggests
> >> > should be done? Should I report the crashes found by it?
> >>
> >> The bitcode reader is known to not be robust against malformed inputs.
> >
> > Yes, I'm afraid the bitcode reader (like some other parts of LLVM) is not robust enough to withstand fuzzing. :(
> > Note that if we want to use libFuzzer (which is an in-process fuzzer), the target should not assert/abort/exit on any input (if it's not a bug).
>
> Is there any incentive to change that?

Not that I know of.

> A google Summer of Code project maybe?

Maybe. The bottleneck is not bug finding, but bug fixing, which sometimes may require large changes. And doing code review for such changes might be more work than just making them.

--kcc
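[Editor's note: for readers unfamiliar with the in-process model mentioned above, a libFuzzer target is a single entry point that must return normally on every input. A minimal sketch of a demangler target in the spirit of cxa_demangler_fuzzer (an illustration, not the actual OSS-Fuzz source) might look like:]

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <string>
#include <cxxabi.h>

// libFuzzer calls this function repeatedly, in-process, with mutated
// inputs. It must not assert/abort/exit on any input -- only a real
// bug (e.g. a memory error caught by a sanitizer) should stop the run.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  // __cxa_demangle expects a NUL-terminated string, so copy the input.
  std::string Mangled(reinterpret_cast<const char *>(Data), Size);
  int Status = 0;
  char *Demangled =
      abi::__cxa_demangle(Mangled.c_str(), nullptr, nullptr, &Status);
  std::free(Demangled); // free(nullptr) is fine when demangling failed
  return 0;
}
```

Such a file is typically built with `clang++ -fsanitize=fuzzer,address` and run directly; OSS-Fuzz adds the build scripting and corpus management around it.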
> On Feb 1, 2017, at 9:27 AM, Kostya Serebryany <kcc at google.com> wrote:
>
> [...]
>
>> A google Summer of Code project maybe?
>
> Maybe.
> The bottleneck is not bug finding, but bug fixing, which sometimes may require large changes.
> And doing code review for such changes might be more work than just making them.

For the bitcode, for example, I wouldn't expect large changes that would be complicated to review. However, these are still tedious bugs to fix.

About a GSoC, my own personal opinion is that we should try to give interesting / fun projects to students, and not use them as cheap labor to fix the small bugs and issues we're not able to prioritize ourselves.

My 2 cents :)

— Mehdi
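[Editor's note: the fixes Mehdi describes are typically mechanical: replacing an assertion on attacker-controlled input with a recoverable error. A hypothetical sketch of the pattern on a toy bitcode-like header (illustrative names, not the real BitcodeReader code):]

```cpp
#include <cstdint>
#include <cstring>
#include <system_error>
#include <vector>

// Toy check of the LLVM bitcode magic bytes 'B','C',0xC0,0xDE.
// Before a fuzzing-robustness fix, code like this often read:
//   assert(Buf.size() >= 4 && "truncated file");
// which a fuzzer trips immediately. The fuzz-clean version returns
// a malformed-input error instead of asserting.
std::error_code parseMagic(const std::vector<uint8_t> &Buf) {
  static const uint8_t Magic[4] = {'B', 'C', 0xC0, 0xDE};
  if (Buf.size() < 4 || std::memcmp(Buf.data(), Magic, 4) != 0)
    return std::make_error_code(std::errc::illegal_byte_sequence);
  return std::error_code(); // success
}
```

Each individual fix is small like this one; the tedium is that there are many assertion sites, and each needs an error path threaded out to the caller.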
On Wed, Feb 1, 2017 at 9:50 AM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]
>
> About a GSOC, my own personal opinion is that we should try to give
> interesting / fun projects to students and not use them as cheap labor to
> fix the small bugs and issues we're not able to prioritize ourselves.

I got started on LLVM in college working on "small bugs and issues we're not able to prioritize ourselves" (e.g. refactoring TableGen). Yes, it's not flashy, but people in the community do appreciate it.

Also, IMO most of the "hard part" of learning to work on LLVM for real (and OSS in general) is learning the development workflow, interacting with the community, etc., and for that, small fixes are actually just as good (if not better) preparation as working on some flashy thing. My experience is that most of the work (in terms of time) to be done on real software projects is bug fixes and small issues (i.e. maintenance), so it's good to be comfortable doing that and treating it as a normal thing rather than a "chore"; and you can only get that kind of experience working on a real project that has maintenance to do :)

-- Sean Silva
2017-02-01 18:50 GMT+01:00 Mehdi Amini <mehdi.amini at apple.com>:
> About a GSOC, my own personal opinion is that we should try to give
> interesting / fun projects to students and not use them as cheap labor to
> fix the small bugs and issues we're not able to prioritize ourselves.

I'd see this from a different perspective. GSoC says its focus is "bringing more student developers into open source software development" [1]. For such a goal, maintenance work is more purposeful than an interesting side project whose future is uncertain after the program ends. At least more interaction with the community and the code base is assured. Moreover, the student gets paid and is free not to apply for this kind of work; a luxury we employees usually do not have.

Michael

[1] https://summerofcode.withgoogle.com/