Folks,

I was looking at LCOV (http://llvm.org/reports/coverage/) and it's
nice and all, but it doesn't say which commit the report was generated
from, nor what changed between two commits. We could then have a
report like that for every buildbot (check-all, test-suite, etc.) for
the patches specific to the build, per architecture. How easy would it
be to do that for any given buildbot?

Another potential project would be to take a specific architecture, go
patch by patch, and check how many of the *changed* lines are touched
by the current tests, including the ones added, for say check-all.
Since we hope to have good coverage on check-all, this should be a
good indication of how well tested each patch is, and could give us an
*additional* measure of quality.

Would anyone be interested in taking on those projects? Shall I add them
to the list of ideas in http://llvm.org/OpenProjects.html?

cheers,
--renato
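(For illustration, a minimal sketch of what the commit-to-commit
comparison could look like. It assumes two lcov .info tracefiles
captured at the two commits; the script itself is hypothetical, not an
existing tool.)

#!/usr/bin/env python
# Sketch: compare line coverage between two lcov .info tracefiles
# (e.g. captured at two commits) and print per-file deltas.
# Relies only on the standard tracefile records:
# SF:<file>, DA:<line>,<hits>, end_of_record.

import sys
from collections import defaultdict

def parse_info(path):
    """Return {source_file: {line: hit_count}} from an lcov .info tracefile."""
    cov = defaultdict(dict)
    current = None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith('SF:'):
                current = line[3:]
            elif line.startswith('DA:') and current:
                lineno, hits = line[3:].split(',')[:2]
                cov[current][int(lineno)] = int(hits)
            elif line == 'end_of_record':
                current = None
    return cov

def percent(lines):
    hit = sum(1 for h in lines.values() if h > 0)
    return 100.0 * hit / len(lines) if lines else 0.0

def main(old_info, new_info):
    old, new = parse_info(old_info), parse_info(new_info)
    for path in sorted(set(old) | set(new)):
        before = percent(old.get(path, {}))
        after = percent(new.get(path, {}))
        if abs(after - before) > 0.01:
            print('%+6.2f%%  %s (%.2f%% -> %.2f%%)' % (after - before, path,
                                                       before, after))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])

Something like "compare_coverage.py old.info new.info" would print the
per-file coverage delta; per-buildbot storage and HTML output would sit
on top of the same data.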
On Wed, May 6, 2015 at 8:28 AM, Renato Golin <renato.golin at linaro.org> wrote:
> Another potential project would be to take a specific architecture, go
> patch by patch, and check how many of the *changed* lines are touched
> by the current tests, including the ones added, for say check-all.
> Since we hope to have good coverage on check-all, this should be a
> good indication of how well tested each patch is, and could give us an
> *additional* measure of quality.

I'd love to have this. It's tiresome to manually look at patches/tests
to see whether the error cases have been exercised, etc.

(Of course, all coverage-based test quality assessment falls into the
trap of "exercised but not verified", which is why I'd also love the
mutation testing support that's been discussed on-list recently -
possibly with the domain restricted to the changed/added lines in the
patch for a fast pass, then a longer-running pass that might catch
knock-on effects elsewhere in the code.)

This might also wrap back around to the idea of running all the
target-independent regression tests against all compiled targets.
(Currently we run them against the host target, but there's no reason
we can't run them on another host against any target we've built
support for.)

> Would anyone be interested in taking on those projects? Shall I add them
> to the list of ideas in http://llvm.org/OpenProjects.html?

Seems reasonable.
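(A similarly rough sketch of the per-patch check: intersect the lines a
patch adds or changes, taken from git diff -U0, with the lines a
coverage-enabled check-all run reports as hit. The revision range,
tracefile name, and path matching are assumptions for illustration only.)

#!/usr/bin/env python
# Sketch: report how many of a patch's added/changed lines are exercised
# by the tests, given a git revision range and an lcov .info tracefile.

import re
import subprocess
import sys
from collections import defaultdict

def changed_lines(rev_range):
    """Return {file: set(line numbers added or modified)} from git diff -U0."""
    diff = subprocess.check_output(['git', 'diff', '-U0', rev_range]).decode()
    changes = defaultdict(set)
    current = None
    for line in diff.splitlines():
        if line.startswith('+++ b/'):
            current = line[len('+++ b/'):]
        elif line.startswith('+++ '):
            current = None  # deleted file: no new-side lines to check
        elif line.startswith('@@') and current:
            # Hunk header, e.g. "@@ -12,0 +13,4 @@": take the new-side range.
            m = re.search(r'\+(\d+)(?:,(\d+))?', line)
            start = int(m.group(1))
            count = int(m.group(2)) if m.group(2) else 1
            changes[current].update(range(start, start + count))
    return changes

def covered_lines(info_path):
    """Return {file: set(lines with a hit count > 0)} from an lcov tracefile."""
    covered = defaultdict(set)
    current = None
    with open(info_path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith('SF:'):
                current = line[3:]
            elif line.startswith('DA:') and current:
                lineno, hits = line[3:].split(',')[:2]
                if int(hits) > 0:
                    covered[current].add(int(lineno))
            elif line == 'end_of_record':
                current = None
    return covered

def main(rev_range, info_path):
    changed = changed_lines(rev_range)
    covered = covered_lines(info_path)
    total = tested = 0
    for path, lines in sorted(changed.items()):
        # lcov stores absolute paths, so match by suffix (crude, but enough here).
        hits = next((cov for src, cov in covered.items() if src.endswith(path)),
                    set())
        exercised = lines & hits
        total += len(lines)
        tested += len(exercised)
        print('%s: %d/%d changed lines exercised' % (path, len(exercised),
                                                     len(lines)))
    if total:
        print('patch total: %d/%d (%.1f%%)' % (tested, total,
                                               100.0 * tested / total))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])

Run as e.g. "patch_coverage.py HEAD~1..HEAD coverage.info"; a mutation
pass restricted to the same changed-line set could reuse the
changed_lines() part directly.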
I could not easily locate this on http://llvm.org/reports/coverage/,
so asking here: what workload is the coverage computed over? In other
words, what does the bot run to get this coverage information?

-- Sanjoy

On Wed, May 6, 2015 at 10:17 AM, David Blaikie <dblaikie at gmail.com> wrote:
> On Wed, May 6, 2015 at 8:28 AM, Renato Golin <renato.golin at linaro.org>
> wrote:
>> I was looking at LCOV (http://llvm.org/reports/coverage/) and it's
>> nice and all, but it doesn't say which commit the report was generated
>> from, nor what changed between two commits.
On 6 May 2015 at 18:17, David Blaikie <dblaikie at gmail.com> wrote:
> On Wed, May 6, 2015 at 8:28 AM, Renato Golin <renato.golin at linaro.org> wrote:
>> Would anyone be interested in taking on those projects? Shall I add them
>> to the list of ideas in http://llvm.org/OpenProjects.html?
>
> Seems reasonable.

http://llvm.org/OpenProjects.html#coverage

cheers,
--renato