> Tanya M. Lattner wrote:
>
>> How large of a change have you made? With 3 days before the branch
>> creation, I strongly advise people not to be checking in major changes.
>
> Depends how you look at it. Structurally, it separates two files into
> four and moves some functionality from one class to a new class, so in a
> sense that's a big change. Code-logic-wise, it does nothing at all. I
> will send the patch to the commits list today. Hopefully someone can
> look at it and decide whether to apply it.
>
>> We may need to change our procedures for releases in the future.
>> This is how we have done it in the past with no problem, but LLVM is
>> growing much more rapidly now.
>
> In my experience, a code freeze lasts for a fair amount of time (on the
> order of months). The way I've seen it done in many projects is that
> there's a deadline to get all new feature work in (with more than a
> week's notice!). Then the new branch is created. For the next two or
> three months, only bugfixes are allowed on the release branch. Some
> projects close the development branch to force bugs to be fixed first,
> while others run two branches in parallel. I would lean toward the
> latter and trust people to be responsible enough to fix bugs before
> release.
>
> The release is done when there are no new regressions and all tests
> created for new features pass. Of course, this means that folks
> should be creating tests for their features.
>
> Do we want some kind of discussion about what this process should be,
> followed by a formal proposal drafted by a few people for comment and
> possible adoption?

It would be good to have a mailing list for test results where 'make
check' results could be posted, so that there is some reference and
people could avoid repeating builds.

Aaron
> On Sat, 5 May 2007, Aaron Gray wrote:
>> It would be good to have a mailing list for test results where 'make
>> check' results could be posted, so that there is some reference and
>> people could avoid repeating builds.
>
> llvm-testresults :)

Great, feeling silly, I'll sign on to that then :)

Aaron
Aaron Gray wrote:
> It would be good to have a mailing list for test results where 'make check'
> results could be posted, so that there is some reference and people could
> avoid repeating builds.

There's the llvm-testresults list, but I find it less than fully useful
because it's not immediately obvious from scanning message subjects
whether there's been a test failure. It's a lot of messages to wade
through and read to get this information.

What about a Tinderbox-like setup where we could consult a web page to
see the current status of the repository? Boost has a nice setup:

http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/index_release.html

It's probably more complex than what we need. Maybe we just need a page
grouping each test under its suite and marking the result on each
architecture. Something like this:

<begin crappy ASCII art>
---------------------------------------------------------------------------
| LLVM      | Arch                | i686-pc-linux-gnu | darwin- | osx..
---------------------------------------------------------------------------
| Suite     | Test                | Witty note to brighten developer day
---------------------------------------------------------------------------
| CFrontend | 2002-05-24-Alloca.c | PASS  | FAIL | XFAIL |
|           | ...                 |       |      |       |
---------------------------------------------------------------------------
| CBackend  | ...                 |       |      |       |
|           | ...                 |       |      |       |
</begin>

FAILs would of course be marked in something pleasant like fluorescent
bright-magenta.

Just an idea. Wow, I already wasted more of a Saturday on that than I
should have. :-/

-Dave
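(A rough sketch of how a grid like the one above could be assembled from
existing 'make check' output. It assumes DejaGNU-style result lines such
as "PASS: .../CFrontend/2002-05-24-Alloca.c" and one log file per target
triple, named after that triple; the script name, the suite-from-directory
heuristic, and the plain-text layout are illustrative assumptions, not
part of any existing LLVM tooling.)

    #!/usr/bin/env python
    # check_table.py (hypothetical): fold several 'make check' logs into
    # one per-suite table, one column per target triple.
    import collections
    import os
    import re
    import sys

    RESULT_RE = re.compile(r'^(PASS|FAIL|XFAIL|XPASS): (.*)$')

    def parse_log(path):
        """Yield (suite, test, result) tuples from one DejaGNU-style log."""
        with open(path) as log:
            for line in log:
                match = RESULT_RE.match(line)
                if not match:
                    continue
                result, name = match.groups()
                # Assumption: the test's parent directory names its suite.
                suite = os.path.basename(os.path.dirname(name)) or 'unknown'
                yield suite, os.path.basename(name), result

    def main(log_paths):
        # results[suite][test][arch] -> 'PASS' / 'FAIL' / 'XFAIL' / ...
        results = collections.defaultdict(
            lambda: collections.defaultdict(dict))
        arches = []
        for path in log_paths:
            arch = os.path.splitext(os.path.basename(path))[0]
            arches.append(arch)
            for suite, test, result in parse_log(path):
                results[suite][test][arch] = result

        header = ('%-12s %-28s ' % ('Suite', 'Test')
                  + ' '.join('%-18s' % a for a in arches))
        print(header)
        print('-' * len(header))
        for suite in sorted(results):
            for test in sorted(results[suite]):
                row = results[suite][test]
                cells = ' '.join('%-18s' % row.get(a, '-') for a in arches)
                print('%-12s %-28s %s' % (suite, test, cells))

    if __name__ == '__main__':
        main(sys.argv[1:])

Feeding it one log per host, e.g.
"python check_table.py i686-pc-linux-gnu.log powerpc-apple-darwin8.log"
(file names assumed), would print a crude text version of the grid; a CGI
wrapper could turn the same data into the colored web page Dave describes.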
On Sat, 5 May 2007, Aaron Gray wrote:
> It would be good to have a mailing list for test results where 'make check'
> results could be posted, so that there is some reference and people could
> avoid repeating builds.

llvm-testresults :)

-Chris

--
http://nondot.org/sabre/
http://llvm.org/
On Sat, 5 May 2007, David Greene wrote:
> There's the llvm-testresults list, but I find it less than fully useful
> because it's not immediately obvious from scanning message subjects
> whether there's been a test failure. It's a lot of messages to wade
> through and read to get this information.

Right.

> What about a Tinderbox-like setup where we could consult a web page to
> see the current status of the repository? Boost has a nice setup:
>
> http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/index_release.html
>
> It's probably more complex than what we need. Maybe we just need a
> page grouping each test under its suite and marking the result on
> each architecture. Something like this:

I think that this is a great idea. However, instead of standing up yet
another piece of testing infrastructure, I think we should make what we
already have (the nightly testers) better. In particular, the main page
of the tester:

http://llvm.org/nightlytest/

already captures a lot of this: it tells you the number of unexpected
failures, whether or not the build succeeded, etc. You can even drill
down to a specific machine, e.g.:

http://llvm.org/nightlytest/machine.php?machine=120

I see several problems with this, all of which are solvable:

1. The tester script doesn't update and rebuild the CFE, so often you
   get failures due to an out-of-date CFE.
2. There is no way to group by architecture, target triple, etc.
3. Minor stupid stuff: on the per-machine page you can click on the
   test-failures number and go to the list of unexpected failures, but
   you can't do that on the main page.

-Chris

--
http://nondot.org/sabre/
http://llvm.org/
>> It would be good to have a mailing list for test results where 'make check'
>> results could be posted, so that there is some reference and people could
>> avoid repeating builds.
>
> There's the llvm-testresults list, but I find it less than fully useful
> because it's not immediately obvious from scanning message subjects
> whether there's been a test failure. It's a lot of messages to wade
> through and read to get this information.
>
> What about a Tinderbox-like setup where we could consult a web page to
> see the current status of the repository? Boost has a nice setup:
>
> http://engineering.meta-comm.com/boost-regression/CVS-RC_1_34_0/developer/index_release.html
>
> It's probably more complex than what we need. Maybe we just need a
> page grouping each test under its suite and marking the result on
> each architecture. Something like this:

I agree the test results are not displayed well. It's not well known,
but I have been redesigning our databases and scripts to enable better
display of the data. Right now it's not very easy to do what you
describe.

-Tanya