On Tue, Feb 9, 2016 at 10:30 PM Chris Lattner <clattner at apple.com> wrote:

> > On Feb 9, 2016, at 10:24 PM, Chandler Carruth via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> >
> > You've raised an important point here Pete, and while I disagree pretty strongly with it (regardless of whether Lanai makes sense or not), I'm glad that you've surfaced it where we can clearly look at the issue.
> >
> > The idea of "it really should have users outside of just the people who have access to the HW" I think is deeply problematic for the project as a whole. Where does it stop?
> >
> > While I may have the theoretical ability to get access to an AVR, Hexagon, MSP430, SystemZ, or XCore processor... it is a practical impossibility. There is no way that I, or I suspect 95% of LLVM contributors, will be able to run code for all these platforms. And for some of them, I suspect it is already the case that their only users have access to specialized, quite hard to acquire hardware (both Hexagon[1] and SystemZ come to mind).
>
> Yes, I think this is a reasonable point. The cheapest SystemZ system is somewhere around $75K, so widespread availability isn't really a relevant criterion for accepting that.
>
> Given that, I personally have no objection to accepting the port as an experimental backend. For it to be a non-experimental backend, I think it needs to have a buildbot running execution tests for the target. This can either be a simulator, or Google could host a buildbot on the hardware they presumably have and make the results public.

So, while I'm personally happy with this answer, I'm curious: why do you think this is important to have? I can imagine several reasons myself, in the order they floated to mind:

1) Makes sure it isn't vaporware and actually works, etc.
2) Ensures a fairly easy way to tell if the maintainers have gone out to lunch - no buildbot stays working without some attention.
3) Makes it clearer whether a change to LLVM breaks that target.

Are there other issues you're thinking about?

Looking just at these three, they all seem like reasonable goals. I can see other ways of accomplishing #1 and #2 (there are lots of ways to establish community trust), but cannot see any other way of accomplishing #3.

But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out-of-tree.

I prefer this option for perhaps a strange reason: as an LLVM developer, I would find it *more* appealing to only ever have to care about <insert-random-target> once the maintainer produced a reduced test case for me. Having a buildbot check the execution and fail is actually not terribly helpful to me in most cases. Sometimes I'm already suspicious of a patch and it helpfully confirms my suspicions, but most of the time I'm going to need a test case to do anything useful with the failure, and I won't have the tools necessary to produce that test case merely because there is a buildbot.

Anyways, as I said, I think this is somewhat theoretical at this point, but it seems useful to pin down both what our expectations are and *why*.

-Chandler
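P.S. To be concrete about what a "reduced test case" means here: a target regression test is just LLVM IR fed through llc and checked with FileCheck. A minimal sketch - the -march value and the checked mnemonic are hypothetical, since I'm not assuming anything about Lanai's actual instruction names:

    ; RUN: llc -march=lanai < %s | FileCheck %s

    ; A trivial codegen test: verify that a plain 32-bit add
    ; selects to some add instruction on the target.
    define i32 @sum(i32 %a, i32 %b) {
    ; CHECK-LABEL: sum:
    ; CHECK: add
      %r = add i32 %a, %b
      ret i32 %r
    }

Anything that can't be reduced to this form is exactly the part that would stay out-of-tree under this option.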
On 10 February 2016 at 06:44, Chandler Carruth via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out-of-tree.

IMHO, it's about transparency and commitment.

For experimental back-ends (like BPF), what you describe is perfectly fine. If we wanted BPF to be official, I'd personally only accept it if there was at least one buildbot with a minimal domain-specific set of tests. In the BPF case, I'd expect a Linux kernel booting and running some arbitrary code and producing a certain result. For something closer to real hardware, like your back-end, I'd expect some generic code to be compiled and run successfully, strongly biased towards getting the test-suite running on it.

If we make this official (and I agree that, for now, this is a highly theoretical point), it means people will start to use it and report bugs, and not all bugs can be reduced to a FileCheck test. If, OTOH, Google is to be the only user of this back-end *ever*, then I don't see *any* reason to move it in-tree other than "saving Google some time", which is not very sporting. There is a big difference, in my view, between a "technology preview" and a "cost-cutting decision".

Most companies contributing to LLVM wouldn't care, and they could even contribute their own secret back-ends to LLVM as a result, but my personal and very strong opinion is that this would ultimately be very bad for the community at large. LLVM would become a graveyard of odd back-ends that don't relate to each other, that can't be proven right or wrong, that can't be tested or probed, and we'd be at the mercy of all the companies supporting them to "fix them as soon as they're able". A community that relies on the benevolence of companies is *not* a healthy community. Not to name names, but I'm sure you can think of a few related to OO languages and databases.

So, my view of official back-ends is that:

* It's available to the general public and is of interest to more than the company maintaining it
* It has a clear way to pass/fail and add regression tests, to make sure we don't break it again
* It allows the wider community to work on it, i.e. it's not a selfish cost-cutting decision
* It has *some* representation outside the company that supports it

Experimental back-ends, as Chris said, have a much lower threshold. I'm surprised we've kept the CppBackend around for this long.

cheers,
--renato
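P.S. It's also worth remembering that experimental back-ends aren't even compiled unless you opt in at configure time, which is part of why the bar is lower. A sketch, assuming the target is registered under the name "Lanai":

    # Experimental targets are off by default; opt in explicitly.
    cmake -G Ninja ../llvm \
      -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD=Lanai
    ninja && ninja check-llvm

So nobody pays for it by default: only the people who flip the flag ever build or test it.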
On Wed, Feb 10, 2016 at 3:40 AM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> On 10 February 2016 at 06:44, Chandler Carruth via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> > But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out-of-tree.
>
> IMHO, it's about transparency and commitment.
>
> For experimental back-ends (like BPF), what you describe is perfectly fine. If we wanted BPF to be official, I'd personally only accept it if there was at least one buildbot with a minimal domain-specific set of tests.

Why would that be a requirement for you?

> In the BPF case, I'd expect a Linux kernel booting and running some arbitrary code and producing a certain result. For something closer to real hardware, like your back-end, I'd expect some generic code to be compiled and run successfully, strongly biased towards getting the test-suite running on it.
>
> If we make this official (and I agree that, for now, this is a highly theoretical point), it means people will start to use it and report bugs, and not all bugs can be reduced to a FileCheck test. If, OTOH, Google is to be the only user of this back-end *ever*, then I don't see *any* reason to move it in-tree other than "saving Google some time", which is not very sporting.

I think Chris Lattner enumerated some valuable points regarding the benefit of having a backend in-tree even if it's never intended for use outside a select group:

"1) I imagine that there is a big win for you, not having to merge with mainline. Maintaining an out-of-tree backend is a pain :-)

2) For the community, this is probably a net loss since changes to common codegen code would be required to update your backend, but no one else in the community would benefit from the target being in mainline.

3) There is probably a small but non-zero benefit to keeping your team working directly on mainline, since you're more likely to do ancillary work in ToT. If your development is in mainline, this work is most likely to go into llvm.org instead of into your local branch.

4) There could be an educational benefit of having the backend, particularly if it has unique challenges to overcome."

> There is a big difference, in my view, between a "technology preview" and a "cost-cutting decision".

Cutting costs isn't inherently bad - we all strive for efficiency. The question is whether it's the wrong tradeoff (if the wrong groups end up paying for it, or the overall cost is higher, etc.).

> Most companies contributing to LLVM wouldn't care, and they could even contribute their own secret back-ends to LLVM as a result, but my personal and very strong opinion is that this would ultimately be very bad for the community at large. LLVM would become a graveyard

Graveyard sounds like abandoned backends - I think we're all agreed that if a backend (public or private) ever became abandoned, it would be swiftly removed.

> of odd back-ends that don't relate to each other,

I'm not sure I follow this - what do you mean by "don't relate to each other"? Most of our backends don't relate to each other, do they?

> that can't be proven right or wrong, that can't be tested or probed, and we'd be at the mercy of all the companies supporting them to "fix them as soon as they're able".

Why is that a mercy, though? At any point, the backend works exactly as well as the group that cares about it is willing to make it work. The rest of the community need not care (unless it's abandoned - at which point it should be removed).
I’m responding to several points below, and I want to make one thing perfectly clear: given that Google has a great track record of contributing to LLVM, I am not particularly worried about this specific case. I’m only worried about this scenario because it will set a precedent for future backend submissions.

If we accept Lanai, and someone comes up with an analogous situation, we won’t be able to say: “well yes, I agree this is exactly the same situation as Lanai was, but you don’t have a track record of contribution like Google’s, so no, we won’t accept your backend”. That just won’t fly. Because of that, I think we *have* to ignore the track record and (surely) good intentions of the organization contributing the code, and look at this from first principles.

On Feb 9, 2016, at 10:44 PM, Chandler Carruth <chandlerc at google.com> wrote:

> > Given that, I personally have no objection to accepting the port as an experimental backend. For it to be a non-experimental backend, I think it needs to have a buildbot running execution tests for the target. This can either be a simulator, or Google could host a buildbot on the hardware they presumably have and make the results public.
>
> So, while I'm personally happy with this answer, I'm curious: why do you think this is important to have?

My goal is specifically to add a burden to the people maintaining the port. If there is no burden, then it becomes too easy for the contributor to drop the port in and then forget about it. We end up carrying it around for years, and because people keep updating the (non-execution) tests and the source code for the port, we’ll never know that it is broken in reality. Execution tests are the only real integration tests we have.

> 2) Ensures a fairly easy way to tell if the maintainers have gone out to lunch - no buildbot stays working without some attention.

Yes, this is my concern.

> But I also see another option, which someone else mentioned up-thread: simply make only the regression tests be supported. Without a regression test case that exhibits a bug, no reverts or other complaints. It would be entirely up to the maintainer to find and reduce such test cases from any failure of execution tests out-of-tree.

This is problematic for two reasons:

1) There is a huge difference between “having tests” and “having good tests that don’t break all the time”.

2) This doesn’t help with port abandonment. Historically, we have only successfully removed ports because “they don’t actually work”. AFAIK, we have never removed a working port that “might work well enough for some use case” just because the contributor hasn’t been heard from. The risk here is much higher with a proprietary target like this, because we have no (independent) way to measure whether the code is working in practice.

Maybe I just carry too many scars here. Remember that I was the guy who got stuck with rewriting the “new” SPARC port from scratch, because otherwise we “couldn’t” remove the (barely working) “SparcV9” backend, which was dependent on a bunch of obsolete infrastructure. I didn’t even have a SPARC machine to test on, so I had to SSH into remote machines I got access to.

-Chris
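P.S. For anyone who hasn’t run them: by “execution tests” I mean the test-suite, which compiles whole programs with the just-built compiler, runs them, and compares their output against reference results - typically driven through LNT, roughly like this (all paths are placeholders):

    # Build and run the LLVM test-suite in a fresh sandbox,
    # using the just-built clang as the compiler under test.
    lnt runtest nt \
      --sandbox /tmp/lanai-sandbox \
      --cc /path/to/built/clang \
      --test-suite /path/to/llvm-test-suite

A FileCheck test only proves we emitted the assembly we expected; an execution test is the only thing that proves the assembly actually computes the right answers.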