Folks,

In recent discussions about accepting the Lanai back-end as official (built by default), there was the idea that we should have some official guidelines, if not rules, that all targets must (should?) follow when requesting to be added upstream and, later, to be officially supported.

Here's a list of what more or less converged in the thread, with my personal bias.

Must-haves to be accepted upstream as experimental:

1. There must be an active community behind the target. This community will be the maintainers of the target: providing buildbots, fixing bugs, answering the LLVM community's questions, and making sure the new target doesn't break any of the other targets or generic code.

2. The code must be free of contentious issues, for example large changes in how the IR behaves or should be formed by the front-ends, unless agreed by the majority of the community via refactoring of the language reference (LangRef) *before* the merge of the new target's changes, following the IR backwards compatibility described in the developer policy document.

3. The code has compatible license, patent, and copyright statements, or can be converted to LLVM's own model.

Where applicable:

4. The target should have either reasonable documentation on how it works (ISA, ABI, etc.) or a publicly available simulator/hardware (either free or cheap enough), so that developers can validate assumptions, understand constraints, and review code that can affect the target. Preferably both.

To be moved to an official target:

5. The target must have been in the tree for at least 6 months, with active contributions including: adding more tests conforming to the documents, fixing bugs reported by unrelated/generic changes, and providing support to other members of the community.

6. The target's code must have been adapted to the developer policy as well as the coding standards. This can take a while, and it should be fine to accept external code into LLVM as experimental, but not officially.

A soft(-er?) requirement:

7. The test coverage is broad and well written (small tests, documented). Where applicable, both ``check-all`` and the ``test-suite`` must pass in at least one configuration (publicly demonstrated, e.g. via buildbots).

My takes on those:

I don't think we should waive 1, 2 and 3 for any reason, or we'll be shooting ourselves in the foot.

4 is only applicable to actually executing targets, and may clash with existing targets like GPUs, BPF, etc. While I'd *really* like to have some execution model for GPUs, I'm not prepared to recommend jettisoning their back-ends without it. :)

5 states "6 months" as a ball-park. I'm not too fixed on that, and I welcome ideas if anyone feels strongly about the time frame. I think it needs to give us enough time to assess the *other* points (e.g. community, tests, validation, response).

6 is a strong factor, and I feel very strongly about it. We have had projects merge before with other code styles and policies, but as an "LLVM developer" I feel I should be able to contribute to any "officially branded LLVM project". So, if those targets want to be part of *our* community, they have to abide by the same rules.

7 is hard to gauge, and I'm not sure how much we can really enforce. Maybe demanding at least one public buildbot? What if the target can't execute code, or can't even run clang? Wouldn't the target's tests on other bots be enough?

Opinions?

cheers,
--renato
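For context on what "experimental" means mechanically: experimental targets are not built by default and must be enabled explicitly at configure time. A minimal sketch with the current CMake build, using Lanai as this thread's example (LLVM_EXPERIMENTAL_TARGETS_TO_BUILD is the real variable; the other flags are illustrative):

    # Enable an experimental back-end at configure time; experimental
    # targets are excluded from the default LLVM_TARGETS_TO_BUILD list.
    cmake -G Ninja ../llvm \
        -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD=Lanai

    # Build, then run the regression tests mentioned in item 7.
    ninja
    ninja check-all

A buildbot publicly demonstrating item 7 would essentially be running this configuration on every commit.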
+1, some sort of defined process is definitely needed. In particular I think there should be a guideline for when the backend has been reviewed "enough" before it can be committed. -Matt
> On Jul 25, 2016, at 4:18 PM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> 6. The target's code must have been adapted to the developer policy as well as the coding standards. This can take a while, and it should be fine to accept external code into LLVM as experimental, but not officially.

FWIW, I'm not fond of not requiring this for the experimental stage in the first place. It is harder to get things moving after they have been upstreamed. I'd expect the upstreaming of a back-end (into the experimental state) to happen piece by piece, with proper code review according to the current project standard.

> A soft(-er?) requirement:
>
> 7. The test coverage is broad and well written (small tests, documented). Where applicable, both ``check-all`` and the ``test-suite`` must pass in at least one configuration (publicly demonstrated, e.g. via buildbots).

Same as before: this is a no-brainer that should apply to any patch, even in the experimental state: small and documented lit tests.

-- Mehdi
On Mon, Jul 25, 2016 at 5:47 PM Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> I'd expect the upstreaming of a back-end (into the experimental state) to happen piece by piece, with proper code review according to the current project standard.

I'm not sure it makes sense to insist on backends being incrementally developed in the open. While I rather prefer that (WebAssembly has been fun to watch here), it doesn't seem realistic. Every backend I can think of, other than WebAssembly and the original AArch64 port, came into existence out of tree and only moved into the tree after proving useful to some folks. I'd personally advocate for keeping that door open.

All that said, I completely agree regarding coding conventions and other basic stuff. I don't think we need to wait for the official point in time to insist on all of the fundamental coding standards and such being followed.

> Same as before: this is a no-brainer that should apply to any patch, even in the experimental state: small and documented lit tests.

Yep, this seems like it should always be true, regardless of the experimental nature.
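As a concrete picture of the "small and documented lit tests" both replies above call for: a typical back-end regression test is a few lines of IR plus a RUN line. A minimal sketch, assuming the experimental back-end is registered in the build; the CHECK lines are illustrative rather than Lanai's actual output:

    ; A minimal back-end lit test: compile one small IR function and
    ; verify the emitted assembly with FileCheck.
    ; RUN: llc -mtriple=lanai < %s | FileCheck %s

    ; CHECK-LABEL: add32:
    ; CHECK: add
    define i32 @add32(i32 %a, i32 %b) {
      %sum = add i32 %a, %b
      ret i32 %sum
    }

Tests in this shape run under ``check-all`` with no hardware or simulator, which is part of why they are reasonable to demand even at the experimental stage.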
On Tue, Jul 26, 2016 at 2:25 AM, Matt Arsenault via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> +1, some sort of defined process is definitely needed. In particular I think there should be a guideline for when the backend has been reviewed "enough" before it can be committed.

IMHO, this is a very important point missing from the discussion so far. What if a new back-end ticks all the right boxes but simply hasn't been reviewed by established members of the community? I expect this to be quite common, especially as LLVM broadens in popularity and reaches beyond its traditional developer / user base. We all have busy lives, and it's understandable that existing core developers / maintainers have neither the time nor the inclination to thoroughly review every new back-end. When is enough enough, and when is it OK to press the "commit" button?

Yours,
Andrey
On Tue, Jul 26, 2016 at 12:18:05AM +0100, Renato Golin via llvm-dev wrote:

> In recent discussions about accepting the Lanai back-end as official (built by default), there was the idea that we should have some official guidelines, if not rules, that all targets must (should?) follow when requesting to be added upstream and, later, to be officially supported.

There are two open questions for me here.

(1) The list says nothing about using (appropriate) LLVM infrastructure like the MC subsystem. Should it be a requirement for (new) targets to support the full source-to-object chain?

(2) What are the criteria for reviving targets? Consider the Alpha or Itanium targets.

Joerg
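For what "the full source-to-object chain" through MC means in practice: a target built on the MC layer can assemble and disassemble through the common llvm-mc tooling rather than a target-private assembler. A rough sketch using generic llvm-mc/llvm-objdump invocations; the lanai triple is just this thread's example:

    # Assemble directly to an object file through the MC layer,
    # then disassemble it back; both directions exercise the MC
    # subsystem's encoder and decoder for the target.
    llvm-mc -triple=lanai -filetype=obj add.s -o add.o
    llvm-objdump -d -triple=lanai add.o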
On 4 August 2016 at 17:31, Joerg Sonnenberger via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> (1) The list says nothing about using (appropriate) LLVM infrastructure like the MC subsystem. Should it be a requirement for (new) targets to support the full source-to-object chain?

Hi Joerg,

This is a clear task for code review, not for the target inclusion policy. This list is supposed to be timeless; naming any specific technology would need updating all the time and could even invite conflicting views (like MCJIT vs. ORCJIT vs. the new cool toy, or the old pass manager vs. the new one, or FastISel vs. SelectionDAG vs. GlobalISel, etc.).

> (2) What are the criteria for reviving targets? Consider the Alpha or Itanium targets.

When a target is deprecated, it probably means its code is so old and rotten that it doesn't work any more, or that whatever it was meant to be doing, it's not doing any more. With that in mind, anyone who wants to modernise the code will probably have to re-write most of it, and then it becomes a new back-end.

Also, the community behind a deprecated target has clearly failed some of the prerequisites, so asking them to prove themselves again would only be fair.

cheers,
--renato