On Tue, Feb 9, 2016 at 2:37 PM, Chris Lattner via llvm-dev <llvm-dev at lists.llvm.org> wrote:

>> On Feb 9, 2016, at 9:40 AM, Jacques Pienaar via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hi all,
>>
>> We would like to contribute a new backend for the Lanai processor (derived from the processor described in [1]).
>
> Hi Jacques,
>
> We generally have a low bar for accepting new “experimental” backends, but I think that this is the first proposal to add a target for hardware that general LLVM contributors can’t have access to. As such, we’ll have to figure out as a community whether this makes sense.
>
> Here are the tradeoffs I see of accepting the backend:
>
> 1) I imagine that there is a big win for you in not having to merge with mainline. Maintaining an out-of-tree backend is a pain :-)
>
> 2) For the community, this is probably a net loss, since changes to common codegen code would be required to update your backend, but no one else in the community would benefit from the target being in mainline.
>
> 3) There is probably a small but non-zero benefit to keeping your team working directly on mainline, since you’re more likely to do ancillary work in ToT. If your development is in mainline, this work is most likely to go into llvm.org instead of into your local branch.
>
> 4) There could be an educational benefit to having the backend, particularly if it has unique challenges to overcome.
>
> What do others think about this? I know that several organizations have not even bothered proposing internal-only targets for inclusion in llvm.org, since they would effectively be contributing dead code that the community would have to maintain.

One data point (IIRC) is that the NVPTX backend sat in tree for a long time without a way to actually use it. But lately this has been opening up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). However, the obstacle for NVPTX was mostly one of proprietary software (no real way to plug it into the driver stack except via NVIDIA's own proprietary software), whereas the actual hardware was available. For the Lanai stuff, it seems like the hardware is fundamentally not available for purchase.

The reverse situation is with e.g. Apple's GPU backends, where the devices are readily available, but (AFAIK) even if the backend were open source you couldn't run the code produced by the open-source compiler.

Or to put it in matrix form (this is all heavily prefixed by "AFAIK"; corrections welcome):

AMDGPU:      InTree:Yes  DevicesAvailable:Yes  CanIRunTheCode:Yes
NVPTX:       InTree:Yes  DevicesAvailable:Yes  CanIRunTheCode:Yes
Lanai:       InTree:?    DevicesAvailable:No   CanIRunTheCode:No
Apple GPUs:  InTree:No   DevicesAvailable:Yes  CanIRunTheCode:No

I couldn't come up with a good name for the "CanIRunTheCode" column. Basically it means: "assuming the backend were in open source, could I actually run the code produced by the open-source backend somehow?"

I had a quick look at lib/Target and it seems like every backend we have has "CanIRunTheCode:Yes", at least in theory. IIRC, the NVPTX stuff used to actually be "No", though?

Anyway, just a random thought. Not sure what the conclusion is.

-- Sean Silva

> -Chris
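[For concreteness, the "CanIRunTheCode:Yes" path for NVPTX that Sean mentions looks roughly like the axpy example in the linked CompileCudaWithLLVM doc: a plain CUDA source file that the open-source clang/NVPTX toolchain compiles into a binary that runs on off-the-shelf NVIDIA hardware, with no proprietary compiler in the loop. A minimal sketch follows; the --cuda-gpu-arch value and the CUDA library paths in the compile line are assumptions that depend on the local install.]

    // axpy.cu -- minimal CUDA program in the spirit of the linked doc's example.
    // Hypothetical compile line (arch and CUDA paths vary per machine):
    //   clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
    //     -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread
    #include <cstdio>

    __global__ void axpy(float a, float *x, float *y) {
      // One GPU thread per element: y = a * x.
      y[threadIdx.x] = a * x[threadIdx.x];
    }

    int main() {
      const int kN = 4;
      float host_x[kN] = {1.0f, 2.0f, 3.0f, 4.0f};
      float host_y[kN];

      // Copy input to the device, run the kernel, copy the result back.
      // Error checking omitted for brevity.
      float *device_x, *device_y;
      cudaMalloc(&device_x, kN * sizeof(float));
      cudaMalloc(&device_y, kN * sizeof(float));
      cudaMemcpy(device_x, host_x, kN * sizeof(float), cudaMemcpyHostToDevice);

      axpy<<<1, kN>>>(2.0f, device_x, device_y);  // single block of kN threads

      cudaMemcpy(host_y, device_y, kN * sizeof(float), cudaMemcpyDeviceToHost);
      for (int i = 0; i < kN; ++i)
        printf("%f\n", host_y[i]);

      cudaFree(device_x);
      cudaFree(device_y);
      return 0;
    }

[The relevant point for the thread is only that, for NVPTX, nothing proprietary now sits between the open-source backend and hardware anyone can buy.]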
On Tue, Feb 9, 2016 at 4:18 PM, Sean Silva <chisophugis at gmail.com> wrote:

> On Tue, Feb 9, 2016 at 2:37 PM, Chris Lattner via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> [...]
>
> One data point (IIRC) is that the NVPTX backend sat in tree for a long time without a way to actually use it. But lately this has been opening up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). However, the obstacle for NVPTX was mostly one of proprietary software (no real way to plug it into the driver stack except via NVIDIA's own proprietary software

To clarify: I mean that only the proprietary software could use the backend in a useful way, not that at some point in the driver stack proprietary software was needed.

-- Sean Silva

> ), whereas the actual hardware was available. For the Lanai stuff, it seems like the hardware is fundamentally not available for purchase.
>
> [...]
>
> Anyway, just a random thought. Not sure what the conclusion is.
Hi Sean

I think you’ve summed it up really well here.

Personally I don’t think we should accept backends for which there is no way to run the code. The burden (however small) on the community of having an in-tree backend they can’t use is too high IMO.

As you point out, ‘no way to run the code’ may mean not having access to HW, or having HW but no API.

NVPTX is a good example. Now you can take the output from LLVM and run it on HW. It may or may not be how Nvidia do it in their code, but that doesn’t matter: you can do it. Same for AMDGPU.

So -1 from me to having backends we can’t make use of.

Finally, one option is to have perpetually experimental backends. Then all the code is in tree, but no one in tree should ever be expected to update it. That does have the big advantage that all of the code is there to discuss, and the maintainers can make contributions to common code and gain/provide help in the community. They can also be involved in discussions which impact them, such as changes to common code.

Cheers,
Pete

> On Feb 9, 2016, at 4:18 PM, Sean Silva via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> One data point (IIRC) is that the NVPTX backend sat in tree for a long time without a way to actually use it. But lately this has been opening up (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html). [...]
>
> Anyway, just a random thought. Not sure what the conclusion is.
----- Original Message -----
> From: "Pete Cooper via llvm-dev" <llvm-dev at lists.llvm.org>
> To: "Sean Silva" <chisophugis at gmail.com>
> Cc: "llvm-dev" <llvm-dev at lists.llvm.org>
> Sent: Tuesday, February 9, 2016 10:59:58 PM
> Subject: Re: [llvm-dev] [RFC] Lanai backend
>
> Hi Sean
>
> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is no way to run the code. The burden (however small) on the community of having an in-tree backend they can’t use is too high IMO.
>
> As you point out, ‘no way to run the code’ may mean not having access to HW, or having HW but no API.

Out of curiosity, would the existence of some kind of open-source emulator affect your opinion on this? Or does it need to be actual hardware?

 -Hal

> NVPTX is a good example. Now you can take the output from LLVM and run it on HW. It may or may not be how Nvidia do it in their code, but that doesn’t matter: you can do it. Same for AMDGPU.
>
> So -1 from me to having backends we can’t make use of.
>
> [...]

-- 
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
I dunno. We consistently tell pretty much everyone we'd love for them to work upstream. We see orgs work on general optimizations, etc., out of tree, and tell them "you really should try to upstream it". I can't count the number of times people say this to others at the dev conference, no *matter what they are doing*.

It's one thing when the developers in question just want a backend, contribute nothing else, and want everyone else to deal with it and keep it running. I'd agree that provides burden and nothing else. That seems pretty unlikely here. It's not like the optimizations and general infrastructure we write and improve get used in a vacuum; they're meant to speed up code on various platforms.

IMHO, given what we try to encourage as best practices, it seems fundamentally wrong to deliberately tell others (who seem to want to work with the community, and are otherwise likely to be good contributors): "yeah, we really don't want you to contribute to LLVM upstream" or "yeah, well, we'll take your general contributions, but we don't want the rest of your stuff, no matter how well designed/architected". Attracting contributors is a two-way street that requires a show of good faith and understanding on both sides.

The one problem you identify, the burden of running code on backends for proprietary hardware, seems pretty tractable to solve. For example, one possible option: we don't let people revert patches for *runtime* failures in backends that nobody can run. (IE you update the API, try to do the right thing, and if it passes the regression tests, that's that. Even for codegen tests, if it's not obvious what the right code to generate is, that's the maintainer's problem.) So they get the benefit of the API updates people can perform, etc. We don't end up with the burden of trying to fix stuff at runtime without hardware that can be used, and we get the benefit of what those contributors are willing to contribute.

(At the very least, IMHO, it seems worth experimenting to see whether the burdens *do* outweigh the benefits over time. I don't think we *actually* have good data on this; we are all just kind of guessing based on personal experiences.)

> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is no way to run the code. The burden (however small) on the community of having an in-tree backend they can’t use is too high IMO.
>
> [...]
>
> So -1 from me to having backends we can’t make use of.
>
> [...]
On Tue, Feb 9, 2016 at 8:59 PM, Pete Cooper <peter_cooper at apple.com> wrote:

> Hi Sean
>
> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is no way to run the code. The burden (however small) on the community of having an in-tree backend they can’t use is too high IMO.
>
> As you point out, ‘no way to run the code’ may mean not having access to HW, or having HW but no API.
>
> NVPTX is a good example. Now you can take the output from LLVM and run it on HW. It may or may not be how Nvidia do it in their code, but that doesn’t matter: you can do it. Same for AMDGPU.

One thing to note is that all the legwork for getting CUDA working with the open-source toolchain (e.g. http://llvm.org/docs/CompileCudaWithLLVM.html) was done by Googlers (Jingyue Wu, Artem Belevich, and probably others I don't remember off the top of my head). In my experience with LLVM, the Google folks have been first-rate open-source citizens. Unless Jacques' team contributing here is from a drastically different organization/culture within Google (idk, are they?), I have full faith that this is meant with the best intentions and won't go the way of a "code drop".

> So -1 from me to having backends we can’t make use of.

I'm not sure I agree on that. Assume for a second that we are in a hypothetical world where the reasons for keeping the Apple GPU backends private vanished (but the actual driver stack was still locked down; i.e. CanIRunTheCode is still "No"). I would personally say that it would be beneficial for LLVM to have those backends developed upstream, if only so that we could have Owen's team participating upstream more, as their expertise is a huge asset to the community.

-- Sean Silva

> Finally, one option is to have perpetually experimental backends. [...]
On 02/09/2016 08:59 PM, Pete Cooper via llvm-dev wrote:
> Hi Sean
>
> I think you’ve summed it up really well here.
>
> Personally I don’t think we should accept backends for which there is no way to run the code. The burden (however small) on the community of having an in-tree backend they can’t use is too high IMO.
>
> As you point out, ‘no way to run the code’ may mean not having access to HW, or having HW but no API.
>
> NVPTX is a good example. Now you can take the output from LLVM and run it on HW. It may or may not be how Nvidia do it in their code, but that doesn’t matter: you can do it. Same for AMDGPU.
>
> So -1 from me to having backends we can’t make use of.

For the record, I strongly disagree with this position. I understand where you're coming from, but I see great value in having backends publicly available even for hardware we can't directly run. I do see the support concerns, and we need to address them, but the utter rejection of backends based on their non-runnable nature is something I strongly disagree with.

To lay out a couple of benefits that no one has mentioned so far:

1) This is a highly visible clue as to what Google is running internally (admittedly, we don't know for what). Given how secretive companies tend to be about such things, providing an incentive (upstreaming) to talk publicly about internal infrastructure is valuable. I could see that being very useful to academics evaluating hardware ideas, for instance.

2) Just because a backend generates code which isn't "officially" runnable doesn't mean there aren't people who'd be interested in using it. For instance, reverse engineering for security analysis, open-source software on otherwise closed hardware, and supporting legacy products after the manufacturer drops support are all realistic use cases.

3) By getting more people involved in the open-source project, we have an opportunity to further infect people inside companies with the desire to be good citizens in the community. :) That could be a very positive thing for all of us and the project as a whole in the long run.

> Finally, one option is to have perpetually experimental backends. Then all the code is in tree, but no one in tree should ever be expected to update it. That does have the big advantage that all of the code is there to discuss, and the maintainers can make contributions to common code and gain/provide help in the community. They can also be involved in discussions which impact them, such as changes to common code.
>
> Cheers,
> Pete
>
> [...]