Cownie, James H via llvm-dev
2016-Mar-14 17:10 UTC
[llvm-dev] [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries
> I'd support some of James's comments if liboffload wasn't glued to OMP as it is now.

I certainly have no objection to moving liboffload elsewhere if that makes it more useful to people. There is no real "glue" holding it there; it simply ended up in the OpenMP directory structure because that was an easy place to put it, not because that is the optimal place for it. To some extent it has stayed there because no one has put in the effort to move it.

-- Jim

James Cownie <james.h.cownie at intel.com>
SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
Tel: +44 117 9071438

-----Original Message-----
From: C Bergström [mailto:cbergstrom at pathscale.com]
Sent: Monday, March 14, 2016 5:01 PM
To: Cownie, James H <james.h.cownie at intel.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>; cfe-dev <cfe-dev at lists.llvm.org>; openmp-dev at lists.llvm.org
Subject: Re: [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries

I'd support some of James's comments if liboffload wasn't glued to OMP as it is now. My attempts to decouple it into something with better design layering, outside of the OMP source repo, have failed. For it to be advocated as "the" offload lib, it needs a home (imnsho) outside of OMP -- somewhere that others can easily play with it and not pay the OMP tax. It may tick some of the boxes that have been mentioned, but I'm curious how well it does when put under real workloads.

On Tue, Mar 15, 2016 at 12:53 AM, Cownie, James H via cfe-dev <cfe-dev at lists.llvm.org> wrote:
> Jason,
>
> It's great that Google is interested in contributing to the development of LLVM in this area, and that you have code to support offload.
>
> However, I'm not sure that all of it is needed, since LLVM already has the offload library, which has been developed in the context of OpenMP but actually provides a general facility. It has been part of LLVM since April 2014 and is already being used to offload to both Intel Xeon Phi and (at least NVidia) GPUs. (The IBM folks can tell you more about that!)
>
> The main difference I see (at a very first glance!) is that your StreamExecutor interfaces seem to be aimed more at end-user code, whereas the interface to the existing offload library has been designed not for the user but as an interface from the compiler. That has advantages and disadvantages.
>
> Advantages:
> - It is a C-level interface, so it is callable from C, C++, and Fortran.
>
> Disadvantages:
> - Using it directly from C++ user code may be harder than using StreamExecutor.
>
> However, there is nothing in the interface that prevents it from being used with CUDA or OpenCL, and it already seems to support the low-level features you cited as StreamExecutor's advantages, though not the "looks just like CUDA" aspects, since it's explicitly vendor neutral.
>
>> StreamExecutor:
>> * abstracts the underlying accelerator platform (avoids locking you into a single vendor, and lets you write code without thinking about which platform you'll be running on).
>
> Liboffload does this (and has a specific design for how to abstract new devices and support them using device-specific libraries).
>
>> * provides an open-source alternative to the CUDA runtime library.
>
> I am not a CUDA expert, so I can't comment on this! As before, IBM should comment.
>
>> * gives users a stream management model whose terminology matches that of the CUDA programming model.
>
> This is not abstract, but seems CUDA target specific, which is, if anything, worrying for a supposedly vendor-neutral interface!
>
>> * makes use of modern C++ to create a safe, efficient, easy-to-use programming interface.
>
> No, because liboffload is an implementation layer, not intended to be user-visible.
>
>> StreamExecutor makes it easy to:
>> * move data between host and accelerator (and also between peer accelerators).
>
> Liboffload supports this.
>
>> * execute data-parallel kernels written in the OpenCL or CUDA kernel languages.
>
> I believe this should be easy; IBM can comment better, since they have been working on GPU support.
>
>> * inspect the capabilities of a GPU-like device at runtime.
>> * manage multiple devices.
>
> Liboffload supports this.
>
> We'd therefore be very interested in seeing an approach that implemented a C++-specific, user-friendly interface on top of the existing liboffload functionality, but we don't see a reason to rework the OpenMP implementation to use StreamExecutor (since what LLVM already has is working fine, and supports offload to both GPUs and Xeon Phi).
>
> -- Jim
>
> James Cownie <james.h.cownie at intel.com>
> SSG/DPD/TCAR (Technical Computing, Analyzers and Runtimes)
> Tel: +44 117 9071438
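To make the advantage/disadvantage trade-off above concrete, here is a minimal hypothetical sketch contrasting the two interface styles. Every name in it (offload_alloc, offload_memcpy_h2d, offload_launch, Device) is invented for illustration -- this is not the real liboffload or StreamExecutor API -- and the C entry points are stubbed out on the host so the example compiles and runs as plain C++.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <string>
    #include <vector>

    // A hypothetical flat C-level offload interface, in the style Jim
    // describes: vendor-neutral and callable from C, C++, and Fortran.
    // The names are invented; the bodies are host-side stubs so the
    // sketch actually runs.
    extern "C" {
    int offload_alloc(int /*device*/, size_t bytes, void **dev_ptr) {
      *dev_ptr = std::malloc(bytes);
      return *dev_ptr == nullptr;  // 0 on success
    }
    int offload_memcpy_h2d(int /*device*/, void *dev, const void *host, size_t n) {
      std::memcpy(dev, host, n);  // a real library would DMA to the device
      return 0;
    }
    int offload_launch(int device, const char *kernel, void ** /*args*/, int nargs) {
      std::printf("launching '%s' on device %d with %d arg(s)\n", kernel, device, nargs);
      return 0;
    }
    }  // extern "C"

    // A hypothetical C++ wrapper in the StreamExecutor spirit: the same
    // functionality, but with type safety and less pointer bookkeeping.
    class Device {
     public:
      explicit Device(int id) : id_(id) {}

      template <typename T>
      T *Alloc(size_t count) {
        void *p = nullptr;
        if (offload_alloc(id_, count * sizeof(T), &p) != 0) return nullptr;
        return static_cast<T *>(p);
      }

      template <typename T>
      void CopyToDevice(T *dev, const std::vector<T> &host) {
        offload_memcpy_h2d(id_, dev, host.data(), host.size() * sizeof(T));
      }

      void Launch(const std::string &kernel, std::vector<void *> args) {
        offload_launch(id_, kernel.c_str(), args.data(), static_cast<int>(args.size()));
      }

     private:
      int id_;
    };

    int main() {
      Device dev(0);
      std::vector<float> host(1024, 1.0f);
      float *d = dev.Alloc<float>(host.size());
      dev.CopyToDevice(d, host);
      dev.Launch("scale_kernel", {d});
      std::free(d);  // fine here only because the stub allocator is malloc
    }

The contrast is the one Jim draws: the flat C entry points are reachable from compiler-generated C or Fortran code, while the wrapper is only pleasant from C++.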
C Bergström via llvm-dev
2016-Mar-14 17:14 UTC
[llvm-dev] [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries
/* ignorable rant */

I've publicly advocated that it shouldn't have been there in the 1st place. I have been quite vocal that the work wasn't for everyone else to pay, but should have been part of the initial design. (Basically, getting it right the 1st time, instead of forcing someone else to wade through a bunch of cmake.)

On Tue, Mar 15, 2016 at 1:10 AM, Cownie, James H <james.h.cownie at intel.com> wrote:
>> I'd support some of James's comments if liboffload wasn't glued to OMP as it is now.
>
> I certainly have no objection to moving liboffload elsewhere if that makes it more useful to people. There is no real "glue" holding it there; it simply ended up in the OpenMP directory structure because that was an easy place to put it, not because that's the optimal place for it.
>
> To some extent it has stayed there because no one has put in the effort to move it.
>
> -- Jim
>
> [snip]
Jason Henline via llvm-dev
2016-Mar-14 17:50 UTC
[llvm-dev] [Openmp-dev] [cfe-dev] RFC: Proposing an LLVM subproject for parallelism runtime and support libraries
I think it would be great if StreamExecutor could use liboffload to perform its offloading under the hood. Right now offloading is handled in StreamExecutor using platform plugins, so I think it could be very natural for us to write a plugin which basically forwards to liboffload. If that worked out, we could delete our current plugins and depend only on those based on liboffload, and then all the offloading code would be unified.

Then, just as James said, StreamExecutor would provide a nice C++ interface on top of liboffload, and liboffload could continue to support OpenMP directly. In this plan, I think it would make sense to move liboffload to the new project being proposed by this RFC, and hopefully that would also make liboffload more usable as a stand-alone project.

Before moving forward with any of these plans, I think it is right to wait to hear what IBM thinks.

On Mon, Mar 14, 2016 at 10:14 AM C Bergström <openmp-dev at lists.llvm.org> wrote:
> /* ignorable rant */
> I've publicly advocated that it shouldn't have been there in the 1st place. I have been quite vocal that the work wasn't for everyone else to pay, but should have been part of the initial design.
>
> [snip]
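Jason's plugin-forwarding idea can be pictured roughly as follows. This is a hedged sketch, not StreamExecutor's actual plugin interface: PlatformInterface and LiboffloadPlatform are invented names, and the offload_* declarations are the same illustrative stand-ins stubbed out in the earlier sketch (link against those stubs to run this).

    #include <cstddef>

    // Illustrative stand-ins for liboffload's C entry points; see the
    // stubbed definitions in the earlier sketch. Not the real liboffload API.
    extern "C" {
    int offload_alloc(int device, size_t bytes, void **dev_ptr);
    int offload_launch(int device, const char *kernel, void **args, int nargs);
    }

    // Invented stand-in for the abstract interface a StreamExecutor platform
    // plugin would implement (a real one would also cover streams, events,
    // device queries, and more).
    class PlatformInterface {
     public:
      virtual ~PlatformInterface() = default;
      virtual void *Allocate(size_t bytes) = 0;
      virtual bool Launch(const char *kernel, void **args, int nargs) = 0;
    };

    // The plugin Jason proposes: it satisfies the platform interface purely
    // by forwarding to liboffload, instead of talking to CUDA or OpenCL
    // directly. If this worked out, the vendor-specific plugins could be
    // deleted and all offloading would funnel through one shared library.
    class LiboffloadPlatform : public PlatformInterface {
     public:
      explicit LiboffloadPlatform(int device) : device_(device) {}

      void *Allocate(size_t bytes) override {
        void *p = nullptr;
        return offload_alloc(device_, bytes, &p) == 0 ? p : nullptr;
      }

      bool Launch(const char *kernel, void **args, int nargs) override {
        return offload_launch(device_, kernel, args, nargs) == 0;
      }

     private:
      int device_;
    };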