There certainly is support; after all, AMD supports both OpenCL and HIP (a dialect of C++ very close to CUDA).

AMD device libraries (in bitcode form) are installed when ROCm ( https://rocm.github.io/ ) is installed.

AMD device libraries are mostly written in (OpenCL) C and are open source at https://github.com/RadeonOpenCompute/ROCm-Device-Libs . They are configured by linking in a number of tiny libraries that define global constants; these allow unwanted code, including branches, to be eliminated during post-bitcode-link optimization (a small illustrative sketch of this mechanism follows below).

Regards,
Brian

-----Original Message-----
From: Nicolai Hähnle <nhaehnle at gmail.com>
Sent: Wednesday, November 13, 2019 12:57 PM
To: Frank Winter <fwinter at jlab.org>; Sumner, Brian <Brian.Sumner at amd.com>
Cc: LLVM Dev <llvm-dev at lists.llvm.org>
Subject: Re: [llvm-dev] AMDGPU and math functions

[CAUTION: External Email]

Brian, this seems like a good question for you.

On Wed, Nov 13, 2019 at 9:51 PM Frank Winter via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Does anyone know whether there is yet support for math functions in
> AMD GPU kernels?
>
> In the NVIDIA world they provide the libdevice IR module, which can be
> linked to an existing module containing the kernel. In other words,
> they provide all math functions at the IR level. NVIDIA even claims that
> libdevice is actually device specific (compute capability).
>
> I was wondering how that is done on the AMD side of things.
>
> Thanks,
>
> Frank
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

--
Lerne, wie die Welt wirklich ist,
aber vergiss niemals, wie sie sein sollte.
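For illustration, a minimal sketch of the control-constant mechanism described above. The names here (__lib_finite_only, lib_exp, and the two helpers) are made up for the example and are not the actual ROCm-Device-Libs symbols; the real libraries are shipped as LLVM bitcode and linked into the kernel module before optimization runs.

    // Sketch of a device-library function, built as LLVM bitcode.
    #include <cmath>

    // Hypothetical control constant; a separate tiny "control" bitcode
    // library provides its definition, e.g.  const int __lib_finite_only = 1;
    extern const int __lib_finite_only;

    // Stand-ins for the two code paths a real math library would implement.
    static float exp_finite(float x) { return std::exp(x); } // may assume no Inf/NaN
    static float exp_full(float x)   { return std::exp(x); } // handles all inputs

    // Library entry point. After llvm-link merges kernel, library, and
    // control bitcode, constant propagation folds the branch and dead-code
    // elimination removes the unused path.
    float lib_exp(float x) {
      return __lib_finite_only ? exp_finite(x) : exp_full(x);
    }

The same idea scales to any number of switches (math mode, ISA version, and so on): each switch is just a tiny bitcode file, so a compiler driver selects behavior by choosing which control files to link.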
Frank Winter via llvm-dev
2019-Nov-13 21:27 UTC
[llvm-dev] [EXTERNAL] RE: AMDGPU and math functions
Thanks! So, support for the math functions seems to be there. That's good news.

This brings me to the second point that I need to figure out in order for our application to have a chance of running on AMD GPUs. The thing I am looking for (and could not find so far) is AMD's equivalent of NVIDIA's driver interface. I am speaking of -lcuda, as opposed to the runtime, -lcudart. Our application dynamically (!) loads a GPU ISA kernel and launches it. In the NVIDIA world there's a function called "cuModuleLoadData" that loads a kernel in PTX form into a CUmodule, from which a CUfunction can then be obtained and launched (a short sketch of this flow follows below).

From what I have seen so far on the AMD side, it looks like all compilers target the GPU ISA directly, namely HCC and the AMDGPU backend. That wouldn't be a problem as long as the generated kernels can be dynamically loaded afterwards. Is there some library for AMD, similar to the NVIDIA driver interface, that lets the user load an external kernel, say one that was compiled by the AMDGPU backend?

Thanks,
Frank

On 11/13/19 4:10 PM, Sumner, Brian wrote:
> There certainly is support; after all, AMD supports both OpenCL and HIP (a dialect of C++ very close to CUDA).
> [...]
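For illustration, a minimal sketch of the driver-API flow described above (link with -lcuda). The kernel name "my_kernel", its argument list, and the launch geometry are made up for the example, and error checking is omitted:

    #include <cuda.h>

    // Load a PTX image at run time and launch one of its kernels.
    void launch_from_ptx(const char* ptx, CUdeviceptr data, int n) {
      cuInit(0);
      CUdevice dev;   cuDeviceGet(&dev, 0);
      CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);

      CUmodule mod;   cuModuleLoadData(&mod, ptx);               // loads/JITs the PTX image
      CUfunction fn;  cuModuleGetFunction(&fn, mod, "my_kernel"); // hypothetical kernel name

      void* args[] = { &data, &n };        // hypothetical kernel signature (pointer, int)
      cuLaunchKernel(fn, 128, 1, 1,        // grid dimensions (blocks)
                         256, 1, 1,        // block dimensions (threads)
                         0, nullptr,       // shared memory, stream
                         args, nullptr);
      cuCtxSynchronize();
    }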
Sumner, Brian via llvm-dev
2019-Nov-13 21:36 UTC
[llvm-dev] [EXTERNAL] RE: AMDGPU and math functions
Hi Frank,

If you want to do CUDA-like programming or use CUDA-like facilities on AMD, then I'd recommend familiarizing yourself with HIP. The code, including tests and examples, and the documentation are at https://github.com/ROCm-Developer-Tools/HIP . There's also a porting guide and pointers to other documentation here: https://rocm-documentation.readthedocs.io/en/latest/Programming_Guides/HIP-porting-guide.html (a sketch of HIP's module API, the analogue of the driver interface you describe, follows below).

Regards,
Brian

-----Original Message-----
From: Frank Winter <fwinter at jlab.org>
Sent: Wednesday, November 13, 2019 1:28 PM
To: Sumner, Brian <Brian.Sumner at amd.com>; Nicolai Hähnle <nhaehnle at gmail.com>
Cc: LLVM Dev <llvm-dev at lists.llvm.org>
Subject: Re: [EXTERNAL] RE: [llvm-dev] AMDGPU and math functions

[CAUTION: External Email]

Thanks! So, support for the math functions seems to be there. That's good news.
[...]
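For illustration, a minimal sketch of the AMD-side equivalent using HIP's module API (hipModuleLoadData / hipModuleGetFunction / hipModuleLaunchKernel), which is modeled on the CUDA driver interface. The kernel name, argument layout, and launch geometry are made up, error checking is omitted, and the argument-packing details may differ between ROCm versions:

    #include <hip/hip_runtime.h>

    // Load a pre-compiled AMD GPU code object at run time and launch a kernel from it.
    void launch_from_code_object(const void* image, hipDeviceptr_t data, int n) {
      hipModule_t   mod;
      hipFunction_t fn;
      hipModuleLoadData(&mod, image);               // loads the code object image
      hipModuleGetFunction(&fn, mod, "my_kernel");  // hypothetical kernel name

      // Pack the kernel arguments the way the classic HIP module samples do.
      struct { hipDeviceptr_t data; int n; } args{ data, n };
      size_t size = sizeof(args);
      void* config[] = { HIP_LAUNCH_PARAM_BUFFER_POINTER, &args,
                         HIP_LAUNCH_PARAM_BUFFER_SIZE,    &size,
                         HIP_LAUNCH_PARAM_END };

      hipModuleLaunchKernel(fn, 128, 1, 1,   // grid dimensions (blocks)
                                256, 1, 1,   // block dimensions (threads)
                                0, nullptr,  // shared memory, stream
                                nullptr, config);
      hipDeviceSynchronize();
    }

hipModuleLoad can load a code object from a file on disk, which should cover the case of a kernel produced directly by the AMDGPU backend.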