similar to: [LLVMdev] translating from OpenMP to CUDA

Displaying 20 results from an estimated 4000 matches similar to: "[LLVMdev] translating from OpenMP to CUDA"

2012 Nov 09
0
[LLVMdev] translating from OpenMP to CUDA
The PTX back-end is robust (it's based on the sources used by nvcc), but I'm not sure about the OpenMP representation in LLVM IR. I believe the OpenMP constructs are already lowered into libgomp calls before leaving DragonEgg. It's been a while since I've looked at it, though. If you use the PTX back-end and have any issues, please don't hesitate to post to the list and cc:
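For context, a minimal sketch of the lowering being described, assuming GCC/DragonEgg-style libgomp lowering (the outlined function and call sequence are illustrative, written by hand rather than actual compiler output):

    // Sketch of what "#pragma omp parallel" becomes after libgomp lowering.
    // Build with: g++ lower.cpp -lgomp   (assumes libgomp is installed)
    #include <cstdio>

    extern "C" void GOMP_parallel_start(void (*)(void *), void *, unsigned);
    extern "C" void GOMP_parallel_end(void);

    // The compiler outlines the parallel-region body into a function:
    static void outlined_body(void *data) {
      (void)data;
      std::printf("hello from a team thread\n");
    }

    int main() {
      GOMP_parallel_start(outlined_body, nullptr, 0); // 0 = runtime picks thread count
      outlined_body(nullptr);                         // the master thread runs the body too
      GOMP_parallel_end();
      return 0;
    }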
2012 Sep 07
2
[LLVMdev] counting branch frequencies
Hi, Is there a way to count branch frequencies using LLVM infrastructure? Thanks. -Apala Postdoctoral Scholar Department of Computer Science, University of Chicago Computation Institute, Argonne National Laboratory http://sites.google.com/site/apalaguha/home/
2015 Aug 21
2
[CUDA/NVPTX] is inlining __syncthreads allowed?
I'm using 7.0. I am attaching the reduced example. nvcc sync.cu -arch=sm_35 -ptx gives:

    // .globl _Z3foov
    .visible .entry _Z3foov()
    {
      .reg .pred %p<2>;
      .reg .s32 %r<3>;
      mov.u32 %r1, %tid.x;
      and.b32 %r2, %r1, 1;
      setp.eq.b32 %p1, %r2, 1;
      @!%p1 bra BB7_2;
      bra.uni
2011 Aug 15
2
[LLVMdev] Cuda programs on LLVM
Hello, how can I execute a CUDA program using LLVM? More specifically, nvcc produces some temporary files during its compilation. I want to convert the .cu.cpp file to .ll format and optimize it. The .cu.cpp file contains typedefs and enums used by the CUDA runtime as well as the host part of the code, and the ptx file contains the kernel definition. How can I run the program after optimization? Will Rhodin
2011 Aug 15
0
[LLVMdev] Cuda programs on LLVM
Hi Adarsh, to my knowledge there is no publicly available CUDA front-end for LLVM yet. The work of Helge Rhodin you mentioned is on the back-end side: it allows generating PTX code from LLVM IR. It is still being maintained, although I think the currently available source code is a little outdated. There is also a PTX backend in the current version of LLVM that makes use of LLVM's
2015 Aug 21
3
[CUDA/NVPTX] is inlining __syncthreads allowed?
Hi Justin, Is a compiler allowed to inline a function that calls __syncthreads()? I saw that nvcc does this, but I'm not sure it's valid. For example:

    void foo() { __syncthreads(); }

    if (threadIdx.x % 2 == 0) {
      ...
      foo();
    } else {
      ...
      foo();
    }

Before inlining, all threads meet at one __syncthreads(). After inlining:

    if (threadIdx.x % 2 == 0) {
      ...
      __syncthreads();
    } else {
      ...
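To make the hazard concrete, here is a self-contained version of the example above (a sketch; the kernel name and array accesses are invented for illustration):

    // Before inlining, every thread reaches the single __syncthreads()
    // inside foo(), so the barrier is uniform. After inlining there are
    // two __syncthreads() sites under divergent control flow; this is only
    // safe if all threads still execute a barrier the same number of times.
    __device__ void foo() { __syncthreads(); }

    __global__ void kernel(int *data) {
      if (threadIdx.x % 2 == 0) {
        data[threadIdx.x] += 1;
        foo();
      } else {
        data[threadIdx.x] -= 1;
        foo();
      }
    }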
2012 Oct 04
2
[LLVMdev] library functions
Hi, Is there any way to analyze library functions using LLVM, in the same manner as source code functions? Thanks. -Apala
2016 Oct 14
2
LLVM/CLANG: CUDA compilation fail for inline assembly code
Hi, I am sorry for sending this query here again, but maybe I sent it to the wrong list yesterday. I am trying to compile the LonestarGPU-rev2.0 <http://iss.ices.utexas.edu/?p=projects/galois/lonestargpu/download> benchmark suite with LLVM/Clang. This suite has the following piece of code (more info here
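The failing construct is not quoted in full in this snippet; as a rough illustration of the kind of inline PTX such suites contain (this particular function is hypothetical, not taken from LonestarGPU):

    // Inline PTX reading the %laneid special register. Clang's CUDA path
    // has to understand both the asm string and the "=r" constraint for
    // this to compile, which is where front-end differences surface.
    __device__ unsigned int lane_id() {
      unsigned int id;
      asm volatile("mov.u32 %0, %%laneid;" : "=r"(id));
      return id;
    }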
2012 Oct 05
0
[LLVMdev] library functions
Hi, I doubt LLVM has the infrastructure in place to do so. One way to accomplish it is to implement a decompiler that converts library functions to LLVM IR, run the LLVM analysis passes over the converted IR, and then convert back from LLVM IR to your library format. Thanks ~Umesh On Thu, Oct 4, 2012 at 9:47 PM, apala guha <aguha at uchicago.edu> wrote: > Hi, > > Is there any
2016 Jun 02
3
PTX generation from CUDA file for compute capability 1.0 (sm_10)
Hello Bergström/Eric, Thanks for the reply. The G80 (sm_10) architecture was ported to an FPGA by a group of researchers (http://www.ecs.umass.edu/ece/tessier/andryc-fpt13.pdf). Our group has some further research interest in this work. I have been working on modifying Clang/LLVM for a couple of months and achieved the required changes. But Clang/LLVM only allows me to generate PTX for sm_20,
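For reference, a generic sketch of how PTX is emitted from LLVM IR with the in-tree NVPTX backend (file names are placeholders); the backend's target list starts at sm_20, which is exactly the limitation described:

    # Emit PTX from LLVM IR; -mcpu accepts sm_20 and newer, not sm_10.
    llc -march=nvptx64 -mcpu=sm_20 kernel.ll -o kernel.ptx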
2012 Apr 27
2
[LLVMdev] [llvm-commits] [PATCH][RFC] NVPTX Backend
Thanks for the feedback! The attached patch addresses the style issues that have been found. From: Jim Grosbach [mailto:grosbach at apple.com] Sent: Wednesday, April 25, 2012 2:22 PM To: Justin Holewinski Cc: llvm-commits at cs.uiuc.edu; llvmdev at cs.uiuc.edu; Vinod Grover Subject: Re: [llvm-commits] [PATCH][RFC] NVPTX Backend Hi Justin, Cool stuff, to be sure. Excited to see this. As a
2016 Oct 27
3
problem on compiling cuda program with clang++
Hi all, I compiled the *llvm 3.9* source code on the *Nvidia TX1* board, and now I am following the document in docs/CompileCudaWithLLVM.rst to compile a CUDA program with clang++. However, when I compile `axpy.cu` using `nvcc`, *nvcc* generates the correct binary, while compiling `axpy.cu` using clang++ (the detailed command is `clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_53
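For comparison, the CompileCudaWithLLVM.rst document describes an invocation along these lines (the CUDA installation path is a placeholder to fill in):

    clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_53 \
        -L<CUDA install path>/lib64 \
        -lcudart_static -ldl -lrt -pthread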
2018 Dec 14
8
Debug info for CUDA code
Are you planning to release this as soon as it's ready, or do you want to make it into a major release? Is it possible to let me know (maybe by replying to this thread) once the code is ready? I know it sometimes takes a while to get things into a major release. I greatly appreciate your work on this! Thanks, Char On 2018-12-15 05:19:50, "Alexey Bataev" <a.bataev at outlook.com>
2013 Jul 18
2
question about Makeconf and nvcc/CUDA
Dear R development: I'm not sure if this is the appropriate list, but it's a start. I would like to put together a package which contains a CUDA program on Windows 7. I believe it has to do with the Makeconf file in the etc directory. But when I just use nvcc with the shared option, the dyn.load command works, yet the is.loaded function returns FALSE.
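One common cause of is.loaded() returning FALSE, offered here as an assumption about this report rather than a confirmed diagnosis: nvcc compiles as C++ and mangles symbol names, so entry points that R should find need C linkage:

    // Hypothetical wrapper: extern "C" keeps the symbol unmangled so that,
    // after dyn.load("mylib.dll"), is.loaded("vec_add_host") can find it.
    extern "C" void vec_add_host(double *x, double *y, int *n);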
2015 Apr 08
5
[LLVMdev] CUDA front-end (CUDA to LLVM IR)
Hi, I wanted to ask whether there is an ongoing effort (or an already established tool) that enables converting CUDA kernels (that use CUDA-specific intrinsics, e.g., threadIdx.x, __syncthreads(), ...) to LLVM IR. I am aware that I can do this for OpenCL with the help of libclc, but I cannot find something similar for CUDA. Thanks
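As a point of reference beyond this thread (assuming a sufficiently recent clang, since native CUDA support landed later): clang can emit LLVM IR for the device side of a .cu file directly:

    # Emit device-side LLVM IR from a CUDA source file.
    clang++ -x cuda --cuda-device-only --cuda-gpu-arch=sm_35 \
        -S -emit-llvm kernel.cu -o kernel.ll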
2011 Dec 07
2
[LLVMdev] Build PTX samples with LLVM/Clang/libclc
Hi Justin, I downloaded llvm-ptx-samples [1] and tried to build them. There seems to be no complete document on how to build them with LLVM/Clang/libclc. Do you think it's a good idea to put a complete document/tutorial in _one_ place? Currently, the information is spread across your website [2], the LLVM [3], Clang, and libclc websites [5]. I feel people might get lost among those websites. ;-) Here
2015 Apr 08
2
[LLVMdev] CUDA front-end (CUDA to LLVM IR)
On Wed, Apr 8, 2015 at 10:12 AM, Dmitry Mikushin <dmitry at kernelgen.org> wrote:
> A tool of this kind here: https://github.com/apc-llc/nvcc-llvm-ir
>
> 2015-04-08 19:01 GMT+02:00 Ahmed ElTantawy <ahmede at ece.ubc.ca>:
>
>> Hi,
>>
>> I wanted to ask whether there is ongoing effort (or an already
>> established tool) that enables to convert CUDA
2017 Sep 27
2
OrcJIT + CUDA Prototype for Cling
Dear LLVM Developers and Vinod Grover, we are trying to extend the cling C++ interpreter (https://github.com/root-project/cling) with CUDA functionality for Nvidia GPUs. I have already developed a prototype based on OrcJIT and am seeking feedback. I am currently stuck on a runtime issue, where my interpreter prototype fails to execute kernels with a CUDA runtime error. === How to use the
2019 Jan 23
2
Debug info for CUDA code
Hi Char, I found the problem; for some reason the last patch wasn't applied correctly. Just committed the fixed version. Tried to compile axpy.cu, everything works. ------------- Best regards, Alexey Bataev On 23.01.2019 13:37, treinz wrote: > Hi Alexey, > > I tried the b7195a6 from the llvm github mirror, which does include > your commit D46189 <https://reviews.llvm.org/D46189> (see
2018 Jun 21
2
NVPTX - Reordering load instructions
Hi all, I'm looking into the performance difference of a benchmark compiled with NVCC vs. NVPTX (coming from Julia, not CUDA C), and I'm seeing a significant difference due to PTX instruction ordering. The relevant source code consists of two nested loops that get fully unrolled, doing some basic arithmetic with values loaded from shared memory:

> #define BLOCK_SIZE 16
>
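The benchmark itself is truncated in this snippet; the following is a minimal sketch of the described pattern (shapes and names assumed, not the original Julia code), where the ordering of the shared-memory loads in the unrolled body is what the thread is about:

    // Fully unrolled nested loops doing arithmetic on shared-memory values.
    // How the PTX backend orders these loads (batched up front vs.
    // interleaved with the arithmetic) is the performance question raised.
    #define BLOCK_SIZE 16

    __global__ void tile_sum(const float *in, float *out) {
      __shared__ float tile[BLOCK_SIZE][BLOCK_SIZE];
      tile[threadIdx.y][threadIdx.x] = in[threadIdx.y * BLOCK_SIZE + threadIdx.x];
      __syncthreads();

      float acc = 0.0f;
    #pragma unroll
      for (int i = 0; i < BLOCK_SIZE; ++i)
    #pragma unroll
        for (int j = 0; j < BLOCK_SIZE; ++j)
          acc += tile[i][j] * 0.5f;

      out[threadIdx.y * BLOCK_SIZE + threadIdx.x] = acc;
    }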