similar to: LLVM CUDA: Load/Store operands not captured

Displaying 20 results from an estimated 3000 matches similar to: "LLVM CUDA: Load/Store operands not captured"

2016 Dec 21
0
llvm/cuda: Identify kernel functions and optimizations
https://github.com/llvm-mirror/llvm/blob/652375a8cc49615de31fd9d424753795059185b6/lib/Target/NVPTX/NVPTXUtilities.h#L58 Does this solve your problem? On Wed, Dec 21, 2016 at 2:29 PM, Gurunath Kadam via llvm-dev < llvm-dev at lists.llvm.org> wrote: > Hi, > > I am trying to instrument CUDA kernel functions only (llvm-3.9.0). > > Is there a way to identify cuda kernel
2016 Dec 21
2
llvm/cuda: Identify kernel functions and optimizations
Hi, I am trying to instrument CUDA kernel functions only (llvm-3.9.0). Is there a way to identify CUDA kernel functions? I see that the LLVM IR for CUDA has an nvvm annotations section, where kernel functions are identified for NVPTX usage. I can parse the whole IR for this kernel metadata and then proceed, but this is very clumsy. The other way is to work with the cuda-device-only IR. But then I am not
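The two entries above are discussing exactly the metadata walk that the NVPTX backend itself performs. Below is a minimal sketch, not taken from the thread, of how a pass could test a function against the !nvvm.annotations named metadata using the LLVM 3.9-era C++ API; the helper name isCudaKernel is invented for illustration, and the annotation layout is assumed to be the usual { function, !"kernel", i32 1 } triples that clang emits for the NVPTX target. The NVPTXUtilities.h header linked in the reply implements a similar check inside the NVPTX target, but it is not part of the installed headers, which is why an out-of-tree pass typically re-implements it.

// Hypothetical helper, assuming the module came from clang's CUDA device-side
// compilation and therefore carries !nvvm.annotations metadata.
#include "llvm/IR/Function.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

using namespace llvm;

static bool isCudaKernel(const Function &F) {
  const Module *M = F.getParent();
  const NamedMDNode *Annotations = M->getNamedMetadata("nvvm.annotations");
  if (!Annotations)
    return false;                 // host module, or no NVVM metadata present
  for (const MDNode *Node : Annotations->operands()) {
    if (Node->getNumOperands() < 2)
      continue;
    // Operand 0 holds the annotated function, operand 1 the annotation kind.
    const auto *Fn = mdconst::dyn_extract_or_null<Function>(Node->getOperand(0));
    const auto *Kind = dyn_cast_or_null<MDString>(Node->getOperand(1));
    if (Fn == &F && Kind && Kind->getString() == "kernel")
      return true;                // annotated as a kernel entry point
  }
  return false;
}

A pass would call isCudaKernel(F) at the top of runOnFunction and skip everything else. The NVPTX backend also treats the PTX_Kernel calling convention as a kernel marker, but for clang-compiled CUDA the metadata walk above corresponds to what the question describes.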
2016 Dec 23
0
Assign different RegClasses to a virtual register based on 'uniform' attribute?
On 2016年12月22日 15:37, via llvm-dev wrote: > Send llvm-dev mailing list submissions to > llvm-dev at lists.llvm.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev > or, via email, send a message with subject or body 'help' to > llvm-dev-request at lists.llvm.org > > You can reach the
2016 Oct 14
2
LLVM/Clang: CUDA compilation fails for inline assembly code
Hi, I am sorry for sending this query again here, but maybe I sent it to the wrong list yesterday. I am trying to compile the LonestarGPU-rev2.0 <http://iss.ices.utexas.edu/?p=projects/galois/lonestargpu/download> benchmark suite with LLVM/Clang. This suite has the following piece of code (more info here
2016 Nov 28
2
LLVM Pass for Instructions in Function (error
> On Nov 27, 2016, at 6:40 PM, Gurunath Kadam via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Hi Sandeep, > > Thanks. > > One question about: > > Value* AddrPointer = Inst->getOperand(0); > > So this works for lvalues, i.e., in my case the pointer on the LHS of '='. I cannot find anything online about getOperand. > > For reference
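Since this thread (and the "Load/Store operands not captured" topic this listing is grouped under) revolves around pulling the address operand out of memory instructions, here is a minimal standalone sketch, not taken from the thread, using the legacy pass manager as was typical for llvm-3.9; the pass and option names (MemOperandPass, mem-operands) are invented. The detail it is meant to capture: for a LoadInst the pointer is operand 0, while for a StoreInst operand 0 is the stored value and operand 1 is the pointer, so getPointerOperand()/getValueOperand() are safer than raw getOperand(N).

// Sketch of a legacy FunctionPass that prints the address operand of every
// load and store; analysis only, the IR is not modified.
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct MemOperandPass : public FunctionPass {
  static char ID;
  MemOperandPass() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    for (Instruction &I : instructions(F)) {
      if (auto *LI = dyn_cast<LoadInst>(&I)) {
        Value *Addr = LI->getPointerOperand();   // same as getOperand(0)
        errs() << "load from: " << *Addr << "\n";
      } else if (auto *SI = dyn_cast<StoreInst>(&I)) {
        Value *Addr = SI->getPointerOperand();   // same as getOperand(1)
        Value *Val  = SI->getValueOperand();     // same as getOperand(0)
        errs() << "store of " << *Val << " to " << *Addr << "\n";
      }
    }
    return false;                                // IR unchanged
  }
};
} // namespace

char MemOperandPass::ID = 0;
static RegisterPass<MemOperandPass> X("mem-operands",
                                      "Print load/store address operands");

Built as a shared library, this would typically be run through opt with -load <library> -mem-operands on the bitcode in question.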
2016 Nov 28
2
LLVM Pass for Instructions in Function (error
Hi, -------- Original message -------- From: Gurunath Kadam via llvm-dev <llvm-dev at lists.llvm.org> Date: 11/27/2016 7:49 PM (GMT-06:00) To: llvm-dev at lists.llvm.org Subject: [llvm-dev] LLVM Pass for Instructions in Function (error Hi, Please find the embedded code. Also you may follow
2014 Jun 03
1
cuda-memcheck to debug CUDA-enabled R packages
I'm building a simple R extension around a CUDA-enabled dynamic library, and I want to run the whole package with cuda-memcheck for debugging purposes. I can run it just fine with Valgrind: $ R --no-save -d valgrind < test.R However, if I try the same thing with cuda-memcheck, $ R --no-save -d cuda-memcheck < test.R I get: *** Further command line arguments ('--no-save ')
2015 Apr 08
2
[LLVMdev] CUDA front-end (CUDA to LLVM IR)
On Wed, Apr 8, 2015 at 10:12 AM, Dmitry Mikushin <dmitry at kernelgen.org> wrote: > A tool of this kind here: https://github.com/apc-llc/nvcc-llvm-ir > > 2015-04-08 19:01 GMT+02:00 Ahmed ElTantawy <ahmede at ece.ubc.ca>: > >> Hi, >> >> I wanted to ask whether there is ongoing effort (or an already >> established tool) that enables to convert CUDA
2015 Apr 08
5
[LLVMdev] CUDA front-end (CUDA to LLVM IR)
Hi, I wanted to ask whether there is an ongoing effort (or an already established tool) that enables converting CUDA kernels (which use CUDA-specific intrinsics, e.g., threadIdx.x, __syncthreads(), ...) to LLVM IR. I am aware that I can do this for OpenCL with the help of libclc, but I cannot find something similar for CUDA. Thanks
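For concreteness, a tiny self-contained kernel of the kind the question refers to is sketched below; the kernel itself is invented for illustration and assumes a block size of at most 256 threads. Clang can consume such code directly: with something like clang++ --cuda-device-only --cuda-gpu-arch=sm_35 -emit-llvm -S (flags as used elsewhere in these threads; exact spelling may vary by clang version), the device-side LLVM IR should show the CUDA builtins lowered to NVVM intrinsics, e.g. threadIdx.x becoming a call to @llvm.nvvm.read.ptx.sreg.tid.x() and __syncthreads() becoming an NVVM barrier intrinsic.

// Invented example kernel: uses threadIdx.x, shared memory and __syncthreads(),
// i.e. the CUDA-specific constructs the question asks about lowering to LLVM IR.
// Assumes blockDim.x <= 256 so the shared tile is large enough.
__global__ void scale(float *data, float factor, int n) {
  __shared__ float tile[256];
  int i = blockIdx.x * blockDim.x + threadIdx.x; // special-register reads in the IR
  if (i < n)
    tile[threadIdx.x] = data[i] * factor;
  __syncthreads();                               // NVVM barrier intrinsic in the IR
  if (i < n)
    data[i] = tile[threadIdx.x];
}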
2017 Jun 14
4
[CUDA] Lost debug information when compiling CUDA code
Hi, I needed to debug some CUDA code in my project; however, although I used -g when compiling the source code, no source-level information is available in cuda-gdb or cuda-memcheck. Specifically, below is what I did: 1) For a CUDA file a.cu, generate IR files: clang++ -g -emit-llvm --cuda-gpu-arch=sm_35 -c a.cu; 2) Instrument the device code a-cuda-nvptx64-nvidia-cuda-sm_35.bc (generated
2017 Oct 06
0
CUDA tools?
On Thu, 2017-10-05 at 17:07 -0400, m.roth at 5-cent.us wrote: > vychytraly . wrote: > > On Thu, Oct 5, 2017 at 9:51 PM, <m.roth at 5-cent.us> wrote: > > > > > > So, kmod-nvidia installed. Trouble is, I have no tool to test it. And my > > > user might need nvcc, which, of course, is only provided by the NVidia > > > CUDA, which won't install,
2020 Nov 19
0
JIT compiling CUDA source code
I have made a bit of progress... When compiling CUDA source code in memory, the Compilation instance returned by Driver::BuildCompilation() contains two clang Commands: one for the host and one for the CUDA device. I can execute both commands using EmitLLVMOnlyActions. I add the Module from the host compilation to my JIT as usual, but... what to do with the Module from the device compilation? If I
2020 Jul 30
2
Status of CUDA 11 support
Hi, I work in a large CUDA codebase and use Clang to build some of our CUDA code to improve compilation speed. We're planning to upgrade to CUDA 11 soon, and it appears that CUDA 11 is not yet supported in LLVM. From the LLVM commit history, I can see that work on CUDA 11 has started. Is this currently being worked on? What is the remaining work left? And is any help needed to finish
2018 Nov 30
2
Debug info for CUDA code
Hi all, I found this http://lists.llvm.org/pipermail/llvm-dev/2017-November/118871.html when googling about compiling CUDA code using llvm. Is it still the case that one can't step into CUDA kernel code compiled by llvm in cuda-gdb? I'm using clang 7.0. Thanks, Char
2009 Feb 28
2
Xen and CUDA Virtualization
Hello, I am new to Xen and I have a class project to develop in a virtualization course. One of these projects is about nVidia CUDA virtualization. As far as I understand from the architecture of Xen, to virtualize CUDA on Xen I will have to prepare a back-end (for Dom 0) and a front-end (for Dom U) driver for CUDA. Is this right? Also, where can we get the source code of the latest nVidia
2016 Oct 27
0
problem on compiling cuda program with clang++
Hi, it looks like you're compiling CUDA for an ARM host? This is not a configuration we have tested, nor is it something we have the capability of testing at the moment. You may be able to make it work by providing the appropriate -isystem flags to clang so that it can find your headers, but who knows, it may be more complicated than that. Regards, -Justin On Wed, Oct 26, 2016 at 9:59 PM,
2017 Oct 05
2
CUDA tools?
Hi, again. So, kmod-nvidia installed. Trouble is, I have no tool to test it. And my user might need nvcc, which, of course, is only provided by the NVidia CUDA, which won't install, because it conflicts with kmod-nvidia. Has *anyone* dealt with this? If so, what was your solution? mark
2011 Aug 15
0
[LLVMdev] Cuda programs on LLVM
Hi Adarsh, to my knowledge there is no publicly available CUDA front end for LLVM yet. The work of Helge Rhodin you mentioned is on the back-end side: it allows generating PTX code from LLVM IR. It is still being maintained, although I think the currently available source code is a little outdated. There is also a PTX backend in the current version of LLVM that makes use of LLVM's
2012 Sep 13
1
[LLVMdev] Clang support for CUDA
Hi: Does Clang support CUDA? I am looking for a front end for my compiler that can accept the CUDA programming framework. Thanks, -- Abid "I have learned silence from the talkative, toleration from the intolerant, and kindness from the unkind" --- Gibran "Success is not for the chosen few, but for the few who choose" --- John
2017 Oct 05
4
CUDA tools?
vychytraly . wrote: > On Thu, Oct 5, 2017 at 9:51 PM, <m.roth at 5-cent.us> wrote: >> >> So, kmod-nvidia installed. Trouble is, I have no tool to test it. And my >> user might need nvcc, which, of course, is only provided by the NVidia >> CUDA, which won't install, because it conflicts with kmod-nvidia. >> >> Has *anyone* dealt with this? If so,