Displaying 20 results from an estimated 3000 matches similar to: "instrumenting device code with gpucc"
2016 Mar 10
4
instrumenting device code with gpucc
It's hard to tell what is wrong without a concrete example. E.g., what is
the program you are instrumenting? What is the definition of the hook
function? How did you link that definition with the binary?
One thing suspicious to me is that you may have linked the definition of
_Cool_MemRead_Hook as a host function instead of a device function. AFAIK,
PTX assembly cannot be linked. So, if you
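A minimal sketch of what a device-side definition of the hook could look like (the name is from this thread, but the signature is hypothetical; it has to match whatever call the instrumentation pass inserts):

  // hook.cu -- sketch only; _Cool_MemRead_Hook must be a __device__
  // function so it can be linked with the kernel at the bitcode level.
  #include <cstdint>
  #include <cstdio>

  extern "C" __device__ void _Cool_MemRead_Hook(uint64_t addr) {
    // device-side printf works on sm_20 and newer
    printf("read at %llx\n", (unsigned long long)addr);
  }

Since PTX cannot be linked, the link has to happen on LLVM bitcode instead, e.g.:

  clang++ -x cuda --cuda-device-only -emit-llvm -c hook.cu -o hook.bc
  llvm-link instrumented.bc hook.bc -o linked.bc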
2016 Mar 15
2
instrumenting device code with gpucc
Hi Jingyue,
Sorry to ask again, but how exactly could I glue the fatbin with the
instrumented host code? Or does it mean we actually cannot instrument both
the host & device code at the same time?
Thanks!
yuanfeng
On Tue, Mar 15, 2016 at 10:09 AM, Jingyue Wu <jingyue at google.com> wrote:
> Including fatbin into host code should be done in frontend.
>
> On Mon, Mar 14, 2016
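One concrete way to hand an externally produced fatbin back to the host-side compile is clang's cc1 option -fcuda-include-gpubinary, which is what the driver itself uses to embed the device image (a sketch; file names are hypothetical):

  clang++ -x cuda --cuda-host-only instrumented_host.cu -c -o host.o \
    -Xclang -fcuda-include-gpubinary -Xclang kernel.fatbin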
2016 Mar 13
2
instrumenting device code with gpucc
Hey Jingyue,
Thanks for being so responsive! I finally figured out a way to resolve the
issue: all I have to do is to use `-only-needed` when merging the device
bitcodes with llvm-link.
However, since we actually need to instrument the host code as well, I
encountered another issue when I tried to glue the instrumented host code
and fatbin together. When I only instrumented the device code, I
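The `-only-needed` step mentioned above would look roughly like this (file names hypothetical):

  # pull in only the definitions the kernel module actually references,
  # which avoids duplicate-definition errors from a full merge
  llvm-link -only-needed kernel_instrumented.bc hooks.bc -o merged.bc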
2016 Mar 12
2
instrumenting device code with gpucc
Hey Jingyue,
Though I tried `opt -nvvm-reflect` on both bc files, the nvvm reflect
anchor didn't go away; ptxas is still complaining about the duplicate
definition of function '_ZL21__nvvm_reflect_anchorv'. Did I misuse
the nvvm-reflect pass?
Thanks!
yuanfeng
On Fri, Mar 11, 2016 at 10:10 AM, Jingyue Wu <jingyue at google.com> wrote:
> According to the examples you
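For what it's worth, -nvvm-reflect only folds calls to __nvvm_reflect; the leftover anchor function usually has to be removed by dead-code elimination afterwards. A sketch, assuming `my_kernel` (hypothetical name) is the only symbol that needs to stay external:

  opt -nvvm-reflect module.bc -o module.bc
  opt -internalize -internalize-public-api-list=my_kernel -globaldce \
      module.bc -o module_clean.bc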
2016 Aug 01
3
[GPUCC] link against libdevice
OK, I see the problem. You were right that we weren't picking up libdevice.
CUDA 7.0 only ships with the following libdevice binaries (found
/path/to/cuda/nvvm/libdevice):
libdevice.compute_20.10.bc libdevice.compute_30.10.bc
libdevice.compute_35.10.bc
If you ask for sm_50 with cuda 7.0, clang can't find a matching
libdevice binary, and it will apparently silently give up and try to
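Given that list, a workaround is to request an architecture CUDA 7.0 actually ships libdevice for, e.g. (install path assumed):

  clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 --cuda-path=/path/to/cuda-7.0

so clang picks up libdevice.compute_35.10.bc; sm_50 needs a newer CUDA toolkit.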
2016 Aug 01
0
[GPUCC] link against libdevice
Hi Justin,
Thanks for your response! The clang & llvm I'm using was built from
source.
Below is the output of compiling with -v. Any suggestions would be
appreciated!
*clang version 3.9.0 (trunk 270145) (llvm/trunk 270133)*
*Target: x86_64-unknown-linux-gnu*
*Thread model: posix*
*InstalledDir: /usr/local/bin*
*Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/4.8*
2016 Aug 01
2
[GPUCC] link against libdevice
Hi, Yuanfeng.
What version of clang are you using? CUDA is only known to work at
tip of head, so you must build clang yourself from source.
I suspect that's your problem, but if building from source doesn't fix
it, please attach the output of compiling with -v.
Regards,
-Justin
On Sun, Jul 31, 2016 at 9:24 PM, Chandler Carruth <chandlerc at google.com> wrote:
> Directly
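For reference, building a CUDA-capable clang from source roughly means enabling the NVPTX backend alongside the host target (a sketch; generator and target list are assumptions):

  cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_TARGETS_TO_BUILD="X86;NVPTX"
  ninja clang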
2019 Feb 26
2
Debug info for CUDA code
Hi Alexey,
Just want to make sure I understand what you said, since I'm not familiar with the llvm pipeline. Is it this line:
/net/gs/vol3/software/modules-sw/cuda/10.0/Linux/RHEL6/x86_64/bin/ptxas" -m64 -g --dont-merge-basicblocks --return-at-end -v --gpu-name sm_75 --output-file /tmp/60663577.1.login.q/testparticles-4fd988.o /tmp/60663577.1.login.q/testparticles-1d20c4.s
that
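(For context: --dont-merge-basicblocks and --return-at-end appear to be the switches clang adds to the ptxas invocation when device-side debug info is requested, so the line above is the device debug compile step.)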
2019 Feb 27
3
Debug info for CUDA code
Hi Alexey,
I submitted the bug report to nvidia. While they are working on it, can you share some insight into what could potentially cause this? I just want to get a sense of whether such a bug requires a significant amount of work to fix, which can help me make some decisions moving forward with my project.
Thanks,
Char
At 2019-02-27 03:19:02, "Alexey Bataev" <a.bataev at outlook.com>
2019 Mar 11
2
Debug info for CUDA code
Hi Alexey,
Is there any option for clang to turn on debug info for the host code only, but not the device code? I've been using something like -ggdb3 -O0, but this generates debug info for both host and device. I'm trying to work around the aforementioned ptxas bug.
Thanks,
Char
At 2019-02-28 02:09:54, "Alexey Bataev" <a.bataev at outlook.com> wrote:
Hi Char, it looks like
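One possible approach, assuming a clang new enough to support the offload-side -Xarch_device flag (that flag is an assumption, not confirmed in this thread): keep -g for the host compile and override it for the device compile:

  clang++ -ggdb3 -O0 -Xarch_device -g0 --cuda-gpu-arch=sm_75 test.cu -o test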
2020 Jan 15
2
Debug info for CUDA code
Hi Alexey,
Almost a year has passed and Nvidia has finally fixed the ptxas issue in CUDA 10.2, according to: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-compiler-resolved-issues However, I cannot yet use it with the llvm 9.0.0 release because CUDA 10.2 is not supported yet. Are there other branches of the llvm repo that support CUDA 10.2 now? Or do I need to wait for llvm 10
2019 Feb 26
1
Debug info for CUDA code
Hi Alexey,
Thanks for the great work! The version I checked out works most of the time. But I do encounter crashes sometimes. I can't file a bug report on https://bugs.llvm.org/ because I don't have an account. I already sent an email to bugs-admin at lists.llvm.org requesting an account, but I haven't heard back. Meanwhile, can you take a look at the issue? I'm attaching the bug report
2016 Apr 09
2
[GPUCC] how to remove _ZL21__nvvm_reflect_anchorv() automatically?
David's change makes nvvm_reflect_anchor unnecessary. The issue with dots
in names generated by llvm still needs to be fixed.
On Apr 9, 2016 8:32 AM, "Jingyue Wu" <jingyue at google.com> wrote:
> Artem,
>
> With David's http://reviews.llvm.org/rL265060, do you think
> __nvvm_reflect_anchor is still necessary?
>
> On Fri, Apr 8, 2016 at 9:37 AM, Yuanfeng
2016 Oct 27
3
problem on compiling cuda program with clang++
Hi all,
I compiled the *llvm3.9* source code on the *Nvidia TX1* board. And now I
am following the document in the docs/CompileCudaWithLLVM.rst to compile
cuda program with clang++.
However, when I compile `axpy.cu` using `nvcc`, *nvcc* can generate the
correct binary;
while compiling `axpy.cu` using clang++, the detailed command is `clang++
axpy.cu -o axpy --cuda-gpu-arch=sm_53
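For comparison, the fully spelled-out invocation from the document (as used elsewhere in these results, with typical install paths) is:

  clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_53 \
    -L/usr/local/cuda/lib64 -I/usr/local/cuda/include \
    -lcudart_static -ldl -lrt -pthread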
2018 Mar 23
2
cuda cross compiling issue for target aarch64-linux-androideabi
I was wondering if anyone has encountered this issue when cross compiling
cuda on Nvidia TX2 running android.
The error is
In file included from <built-in>:1:
In file included from
prebuilts/clang/host/linux-x86/clang-4667116/lib64/clang/7.0.1/include/__clang_cuda_runtime_wrapper.h:219:
../cuda/targets/aarch64-linux-androideabi/include/math_functions.hpp:3477:19:
error: no matching function
2016 Apr 08
2
[GPUCC] how to remove _ZL21__nvvm_reflect_anchorv() automatically?
Yeah, '.' is the direct reason for the ptxas failure here. I'm curious,
however, about what the purpose of nvvm_reflect_anchorv() is here, and why
the front-end always generates this function. Since the current PTX
emission doesn't mangle dots, it would be a reasonable workaround for me to
prevent the front-end from generating this function in the first place.
Is there any
2018 Mar 23
0
cuda cross compiling issue for target aarch64-linux-androideabi
+Artem Belevich <tra at google.com>
On Fri, Mar 23, 2018 at 7:53 PM Bharath Bhoopalam via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> I was wondering if anyone has encountered this issue when cross compiling
> cuda on Nvidia TX2 running android.
>
> The error is
> In file included from <built-in>:1:
> In file included from
>
2017 Aug 02
2
CUDA compilation "No available targets are compatible with this triple." problem
Yes, I followed the guide. The same error showed up:
>clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 -L/usr/local/cuda/lib64 -I/usr/local/cuda/include -lcudart_static -ldl -lrt -pthread
error: unable to create target: 'No available targets are compatible with this triple.'
________________________________
From: Kevin Choi <code.kchoi at gmail.com>
Sent: Wednesday, August 2,
2017 Aug 02
2
CUDA compilation "No available targets are compatible with this triple." problem
Hi,
I have trouble compiling CUDA code with Clang. The following is a command I tried:
> clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 --cuda-path=/usr/local/cuda
The error message is
error: unable to create target: 'No available targets are compatible with this triple.'
The info of the LLVM I'm using is as follows:
> clang++ --version
clang version 6.0.0
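That error usually means the clang in use was built without the NVPTX backend. A quick check (assuming the matching llvm tools are on PATH):

  llvm-config --targets-built   # should list NVPTX
  llc --version                 # "Registered Targets" should include nvptx64

If NVPTX is missing, LLVM needs to be rebuilt with NVPTX included in LLVM_TARGETS_TO_BUILD, as in the build sketch earlier in these results.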
2012 Jun 12
2
[LLVMdev] [NVPTX] For linkonce_odr NVPTX generates .weak, but even newest PTXAS can't handle it
Dear LLVM NVPTX maintainers,
Just to have the issue recorded, I don't know how important it is:
clang generates linkonce_odr out of __inline__, and NVPTX generates .weak
out of linkonce_odr (how that happens is a big question, btw, because I can't
find anything related in the NVPTX asm printer - does it chain to some other
printer?), and finally ptxas (both 4.2 and 5) fails to compile it to
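A minimal repro sketch of the pattern being described (assuming clang's CUDA mode):

  // __inline__ on a device function becomes linkonce_odr in LLVM IR,
  // which the NVPTX backend then emits as .weak in the PTX output
  __inline__ __device__ int twice(int x) { return x * 2; }

  __global__ void kernel(int *out) { *out = twice(21); }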