Displaying 20 results from an estimated 400 matches similar to: "OrcJIT + CUDA Prototype for Cling"
2017 Nov 14
1
OrcJIT + CUDA Prototype for Cling
Hi Lang,
thank you very much. I've used your code, and creating the object file now
works. I think the problem occurs after the object file is created: when
I link the object file with ld, I get an executable that works correctly.
After switching the clang and llvm libraries from the packaged version
(.deb) to my own build with debug options enabled, I get an
assert() failure.
In
void
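For readers following this thread, a minimal sketch of emitting an object file from an llvm::Module with a TargetMachine, which can then be linked with ld as described above; this is not the prototype's actual code, the output name is made up, and the exact addPassesToEmitFile spelling varies across LLVM versions:

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"

// Emit M as a relocatable object file that can later be linked with ld.
bool emitObjectFile(llvm::Module &M, llvm::TargetMachine &TM) {
  std::error_code EC;
  llvm::raw_fd_ostream Dest("out.o", EC, llvm::sys::fs::OF_None);
  if (EC)
    return false;
  llvm::legacy::PassManager PM;
  // Newer LLVM spells this llvm::CodeGenFileType::ObjectFile.
  if (TM.addPassesToEmitFile(PM, Dest, /*DwoOut=*/nullptr,
                             llvm::CGFT_ObjectFile))
    return false;          // target cannot emit this file type
  PM.run(M);
  Dest.flush();
  return true;
}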
2017 Jun 14
4
[CUDA] Lost debug information when compiling CUDA code
Hi,
I needed to debug some CUDA code in my project; however, although I used -g when compiling the source code, no source-level information is available in cuda-gdb or cuda-memcheck.
Specifically, below is what I did:
1) For a CUDA file a.cu, generate IR files: clang++ -g -emit-llvm --cuda-gpu-arch=sm_35 -c a.cu;
2) Instrument the device code a-cuda-nvptx64-nvidia-cuda-sm_35.bc (generated
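A minimal a.cu matching step 1 above, with the clang invocation from the post repeated as a comment; the kernel name and body are made up for illustration:

// a.cu -- compiled to IR as in step 1 above:
//   clang++ -g -emit-llvm --cuda-gpu-arch=sm_35 -c a.cu
// This produces a host .bc plus a-cuda-nvptx64-nvidia-cuda-sm_35.bc
// for the device side.
#include <cstdio>

__global__ void axpy(float a, const float *x, float *y, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
    y[i] = a * x[i] + y[i];        // -g should give this line info
}

int main() {
  std::printf("host side omitted for brevity\n");
  return 0;
}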
2016 Mar 05
2
instrumenting device code with gpucc
On Fri, Mar 4, 2016 at 5:50 PM, Yuanfeng Peng <yuanfeng.jack.peng at gmail.com>
wrote:
> Hi Jingyue,
>
> My name is Yuanfeng Peng, I'm a PhD student at UPenn. I'm sorry to bother
> you, but I'm having trouble with gpucc in my project, and I would be really
> grateful for your help!
>
> Currently we're trying to instrument CUDA code using LLVM 3.9, and
2017 Jun 09
1
NVPTX Back-end: relocatable device code support for dynamic parallelism
Hi everyone,
CUDA allows some runtime functions to be called from device code as well. On
a multi-GPU system this lets the GPU determine its own device id via
cudaGetDevice().
Unfortunately I cannot get it working when compiling with clang. When
compiling with nvcc, relocatable device code needs to be enabled
(-rdc=true) and cudadevrt is needed when linking [0]. I did not
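A minimal sketch of what the post describes for nvcc: a kernel that queries its own device id via the device runtime, built with relocatable device code and linked against cudadevrt; the file and kernel names are made up:

// query.cu -- build with the device runtime, as described above:
//   nvcc -rdc=true query.cu -lcudadevrt -o query
#include <cstdio>
#include <cuda_runtime.h>

__global__ void whoAmI() {
  int dev = -1;
  cudaGetDevice(&dev);                 // device-side runtime call
  printf("kernel running on device %d\n", dev);
}

int main() {
  whoAmI<<<1, 1>>>();
  cudaDeviceSynchronize();
  return 0;
}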
2016 Mar 15
2
instrumenting device code with gpucc
Hi Jingyue,
Sorry to ask again, but how exactly could I glue the fatbin with the
instrumented host code? Or does it mean we actually cannot instrument both
the host & device code at the same time?
Thanks!
yuanfeng
On Tue, Mar 15, 2016 at 10:09 AM, Jingyue Wu <jingyue at google.com> wrote:
> Including the fatbin into the host code should be done in the frontend.
>
> On Mon, Mar 14, 2016
2016 Mar 13
2
instrumenting device code with gpucc
Hey Jingyue,
Thanks for being so responsive! I finally figured out a way to resolve the
issue: all I have to do is use `-only-needed` when merging the device
bitcodes with llvm-link.
However, since we actually need to instrument the host code as well, I
encountered another issue when I tried to glue the instrumented host code
and fatbin together. When I only instrumented the device code, I
2016 Mar 10
4
instrumenting device code with gpucc
It's hard to tell what is wrong without a concrete example. E.g., what is
the program you are instrumenting? What is the definition of the hook
function? How did you link that definition with the binary?
One thing that seems suspicious to me is that you may have linked the definition of
_Cool_MemRead_Hook as a host function instead of a device function. AFAIK,
PTX assembly cannot be linked. So, if you
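A hedged sketch of the point above: for the hook to be reachable from instrumented device code, its definition has to be a __device__ function (and the device code has to be built with relocatable device code so it can be linked at all); the parameter list here is only a guess, since the thread does not show the real signature:

#include <cstddef>

// Definition that lives in device code; without __device__ (and without
// relocatable device code at compile/link time) the hook would only exist
// on the host and could never be reached from PTX.
extern "C" __device__ void _Cool_MemRead_Hook(const void *addr, size_t size) {
  (void)addr;   // record the access here; left empty in this sketch
  (void)size;
}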
2020 Nov 17
2
JIT compiling CUDA source code
We have an application that allows the user to compile and execute C++ code
on the fly, using Orc JIT v2, via the LLJIT class. We would like to
extend it to allow the user to provide CUDA source code as well, for GPU
programming, but I am having a hard time figuring out how to do it.
To JIT compile C++ code, we basically do the following:
1. call Driver::BuildCompilation(), which returns a
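For context, a minimal LLJIT sketch of the C++ part described above (not the poster's actual pipeline); it assumes an llvm::Module has already been produced from the C++ source, e.g. via clang's EmitLLVMOnlyAction, and the symbol-lookup API spelling differs slightly across LLVM versions:

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/Support/TargetSelect.h"

// TSM wraps the llvm::Module produced from the user's C++ source.
llvm::Expected<int> runMain(llvm::orc::ThreadSafeModule TSM) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  auto JIT = llvm::orc::LLJITBuilder().create();
  if (!JIT)
    return JIT.takeError();

  if (auto Err = (*JIT)->addIRModule(std::move(TSM)))
    return std::move(Err);

  // On older LLVM, lookup() returns a JITEvaluatedSymbol and the pointer
  // is obtained via getAddress() instead of toPtr().
  auto Sym = (*JIT)->lookup("main");
  if (!Sym)
    return Sym.takeError();

  auto *Main = Sym->toPtr<int (*)()>();
  return Main();
}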
2017 Aug 24
1
Invalid Signature of orc::RTDyldObjectLinkingLayer::NotifyLoadedFtor
Hi all, hi Lang
It's a little late to report issues for release_50, but I just came
across this while porting my JitFromScratch examples to 5.0.
It's only a small detail, but (if I'm not mistaken) the function
signature of RTDyldObjectLinkingLayer::NotifyLoadedFtor is incorrect:
$ grep -h -r -A 1 "using NotifyLoadedFtor"
2017 Apr 24
1
[FFI] [OrcJIT] Status update on C FFI for OrcJIT?
I looked around for the status of OrcJIT FFI support. The last e-mail
thread I could find was this one:
http://lists.llvm.org/pipermail/llvm-dev/2015-February/081679.html
Is OrcJIT now considered stable enough that there can be "official" exposed
C APIs?
If not, what's the standard approach if I
2015 Feb 01
3
[LLVMdev] OrcJIT in LLVM C bindings
Hello,
I was wondering whether someone is already working on exposing the new
OrcJIT APIs through the LLVM-C bindings?
Also, is there a general consensus that C bindings should be added when new
major features land?
Hayden
2020 Jul 10
0
[cfe-dev] [RFC] Moving (parts of) the Cling REPL in Clang
I do not know enough about cling, but I like what you describe very much, and I am particularly intrigued by how your approach could also be applied to ahead-of-time constexpr metaprogramming, which likewise involves incrementally adding declarations to the translation unit.
Dave
> On Jul 9, 2020, at 11:43 PM, JF Bastien via cfe-dev <cfe-dev at lists.llvm.org> wrote:
>
>
2015 Feb 03
2
[LLVMdev] OrcJIT in LLVM C bindings
Thanks, David.
I'd be happy to add the bindings... is there a general way we add them? Or
do you just go through the API and make sensible judgement calls?
On Sun, Feb 1, 2015 at 1:55 PM, David Blaikie <dblaikie at gmail.com> wrote:
>
>
> On Sun, Feb 1, 2015 at 10:58 AM, Hayden Livingston <halivingston at gmail.com
> > wrote:
>
>> Hello,
>>
>> I
2018 Jul 01
2
I've seen that OrcJIT is being overhauled, and MCJIT as well, so what's the plan?
I haven't seen any roadmap or plan for OrcJIT & MCJIT.
Will OrcJIT be stabilized in version 7.0, or in a later version?
Will MCJIT be removed from the source tree, and if so, when?
--
Yours sincerely,
Yonggang Luo
2019 Apr 10
4
Feasibility of cling/llvm interpreter for JIT replacement
Dear Sir/Madam
Our company, 4Js software, has developed SQL database software that
runs on several operating systems: Windows, Linux, and Mac OS X. This
software compiles each SQL statement into a C program that is compiled
"on the fly" and executed by our JIT (Just-In-Time) compiler.
We wanted to port it to Apple's iOS, and spent a lot of time
retargeting the JIT for
2019 May 18
3
Bugzilla OrcJIT Tickets
Hello everyone
A previous thread about OrcJIT brought up bug reports on Bugzilla. A
quick search gives 20+ results:
https://bugs.llvm.org/buglist.cgi?component=OrcJIT&list_id=162232&query_format=advanced&resolution=---
While some of them are obviously outdated (addModuleSet API cleanup
[1]), others may actually be relevant again (Small code model? [2]). If
you reported one of them,
2019 Feb 25
2
LLVM C API OrcJIT
Hello, I've been trying to use LLVM's Orc JIT through the C API for a few days
and I can't get it to work. I'm somewhat frustrated by the lack of
documentation and examples for the C API.
Here's my code: https://hasteb.in/ohexiweb.cpp
I compile it using
clang `llvm-config --cflags --ldflags --libs all` main.c -o main -g
-rdynamic
And it ends up segfaulting at the line where it calls
2019 Aug 16
2
[ORC] [mlir] Dump assembly from OrcJit
+ MLIR dev mailing list, since that's where the OrcJIT I'm using lives.
Thanks for all the details, Lang! What you described is exactly what I’m looking for!
Please, MLIR devs, let me know if this debug feature and the solution Lang describes below are of interest to MLIR. I'll dig more into the details then; it doesn't seem too complicated.
Thanks,
Diego
From: Lang Hames [mailto:lhames at
2015 Mar 17
3
[LLVMdev] How will OrcJIT guarantee thread-safety when a function is asked to be re generated?
I've been playing with OrcJIT a bit, and from the looks of it I can (as
in the previous JIT, I suppose?) ask for a function to be regenerated.
If I've handed the address of the function that LLVM gave me to an external
party, do "I" need to ensure thread-safety?
Or is it safe to ask OrcJIT to regenerate code at that address, and
everything will work magically?
I'm
2020 Nov 19
1
JIT compiling CUDA source code
Sounds like right now you are emitting an LLVM module?
The best strategy is probably to emit a PTX module and then pass
that to the CUDA driver. This is what we do on the Julia side in CUDA.jl.
Nvidia has a somewhat helpful tutorial on this at
https://github.com/NVIDIA/cuda-samples/blob/c4e2869a2becb4b6d9ce5f64914406bf5e239662/Samples/vectorAdd_nvrtc/vectorAdd.cpp
and
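A minimal sketch of the driver-API approach described above: take the PTX text produced by the JIT and hand it to the CUDA driver; the kernel name and launch geometry are placeholders:

#include <cuda.h>

// ptx points at the NUL-terminated PTX text produced by the JIT.
bool launchFromPTX(const char *ptx) {
  CUdevice dev;
  CUcontext ctx;
  CUmodule mod;
  CUfunction fn;

  if (cuInit(0) != CUDA_SUCCESS) return false;
  if (cuDeviceGet(&dev, 0) != CUDA_SUCCESS) return false;
  if (cuCtxCreate(&ctx, 0, dev) != CUDA_SUCCESS) return false;

  // The driver JIT-compiles the PTX here; cuModuleLoadDataEx can pass options.
  if (cuModuleLoadData(&mod, ptx) != CUDA_SUCCESS) return false;
  if (cuModuleGetFunction(&fn, mod, "kernel") != CUDA_SUCCESS) return false;

  if (cuLaunchKernel(fn, /*grid*/ 1, 1, 1, /*block*/ 1, 1, 1,
                     /*sharedMemBytes*/ 0, /*stream*/ nullptr,
                     /*kernelParams*/ nullptr, /*extra*/ nullptr)
        != CUDA_SUCCESS)
    return false;

  return cuCtxSynchronize() == CUDA_SUCCESS;
}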