OK, I have managed to compile and run using lli (the JIT) as follows, but I
don't get an assembly file?
My sum-main.c file is:
#include <stdio.h>
#include <stdlib.h>

int sum(int a, int b) {
    return a + b;
}

int main(int argc, char** argv) {
    printf("sum: %d\n", sum(atoi(argv[1]), atoi(argv[2])) +
                        sum(atoi(argv[1]), atoi(argv[2])));
    return 0;
}
And I used the following steps to compile and run it:

clang -S -emit-llvm sum-main.c -o sum-main.ll
lli sum-main.ll 5 2

It gives 7, so I believe it is compiling at run time. Now, how do I obtain the
assembly file generated at run time, and how do I do detailed debugging of
this code?

Please help.
On Wed, Aug 16, 2017 at 10:49 PM, hameeza ahmed <hahmed2305 at gmail.com>
wrote:
> Hello,
>
> Can someone point me to some good tutorials for JIT in LLVM?
>
> My understanding of JIT is that it is like llc: it takes optimized bitcode
> (generated via opt) and compiles it at run time instead of statically. For
> example, I have a loop which adds two user-input numbers, so my .cpp file
> becomes:
>
> #include <stdio.h>
>
> int main()
> {
>     int num1[1000], num2[1000];
>
>     // take input into these arrays at run time using scanf
>     for (int i = 0; i < 1000; i++)
>         scanf("%d", &num1[i]);
>     for (int i = 0; i < 1000; i++)
>         scanf("%d", &num2[i]);
>
>     for (int i = 0; i < 1000; i++)
>     {
>         num1[i] += num2[i];
>     }
>     return 0;
> }
>
> So if I pass this through opt for auto-vectorization with vector width = 32,
> I get optimized IR. Then, instead of passing the IR through llc, I pass it
> through the JIT (lli) and get run-time compilation. At run time my
> vectorized IR (<32 x i32>) should emit vector assembly, something like
> AVX/SIMD instructions.
>
> Am I right? Please guide me.
>
> Thank You
>
>
>
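As a rough sketch of the pipeline described above (the opt flag spellings are
taken from the loop vectorizer's documented options and the file names are
made up, so treat the details as assumptions):

clang -O1 -S -emit-llvm loop.cpp -o loop.ll                            # C++ -> LLVM IR
opt -loop-vectorize -force-vector-width=32 -S loop.ll -o loop.vec.ll   # vectorize
llc loop.vec.ll -o loop.vec.s    # static path: writes an assembly file
lli loop.vec.ll                  # JIT path: compiles in memory and runs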
On 16 August 2017 at 12:05, hameeza ahmed via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> OK, I have managed to compile and run using lli (the JIT) as follows, but I
> don't get an assembly file?

lli doesn't produce assembly files; that's llc's job. lli creates an object
file in memory and runs a function from it.

> Now, how do I obtain the assembly file generated at run time, and how do I
> do detailed debugging of this code?

To debug it I'd put a breakpoint where lli.cpp actually starts executing the
code (it looks like a call to "runFunctionAsMain") and keep going from there.

Cheers.

Tim.
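As a rough sketch of both suggestions (the output file name and the exact
breakpoint symbol are assumptions; breaking inside lli needs an lli built
with debug symbols):

# static path: llc turns the same IR into an assembly file on disk
llc sum-main.ll -o sum-main.s

# debugging lli itself: break where it starts running the JITed main()
# (runFunctionAsMain is a method on llvm::ExecutionEngine)
gdb --args lli sum-main.ll 5 2
(gdb) break llvm::ExecutionEngine::runFunctionAsMain
(gdb) run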
Adding llvm-dev back...

On 16 August 2017 at 13:50, hameeza ahmed <hahmed2305 at gmail.com> wrote:
> Isn't there any way to check the assembly? For example, if my opt does loop
> vectorization, then the backend should emit AVX instructions. Is the
> instruction generation / mapping procedure the same for lli as for llc?

There's a -debug option that'll print out the instructions in some form. But
printing its output isn't what lli is designed for; that's llc's job.

> Does lli also use the same X86RegisterInfo.td, X86InstrInfo.td and
> X86ISelLowering.cpp files for code emission? And does it follow the same
> complete backend chain of instruction selection, register allocation,
> instruction scheduling and code emission?

Yes to both of those.

Cheers.

Tim.
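A sketch of two ways to actually see the generated instructions (the flags are
standard llc/lli options, but -debug is only available in assertion-enabled
builds, and the target features and file names here are assumptions):

# JIT path: dump instruction-selection/codegen debug output while lli runs
# (only works if LLVM was built with assertions enabled)
lli -debug sum-main.ll 5 2

# static path: same backend, but the result goes to a file you can read
llc -O3 -mattr=+avx2 loop.vec.ll -o loop.vec.s
grep vpadd loop.vec.s    # look for AVX2 integer adds, e.g. vpaddd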
Hi Hameeza,

> On 16 August 2017 at 12:05, hameeza ahmed via llvm-dev wrote:
>> Now, how do I obtain the assembly file generated at run time, and how do I
>> do detailed debugging of this code?

Debugging JITed code can be hairy. I think at the moment it's only supported
for ELF object files on Linux. If you really want to do that, you could follow
this guide: https://llvm.org/docs/DebuggingJITedCode.html

It should work if you pass -jit-kind=mcjit (the guide mentions the old flag
-use-mcjit) to use the "old" monolithic MCJIT. For an ORC-based JIT you need
to implement the GDB JIT Interface yourself. I have an example for this on
GitHub:
https://github.com/weliveindetail/JitFromScratch/tree/jit-debug/gdb-interface

There is also another example that shows how to implement your own
optimization chain in an ORC JIT and dump raw vs. optimized IR code:
https://github.com/weliveindetail/JitFromScratch/tree/jit-optimization

Cheers,
Stefan

--
https://weliveindetail.github.io/blog/
https://cryptup.org/pub/stefan.graenitz at gmail.com
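Following the guide linked above, a minimal session might look like this (a
sketch, assuming a Linux/ELF setup and an lli that accepts -jit-kind=mcjit;
the pending breakpoint resolves once MCJIT registers the object through the
GDB JIT interface):

clang -g -S -emit-llvm sum-main.c -o sum-main.ll   # keep debug info in the IR
gdb --args lli -jit-kind=mcjit sum-main.ll 5 2
(gdb) break sum    # not defined yet; let gdb make the breakpoint pending
(gdb) run          # it resolves when the JITed object is registered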