similar to: [LLVMdev] Reminder: Please switch to MCJIT, as the old JIT will be removed soon.

Displaying 20 results from an estimated 10000 matches similar to: "[LLVMdev] Reminder: Please switch to MCJIT, as the old JIT will be removed soon."

2014 Jul 29
2
[LLVMdev] Reminder: Please switch to MCJIT, as the old JIT will be removed soon.
Hi Keno, Could you give a short high-level overview of the way Julia works now with MCJIT instead of the old JIT: What I gather so far... * Compiled IR functions are emitted to a shadow module. * Any used function is cloned into its own new module and the module is added to MCJIT. * Called functions or global vars are only declared in that module. * Modules are never removed, meaning "old"
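[Editor's note: a minimal sketch of the per-function cloning scheme described above, against a roughly LLVM 3.6-era C++ API. The function name, the ownership handling, and the choice to declare every shadow-module global are illustrative assumptions, not Julia's actual code.]

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Utils/Cloning.h"
    #include "llvm/Transforms/Utils/ValueMapper.h"
    #include <memory>

    using namespace llvm;

    // Clone one function out of the shadow module into a fresh module, declare
    // (but never define) everything else it might reference, and hand the new
    // module to MCJIT. References resolve through MCJIT's symbol lookup at
    // finalize time.
    Function *jitOneFunction(Function *Src, Module &ShadowM,
                             ExecutionEngine &EE, LLVMContext &Ctx) {
      std::unique_ptr<Module> Owner(new Module("jit." + Src->getName().str(), Ctx));
      Module *M = Owner.get();

      ValueToValueMapTy VMap;

      // Declarations only for everything that lives in the shadow module.
      for (Module::iterator FI = ShadowM.begin(), FE = ShadowM.end(); FI != FE; ++FI)
        if (&*FI != Src)
          VMap[&*FI] = Function::Create(FI->getFunctionType(),
                                        Function::ExternalLinkage, FI->getName(), M);
      for (Module::global_iterator GI = ShadowM.global_begin(),
                                   GE = ShadowM.global_end(); GI != GE; ++GI)
        VMap[&*GI] = new GlobalVariable(*M, GI->getType()->getElementType(),
                                        GI->isConstant(), GlobalValue::ExternalLinkage,
                                        /*Initializer=*/nullptr, GI->getName());

      // Clone the definition of Src itself into the new module.
      Function *NewF = Function::Create(Src->getFunctionType(), Src->getLinkage(),
                                        Src->getName(), M);
      Function::arg_iterator NewAI = NewF->arg_begin();
      for (Function::arg_iterator AI = Src->arg_begin(), AE = Src->arg_end();
           AI != AE; ++AI, ++NewAI) {
        NewAI->setName(AI->getName());
        VMap[&*AI] = &*NewAI;
      }
      SmallVector<ReturnInst *, 4> Returns;
      CloneFunctionInto(NewF, Src, VMap, /*ModuleLevelChanges=*/true, Returns);

      EE.addModule(std::move(Owner));  // MCJIT owns the module from here on
      EE.finalizeObject();             // emit and apply relocations now
      return NewF;
    }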
2013 Dec 10
2
[LLVMdev] [RFC] MCJIT usage models
Hi Andy, My use case is quite similar to what Keno described. I am using clang + JIT to dynamically compile C++ functions generated in response to user interaction. Generated functions may be unloaded or modified. I would like to break down the old JIT code into three major parts. 1) The old JIT has its own code emitter, which duplicates code from lib/MC and does not generate debug info and
2013 Dec 10
0
[LLVMdev] [RFC] MCJIT usage models
With Julia, we're obviously very much in the first use case. As you know, we pretty much have a working version of Julia on top of MCJIT, but there are still a few kinks to work out, which I'll talk about in a separate email. One thing which I remember you asking at the BOF is what MCJIT currently can't do well that the old JIT did, so I'd like to offer up an example. With the
2014 Jun 02
2
[LLVMdev] [lldb-dev] MCJIT Mach-O JIT debugging
I didn't get to work on this more last week, but I'll look at incorporating that suggestion. The other question of course is how to do this in LLDB. Right now, what I'm doing is going through and adjusting the load address of every leaf in the section tree. That basically works and gets me backtraces with the correct function names and the ability to set breakpoints at functions in
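[Editor's note: a rough sketch of that "adjust every leaf section" approach using the LLDB SB API in C++. The single Slide parameter is a simplifying assumption; a real JIT loader would hand LLDB a distinct target address per section.]

    #include "lldb/API/SBModule.h"
    #include "lldb/API/SBSection.h"
    #include "lldb/API/SBTarget.h"

    // Recurse through the section tree and set a load address for every leaf.
    static void loadLeafSections(lldb::SBTarget &Target, lldb::SBSection Sec,
                                 lldb::addr_t Slide) {
      size_t N = Sec.GetNumSubSections();
      if (N == 0) {
        // Leaf: place it at its file address plus a slide chosen by the JIT.
        Target.SetSectionLoadAddress(Sec, Sec.GetFileAddress() + Slide);
        return;
      }
      for (size_t I = 0; I < N; ++I)
        loadLeafSections(Target, Sec.GetSubSectionAtIndex(I), Slide);
    }

    // Walk all top-level sections of a module (e.g. the JITed Mach-O image).
    void loadModuleSections(lldb::SBTarget &Target, lldb::SBModule Mod,
                            lldb::addr_t Slide) {
      for (size_t I = 0, E = Mod.GetNumSections(); I != E; ++I)
        loadLeafSections(Target, Mod.GetSectionAtIndex(I), Slide);
    }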
2016 Feb 05
2
MCJit Runtime Performance
Hi Keno, I am talking about runtime: the performance of the generated machine code, not the time it takes to lower the IR to machine code. We typically only JIT once (taking a few secs) and then run the generated machine code for hours, so the JIT time (IR -> machine code) doesn't impact us. Cheers Morten On 05/02/16 15:58, Keno Fischer wrote: > Actually, reading over all of this
2014 Jun 02
2
[LLVMdev] [lldb-dev] MCJIT Mach-O JIT debugging
We don't currently apply any relocations (that I know of) for debug info in LLDB. > On Jun 2, 2014, at 12:35 PM, Keno Fischer <kfischer at college.harvard.edu> wrote: > > I think I'm getting closer. The debug_info section is being relocated correctly (I think): > > 0x00000000: Compile Unit: length = 0x00000045 version = 0x0003 abbr_offset = 0x00000000 addr_size =
2016 Feb 05
6
MCJit Runtime Performance
----- Original Message ----- > From: "Keno Fischer via llvm-dev" <llvm-dev at lists.llvm.org> > To: "Morten Brodersen" <Morten.Brodersen at constrainttec.com> > Cc: "llvm-dev" <llvm-dev at lists.llvm.org> > Sent: Thursday, February 4, 2016 6:05:29 PM > Subject: Re: [llvm-dev] MCJit Runtime Performance > > > > Yes,
2016 Feb 05
3
MCJit Runtime Performance
Hi All, We recently upgraded a number of applications from LLVM 3.5.2 (old JIT) to LLVM 3.7.1 (MCJit). We made the minimum changes needed for the switch (no changes to the IR generated or the IR optimizations applied). The resulting code passes all tests (8000+). However, the runtime performance dropped significantly: 30% to 40% for all applications. The applications I am talking about
2014 Feb 01
2
[LLVMdev] Weird msan problem
I have verified that both TLS implementations indeed find the same area of memory. Anything else I could look for? On Tue, Jan 28, 2014 at 4:28 PM, Keno Fischer <kfischer at college.harvard.edu> wrote: > Yes, both JIT code and the native runtime are instrumented. I am under the > impression that the C library should guarantee that from the way the > relocations are
2016 Feb 05
2
MCJit Runtime Performance
Hi Keno, Thanks for the fast ISel suggestion. Here are the results (for a small but representative run): LLVM 3.5.2 (old JIT): 4m44s LLVM 3.7.1 (MCJit) no fast ISel: 7m31s LLVM 3.7.1 (MCJit) fast ISel: 7m39s So not much of a difference, unfortunately. On 05/02/16 11:05, Keno Fischer wrote: > Yes, unfortunately, this is very much known. Over in the julia > project, we've recently
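[Editor's note: the fast-ISel toggle referred to above is a TargetOptions flag handed to EngineBuilder. A minimal sketch against the LLVM 3.7-era API; the function name and error handling are illustrative.]

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"   // links in the MCJIT implementation
    #include "llvm/IR/Module.h"
    #include "llvm/Target/TargetOptions.h"
    #include <memory>
    #include <string>

    // Build an MCJIT engine with fast instruction selection toggled explicitly.
    // (InitializeNativeTarget*() must have been called beforehand.)
    llvm::ExecutionEngine *buildEngine(std::unique_ptr<llvm::Module> M,
                                       bool UseFastISel) {
      llvm::TargetOptions Opts;
      Opts.EnableFastISel = UseFastISel;   // false keeps the SelectionDAG selector
      std::string Err;
      llvm::ExecutionEngine *EE = llvm::EngineBuilder(std::move(M))
                                      .setErrorStr(&Err)
                                      .setTargetOptions(Opts)
                                      .setEngineKind(llvm::EngineKind::JIT)
                                      .create();
      return EE;  // nullptr on failure; Err then holds the reason
    }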
2014 Feb 02
2
[LLVMdev] Weird msan problem
How is ccall() implemented? If it manually sets up a stack frame, then it also needs to store argument shadow values in paramtls. I don't think there is an overflow, unless you have a _lot_ of arguments in a function call. On Sun, Feb 2, 2014 at 9:26 AM, Keno Fischer <kfischer at college.harvard.edu> wrote: > Also, I was looking at the instrumented LLVM code and I noticed that the
2016 Feb 05
2
MCJit Runtime Performance
On 4 February 2016 at 22:48, Morten Brodersen via llvm-dev <llvm-dev at lists.llvm.org> wrote: > Hi Rafael, > > Not easily (llc). > > Is there a way to make MCJit not use the large code model when JIT'ing? > I think Davide started adding support for the small code model. Cheers, Rafael
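[Editor's note: the knob being asked about here is EngineBuilder::setCodeModel. Whether MCJIT actually honors the small model at a given LLVM version depends on the RuntimeDyld support Rafael mentions, so treat this as a hedged sketch rather than a guaranteed fix.]

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/CodeGen.h"
    #include <memory>

    // Request the small code model when constructing the MCJIT engine.
    llvm::ExecutionEngine *buildSmallCodeModelEngine(std::unique_ptr<llvm::Module> M) {
      return llvm::EngineBuilder(std::move(M))
          .setCodeModel(llvm::CodeModel::Small)   // MCJIT otherwise tends to pick a larger model
          .setEngineKind(llvm::EngineKind::JIT)
          .create();
    }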
2016 Feb 05
4
MCJit Runtime Performance
Hi Morten, Something else just occurred to me: can you share your EngineBuilder configuration lines? (http://llvm.org/docs/doxygen/html/classllvm_1_1EngineBuilder.html) In particular, are you explicitly setting the optimization level? The old JIT may have had a different default. - Lang. > On Feb 4, 2016, at 10:54 PM, Jim Grosbach via llvm-dev <llvm-dev at
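[Editor's note: the optimization-level knob Lang asks about is also on EngineBuilder. A minimal sketch (3.7-era API) of setting it explicitly, since the default may not match what the old JIT used; the function name is illustrative.]

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/CodeGen.h"
    #include <memory>

    // Ask MCJIT's backend for aggressive codegen rather than whatever the
    // EngineBuilder default happens to be.
    llvm::ExecutionEngine *buildAggressiveEngine(std::unique_ptr<llvm::Module> M) {
      return llvm::EngineBuilder(std::move(M))
          .setOptLevel(llvm::CodeGenOpt::Aggressive)   // -O3-style instruction selection
          .setEngineKind(llvm::EngineKind::JIT)
          .create();
    }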
2013 Dec 09
8
[LLVMdev] [RFC] MCJIT usage models
Below is an outline of various usage models for MCJIT that I put together based on conversations at last month's LLVM Developer Meeting. If you're using or thinking about using MCJIT and your use case doesn't seem to fit in one of the categories below then either I didn't talk to you or I didn't understand what you're doing. In any case, I'd like to see this get
2014 Feb 07
2
[LLVMdev] Weird msan problem
Yes, it would be great to get that fixed. On Wed, Feb 5, 2014 at 4:09 PM, Evgeniy Stepanov <eugeni.stepanov at gmail.com> wrote: > On Thu, Feb 6, 2014 at 12:21 AM, Keno Fischer > <kfischer at college.harvard.edu> wrote: > > Looks like when you materialize the stores, you should check the size of > the store and emit an appropriate amount of stores to the
2014 Jan 28
2
[LLVMdev] Weird msan problem
I assume there are transitions between JITted code and native helper functions. How are you handling them? Are native functions MSan-instrumented? MSan is passing shadow across function calls in TLS slots. Does your TLS implementation guarantee that accesses to __msan_param_tls from JITted and from native code map to the same memory? On Mon, Jan 27, 2014 at 11:36 PM, Evgeniy Stepanov
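[Editor's note: to make the shadow-passing contract concrete, here is a small illustrative C++ sketch of what MSan-instrumented code effectively does at a call boundary. __msan_param_tls is the real symbol exported by the MSan runtime; the helper function, the argument, and the explicit stores are illustrative, and the snippet only links in an MSan-instrumented build.]

    #include <cstdint>

    // Exported by the MSan runtime (compiler-rt); both native and JITted code
    // must resolve this to the same thread-local array.
    extern "C" __thread uint64_t __msan_param_tls[];

    extern "C" int native_helper(int b);   // an MSan-instrumented native function

    int call_from_jit(int b, uint64_t shadow_of_b) {
      // Caller side: the shadow of the first argument is stored at offset 0
      // before the call...
      __msan_param_tls[0] = shadow_of_b;
      // ...and native_helper's instrumented prologue reloads offset 0 as the
      // shadow of its parameter.
      return native_helper(b);
    }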
2014 Feb 03
2
[LLVMdev] Weird msan problem
The code for ccall looks right. Sounds like you have a very small range of instructions where an uninitialized value appears. You could try debugging at asm level. Shadow for b should be passed at offset 0 in __msan_param_tls. MSan can propagate shadow through arithmetic and even some logic operations (like select). It could be that b is clean on function entry, but then something uninitialized
2016 Feb 05
2
MCJit Runtime Performance
Hi Lang, > That suggests an optimization quality issue, rather than compile-time overhead Yes, that makes sense. The long-running applications (6+ hours) JIT the rules once (taking a few seconds) and then run the generated machine code for hours, with no additional JIT'ing. > if we can configure the CodeGen pipeline properly we can get the performance back to the same level as
2014 Feb 05
2
[LLVMdev] Weird msan problem
Looks like when you materialize the stores, you should check the size of the store and emit an appropriate number of stores to the origin shadow (or just a memset intrinsic?). On Wed, Feb 5, 2014 at 2:13 PM, Keno Fischer <kfischer at college.harvard.edu> wrote: > The @entry stuff is just a gdb artifact. I've been tracking this back a > little further, and it seems there's
2013 Jun 04
0
[LLVMdev] MCJIT and Kaleidoscope Tutorial
Hi Dmitri, You might want to try replacing the call to JMM->invalidateInstructionCache() with a call to TheExecutionEngine->finalizeObject(). If you are getting a non-NULL pointer from getPointerToFunction but it crashes when you try to call it, that is most likely because the memory for the generated code has not been marked as executable. That happens inside finalizeObject, which also
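[Editor's note: for readers following along with the tutorial, the suggested flow looks roughly like this. The names follow the Kaleidoscope chapters, and the double() signature matches its anonymous top-level expressions.]

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/IR/Function.h"
    #include <cstdint>
    #include <cstdio>

    // Finalize the object, then fetch and call an anonymous `double ()` expression.
    void runTopLevelExpr(llvm::ExecutionEngine *EE, llvm::Function *F) {
      EE->finalizeObject();                      // emit, relocate, mark code pages executable
      void *FPtr = EE->getPointerToFunction(F);  // pointer is only safe to call after finalizeObject()
      double (*FP)() = (double (*)())(intptr_t)FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }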