Hi all,

As I understand it, LLVM's JIT memory manager works by allocating a 16Mb block of memory and generating native code into it. Once that block is exhausted no more functions can be JIT compiled. I'm trying to figure out ways to work around this limitation.

One idea I had was to use that 16Mb block as a scratch area for generating code. Once a method has been compiled (and therefore its size is known) a new block of memory would be allocated and the native code copied into it. The original copy could then be freed, leaving the whole 16Mb block free for the next compilation. I realize this approach is not generically suitable for LLVM, but it might be possible for the application I'm working on because a) a single thread manages all compilations sequentially, and b) the compiled methods don't make direct calls to other methods (so stubs will never be generated).

What I need to know is, is the native code generated by LLVM relocatable such that it could be copied to another location in memory like this? Also, does LLVM do anything to the native code after it is first emitted? Does it try rewriting it, for example?

Cheers,
Gary

--
http://gbenson.net/
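For illustration, here is a minimal C++ sketch of the scratch-then-copy scheme described above. It assumes the emitted bytes really are position-independent, which is exactly what the question asks and what the replies below address, and every name in it is hypothetical rather than part of any LLVM API:

#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <vector>

class ScratchCodeCache {
  std::vector<uint8_t> Scratch;            // reusable 16Mb scratch area
public:
  ScratchCodeCache() : Scratch(16 * 1024 * 1024) {}

  // Buffer the code generator would emit into.
  uint8_t *scratchStart() { return Scratch.data(); }
  size_t scratchSize() const { return Scratch.size(); }

  // Once the final size is known, copy the function into a right-sized
  // block.  A real version would allocate *executable* memory (mmap,
  // VirtualAlloc) and fix up any absolute or PC-relative references.
  uint8_t *finalize(size_t CodeSize) {
    uint8_t *Final = static_cast<uint8_t *>(std::malloc(CodeSize));
    std::memcpy(Final, Scratch.data(), CodeSize);
    return Final;   // the scratch area is free again for the next method
  }
};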
On Thu, Aug 20, 2009 at 6:42 AM, Gary Benson <gbenson at redhat.com> wrote:
> Hi all,
>
> As I understand it, LLVM's JIT memory manager works by allocating a
> 16Mb block of memory and generating native code into it. Once that
> block is exhausted no more functions can be JIT compiled. I'm trying
> to figure out ways to work around this limitation.

Nope, I actually fixed this bug in r76902. :) Now it allocates memory in 64K slabs, and when it runs out of space it allocates another 64K slab.

> One idea I had was to use that 16Mb block as a scratch area for
> generating code. Once a method has been compiled (and therefore its
> size is known) a new block of memory would be allocated and the native
> code copied into it. The original copy could then be freed, leaving
> the whole 16Mb block free for the next compilation. I realize this
> approach is not generically suitable for LLVM, but it might be
> possible for the application I'm working on because a) a single thread
> manages all compilations sequentially, and b) the compiled methods
> don't make direct calls to other methods (so stubs will never be
> generated).
>
> What I need to know is, is the native code generated by LLVM
> relocatable such that it could be copied to another location in memory
> like this? Also, does LLVM do anything to the native code after it is
> first emitted? Does it try rewriting it, for example?

That's a good question, and I wish I knew the answer, because if that were the case the entire memory manager could be dramatically simplified to emit code into a resizeable buffer (like a BinaryObject or std::vector<uint8_t>), copy the code into place (wherever the memory manager wants it to go), and then apply the relocations. I suspect that the answer is that the code is not relocatable, or else why would the API be designed this way? OTOH, it may be that the code was not relocatable when the memory manager was designed.

Reid
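For context, the slab scheme Reid describes amounts to bump-allocating out of a fixed-size block and grabbing a fresh block when the current one fills up. A rough, self-contained sketch of that idea (not the actual JITMemoryManager code, and ignoring requests larger than a single slab):

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

class SlabAllocator {
  static const size_t SlabSize = 64 * 1024;
  std::vector<uint8_t *> Slabs;
  uint8_t *Cur;
  size_t Used;
public:
  SlabAllocator() : Cur(0), Used(SlabSize) {}   // force a slab on first use

  // Bump-allocate; start a new 64K slab when the current one is exhausted.
  uint8_t *allocate(size_t Size) {
    if (Used + Size > SlabSize) {
      Cur = static_cast<uint8_t *>(std::malloc(SlabSize));
      Slabs.push_back(Cur);
      Used = 0;
    }
    uint8_t *Result = Cur + Used;
    Used += Size;
    return Result;
  }

  ~SlabAllocator() {
    for (size_t i = 0; i < Slabs.size(); ++i)
      std::free(Slabs[i]);
  }
};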
On Aug 20, 2009, at 10:02 AM, Reid Kleckner wrote:
>> What I need to know is, is the native code generated by LLVM
>> relocatable such that it could be copied to another location in memory
>> like this? Also, does LLVM do anything to the native code after it is
>> first emitted? Does it try rewriting it, for example?
>
> That's a good question, and I wish I knew the answer, because if that
> were the case the entire memory manager could be dramatically
> simplified to emit code into a resizeable buffer (like a BinaryObject
> or std::vector<uint8_t>), copy the code into place (wherever the
> memory manager wants it to go), and then apply the relocations.

I would really like to get to this point, but the JIT just isn't designed that way. Logically, the output of the backend is machine code bytes + relocations. In practice, we don't have this clean separation right now. I'm hoping that the MC stuff we're doing will help get us in this direction.

-Chris
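The model Chris sketches, machine code bytes plus a separate list of relocations, would let a memory manager place the buffer anywhere and patch it afterwards. A hypothetical illustration of that flow, with an invented record layout and a single 32-bit PC-relative fixup kind standing in for the many relocation kinds a real backend produces:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

struct Reloc {
  size_t Offset;       // where in the code buffer to patch
  uint64_t Target;     // absolute address the patch should resolve to
};

// Copy freshly emitted code into its final home, then resolve 32-bit
// PC-relative relocations against the new address.
void placeAndRelocate(const std::vector<uint8_t> &Code,
                      const std::vector<Reloc> &Relocs,
                      uint8_t *Dest) {
  std::memcpy(Dest, Code.data(), Code.size());
  for (size_t i = 0; i < Relocs.size(); ++i) {
    uint64_t FixupEnd = reinterpret_cast<uint64_t>(Dest + Relocs[i].Offset) + 4;
    int32_t Delta = static_cast<int32_t>(Relocs[i].Target - FixupEnd);
    std::memcpy(Dest + Relocs[i].Offset, &Delta, sizeof(Delta));
  }
}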
Reid Kleckner wrote:
> On Thu, Aug 20, 2009 at 6:42 AM, Gary Benson <gbenson at redhat.com> wrote:
> > As I understand it, LLVM's JIT memory manager works by allocating
> > a 16Mb block of memory and generating native code into it. Once
> > that block is exhausted no more functions can be JIT compiled.
> > I'm trying to figure out ways to work around this limitation.
>
> Nope, I actually fixed this bug in r76902. :) Now it allocates
> memory in 64K slabs, and when it runs out of space it allocates
> another 64K slab.

Oh, that's excellent news! I can forget about this then ;)

Cheers,
Gary

--
http://gbenson.net/