I guess this is slightly off-topic, but the post about the JIT and garbage collection made me wonder why LLVM supports JIT compilation at all. It has much smaller scope for optimisation due to the speed requirements, takes more memory, and causes the same work to be repeated over and over for each execution.

What reason is there for anything to use JIT compilation over ahead-of-time compiling to native code?

thanks -mike
Mike Hearn <mike at plan99.net> writes:

> I guess this is slightly off-topic, but the post about the JIT and garbage
> collection made me wonder why LLVM supports JIT compilation at all. It has
> much smaller scope for optimisation due to the speed requirements, takes
> more memory and causes the same work to be repeated over and over for each
> execution.
>
> What reason is there for anything to use JIT compilation over
> ahead-of-time compiling to native code?

Some applications grow and/or change at runtime. For instance, my programming language has macro facilities similar to those in Lisp, which means that you can use the full language at compile time.

This is used in a database server this way: the application starts being compiled and the environment is inspected (database schema, etc.). This affects the code generated by several macros. Finally, the macroexpanded source code is compiled into binary code. The result is a binary application customized for the database it serves.

The database application serves code to clients, also customized depending on the database and on the client. The "static" part of a client is just the compiler plus some bootstrap code that asks the server for source code. The client application can be modified at runtime by the server. The server application can be modified at runtime due to changes in the database schema or in the business rules. Finally, complex client requests are compiled to native code by the server.

-- Oscar
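[As a loose illustration only, not Oscar's actual system: all names below are invented, and his language is Lisp-like rather than C++. The C++ sketch just contrasts a generic, lookup-at-runtime access path with the kind of schema-specialized code a macro might emit once the database has been inspected at compile time.]

    #include <cstddef>
    #include <string>
    #include <vector>

    // Generic path: the column layout is discovered at runtime, so every
    // access pays for a name lookup. (Names are invented for illustration.)
    struct Row    { std::vector<std::string> columns; };
    struct Schema { std::vector<std::string> column_names; };

    std::string get_column_generic(const Schema& s, const Row& r, const char* name) {
        for (std::size_t i = 0; i < s.column_names.size(); ++i)
            if (s.column_names[i] == name)
                return r.columns[i];
        return "";
    }

    // "Macroexpanded" path: the schema was inspected while the application
    // was being compiled, so the generated code hard-codes the column index.
    std::string get_customer_name_specialized(const Row& r) {
        return r.columns[2];  // index 2 baked in from the schema at compile time
    }

The point is only that, when the schema can change and clients are handed freshly generated code, this specialization has to happen at runtime, which is where a JIT (or an embedded compiler) comes in.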
On Mon, 2006-08-07 at 18:17 +0100, Mike Hearn wrote:

> I guess this is slightly off-topic, but the post about the JIT and garbage
> collection made me wonder why LLVM supports JIT compilation at all. It has
> much smaller scope for optimisation due to the speed requirements, takes
> more memory and causes the same work to be repeated over and over for each
> execution.
>
> What reason is there for anything to use JIT compilation over
> ahead-of-time compiling to native code?

A lot of optimizations are based on runtime behavior. Your choices there are either extensive profiling or runtime profiling with JITing.

As an example, let's say you have an indirect function call. Most of the time this call goes to one target. With a JIT, you can expand heavily biased calls to test the function pointer (as a validation) and do a direct call. Then you may be able to inline or IP-constant propagate the call site. The target of that call site might depend on the input data, in which case you cannot do these optimizations statically and profitably, and even profiling may blur the fact that the call is mostly static (since different input data may generate different call targets). Thus you must do runtime profiling and JITing.

Granted, LLVM currently doesn't really do much of this, but the infrastructure is there to do so.

Andrew
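[A rough C++ sketch of the rewrite Andrew describes. The names (hot_target, dispatch_*, callback_t) are made up for illustration, and a real JIT would perform the equivalent transformation on IR or machine code rather than in source.]

    #include <cstdio>

    // Hypothetical callback type and targets; names are illustrative only.
    typedef int (*callback_t)(int);

    static int hot_target(int x)  { return x + 1; }  // the target seen most often
    static int cold_target(int x) { return x * 2; }  // a rarely seen target

    // Before: a plain indirect call the compiler cannot inline.
    int dispatch_generic(callback_t fp, int x) {
        return fp(x);
    }

    // After: the JIT, having observed that fp is almost always hot_target,
    // guards a direct (and therefore inlinable) call with a pointer check.
    int dispatch_specialized(callback_t fp, int x) {
        if (fp == hot_target)
            return hot_target(x);  // direct call; can be inlined / constant-propagated
        return fp(x);              // fallback keeps other targets correct
    }

    int main() {
        printf("%d %d\n", dispatch_specialized(hot_target, 41),
                          dispatch_specialized(cold_target, 21));
        return 0;
    }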
On Mon, 07 Aug 2006 13:56:48 -0500, Andrew Lenharth wrote:

> Granted, LLVM currently doesn't really do much of this, but the
> infrastructure is there to do so.

Right, but you can get this by doing profile-directed optimisation during development, so end users/production systems don't have the overhead.

It also seems to me that at some point the bookkeeping and analysis overhead for these optimisations would reduce the performance of the program by more than they improve it. E.g. keeping track of which calls are candidates for inlining may increase memory usage, increasing swapping or decreasing cache utilisation, and so losing more performance than it gains.

thanks -mike
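[For a sense of what that bookkeeping looks like, here is a rough sketch of per-call-site value profiling of indirect-call targets; the names are invented and this is not LLVM's actual profiling machinery. Each instrumented site carries its own small table, which is the memory and cache cost Mike is pointing at.]

    #include <cstdint>
    #include <cstdio>

    // Per-call-site profile: a tiny table of observed call targets and counts.
    // Every instrumented indirect call drags one of these around; that is the
    // bookkeeping overhead under discussion. Names are invented for illustration.
    struct CallSiteProfile {
        static const int kSlots = 4;
        void*         targets[kSlots];
        std::uint64_t counts[kSlots];

        void record(void* target) {
            for (int i = 0; i < kSlots; ++i) {
                if (targets[i] == target) { ++counts[i]; return; }
                if (targets[i] == 0)      { targets[i] = target; counts[i] = 1; return; }
            }
            // Table full: further targets are simply dropped in this sketch.
        }

        // A JIT would consult this to decide whether one target dominates
        // enough to justify speculative promotion of the call.
        void* dominant_target(std::uint64_t threshold_percent) const {
            std::uint64_t total = 0, best = 0; void* best_t = 0;
            for (int i = 0; i < kSlots; ++i) {
                total += counts[i];
                if (counts[i] > best) { best = counts[i]; best_t = targets[i]; }
            }
            if (total == 0) return 0;
            return (best * 100 >= total * threshold_percent) ? best_t : 0;
        }
    };

    static int f(int x) { return x; }
    static int g(int x) { return -x; }

    int main() {
        CallSiteProfile p = {};  // zero-initialize the table
        for (int i = 0; i < 95; ++i) p.record((void*)&f);
        for (int i = 0; i < 5;  ++i) p.record((void*)&g);
        printf("promote? %s\n", p.dominant_target(90) == (void*)&f ? "yes" : "no");
        return 0;
    }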