On 4/25/06, Archie Cobbs <archie at dellroad.org> wrote:
> Alkis Evlogimenos wrote:
> > On 4/25/06, Archie Cobbs <archie at dellroad.org> wrote:
> >> Motivation: Java's "first active use" requirement for class initialization.
> >> When invoking a static method, it's possible that a class may need to
> >> be initialized. However, when invoking an instance method, that's not
> >> possible.
> >>
> >> Perhaps there should be a way in LLVM to specify predicates (or at least
> >> properties of global variables and parameters) that are known to be true
> >> at the start of each function... ?
> >
> > I think this will end up being the same as the null pointer trapping
> > instruction optimization. The implementation will very likely involve
> > some pointer to the description of the class. To make this fast, this
> > pointer will be null if the class is not loaded and you trap when you
> > try to use it and perform initialization. So in the end the same
> > optimization pass that was used for successive field accesses can be
> > used for class initialization as well.
>
> If that were the implementation then yes, that could work. But using
> a null pointer like this probably wouldn't be the case. In Java you have
> to load a class before you initialize it, so the pointer to the type
> structure will already be non-null.

That is why I said if you want it to be fast :-). My point was that if
you want this to be fast you need to find a way to make it trap when a
class is not initialized. If you employ the method you mention below
for JCVM then you need to perform optimizations to simplify the
conditionals.

> In JCVM for example, there is a bit in type->flags that determines
> whether the class is initialized or not. This bit has to be checked
> before every static method invocation or static field access. You could
> reserve an entire byte instead of a bit, but I don't know if that would
> make it any easier to do this optimization.
> ------
>
> I'm not entirely convinced (or understanding) how the "no annotations"
> approach is supposed to work. For example, for optimizing away Java's
> "active use" checks as discussed above. How specifically does this
> optimization get done? Saying that the implementation will "likely" use
> a null pointer is not an answer because, what if the implementation
> doesn't use a null pointer? I.e., my question is the more general one:
> how do optimizations that are specific to the front-end language get
> done? How does the front-end "secret knowledge" get passed through
> somehow so it can be used for optimization purposes?
>
> Apologies for sounding skeptical, I'm just trying to nail down an
> answer to a kind of philosophical question.
>
> ------
>
> Another question: does LLVM know about or handle signal frames? What
> if code wants to unwind across a signal frame? This is another thing
> that would be required for Java if e.g. you wanted to detect null
> pointer access via signals. Note setjmp/longjmp works OK across signal
> frames.
>
> Thanks,
> -Archie
>
> __________________________________________________________________________
> Archie Cobbs * CTO, Awarix * http://www.awarix.com
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

--
Alkis
Alkis Evlogimenos wrote:
> On 4/25/06, Archie Cobbs <archie at dellroad.org> wrote:
>> Alkis Evlogimenos wrote:
>>> On 4/25/06, Archie Cobbs <archie at dellroad.org> wrote:
>>>> Motivation: Java's "first active use" requirement for class initialization.
>>>> When invoking a static method, it's possible that a class may need to
>>>> be initialized. However, when invoking an instance method, that's not
>>>> possible.
>>>>
>>>> Perhaps there should be a way in LLVM to specify predicates (or at least
>>>> properties of global variables and parameters) that are known to be true
>>>> at the start of each function... ?
>>>
>>> I think this will end up being the same as the null pointer trapping
>>> instruction optimization. The implementation will very likely involve
>>> some pointer to the description of the class. To make this fast, this
>>> pointer will be null if the class is not loaded and you trap when you
>>> try to use it and perform initialization. So in the end the same
>>> optimization pass that was used for successive field accesses can be
>>> used for class initialization as well.
>>
>> If that were the implementation then yes, that could work. But using
>> a null pointer like this probably wouldn't be the case. In Java you have
>> to load a class before you initialize it, so the pointer to the type
>> structure will already be non-null.
>
> That is why I said if you want it to be fast :-). My point was that if
> you want this to be fast you need to find a way to make it trap when a
> class is not initialized. If you employ the method you mention below
> for JCVM then you need to perform optimizations to simplify the
> conditionals.

I get it. My point however is larger than just this one example. You
can't say "just use a null pointer" for every possible optimization
based on front-end information. Maybe that happens to work for active
class checks, but it's not a general answer.
Requoting myself:

> I.e., my question is the more general one:
> how do optimizations that are specific to the front-end language get
> done? How does the front-end "secret knowledge" get passed through
> somehow so it can be used for optimization purposes?

-Archie

__________________________________________________________________________
Archie Cobbs * CTO, Awarix * http://www.awarix.com
On Wed, 2006-04-26 at 09:01 -0500, Archie Cobbs wrote:
> Requoting myself:
>
> > I.e., my question is the more general one:
> > how do optimizations that are specific to the front-end language get
> > done? How does the front-end "secret knowledge" get passed through
> > somehow so it can be used for optimization purposes?
>
> -Archie

Archie,

The quick answer is that it doesn't. The front end is responsible for
having its own AST (higher-level representation) and running its own
optimizations on that. From there you generate the LLVM intermediate
representation (IR) and run on that whatever LLVM optimization passes
are appropriate for your language and the level of optimization you
want to get to. The "secret knowledge" is retained by the language's
front end.

However, your front end is in control of two things: what LLVM IR gets
generated, and what passes get run on it. You can create your own LLVM
passes to "clean up" things that you generate (assuming there's a
pattern).

We have tossed around a few ideas about how to retain front-end
information in the bytecode. The current Annotation/Annotable construct
in 1.7 is scheduled for removal in 1.8. There are numerous problems
with it. One option is to just leave it up to the front end. Another
option is to allow a "blob" to be attached to the end of a bytecode
file.

On another front, you might be interested in http://hlvm.org/ where a
few interested LLVM developers are thinking about just these kinds of
things and ways to bring high level support to the excellent low level
framework that LLVM provides. Note: this effort has just begun, so
don't expect to find much there for another few weeks.

Reid.
On Wed, 26 Apr 2006, Archie Cobbs wrote:
> Requoting myself:
>
>> I.e., my question is the more general one:
>> how do optimizations that are specific to the front-end language get
>> done? How does the front-end "secret knowledge" get passed through
>> somehow so it can be used for optimization purposes?

The general answer is: language-specific intrinsics or recognized
run-time API functions.

-Chris

--
http://nondot.org/sabre/ http://llvm.org/