I was simply surprised because some C++ code I translated into LLVM IR ran
significantly slower under the JIT than the original C++ version. The code
in question was meant to implement the "plus" operator in my scripting
language, which behaves differently depending on the types of the objects
being added. I expected it to run faster, since I was eliminating a call
to a C++ function by generating the LLVM IR for what I wanted directly,
essentially inlining the code for the "plus" operator into the code I was
generating/JITing.
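To make that concrete, here is a minimal sketch of the kind of IR I
generate, with invented details: boxed values carrying an i32 type tag,
i32 payloads, and a hypothetical C++ runtime helper rt_add for the generic
case (the actual representation in my language differs). The point is that
the integer fast path is emitted inline instead of behind a call. Header
paths and the FunctionCallee return type assume a recent LLVM; older
releases differ:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Emit "plus": if both operands are tagged as machine ints, add the
    // payloads inline; otherwise fall back to a call into the C++ runtime.
    Value *emitPlus(IRBuilder<> &B, Function *F, Module *M,
                    Value *ATag, Value *AVal, Value *BTag, Value *BVal) {
      LLVMContext &C = B.getContext();
      Value *IntTag = B.getInt32(0); // hypothetical tag value for ints

      Value *BothInts = B.CreateAnd(B.CreateICmpEQ(ATag, IntTag),
                                    B.CreateICmpEQ(BTag, IntTag));

      BasicBlock *Fast = BasicBlock::Create(C, "plus.int", F);
      BasicBlock *Slow = BasicBlock::Create(C, "plus.generic", F);
      BasicBlock *Join = BasicBlock::Create(C, "plus.done", F);
      B.CreateCondBr(BothInts, Fast, Slow);

      B.SetInsertPoint(Fast);               // fast path: plain integer add
      Value *FastSum = B.CreateAdd(AVal, BVal, "sum");
      B.CreateBr(Join);

      B.SetInsertPoint(Slow);               // slow path: call the C++ helper
      FunctionCallee RtAdd = M->getOrInsertFunction(
          "rt_add", B.getInt32Ty(), B.getInt32Ty(), B.getInt32Ty());
      Value *SlowSum = B.CreateCall(RtAdd, {AVal, BVal});
      B.CreateBr(Join);

      B.SetInsertPoint(Join);               // merge the two results
      PHINode *Phi = B.CreatePHI(B.getInt32Ty(), 2, "plus");
      Phi->addIncoming(FastSum, Fast);
      Phi->addIncoming(SlowSum, Slow);
      return Phi;
    }

This is what I mean by "inlining" the operator: in the common case, control
never leaves the generated code.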
Evan Cheng wrote:
>
> On Mar 27, 2009, at 2:47 PM, Nyx wrote:
>
>>
>> Hello,
>>
>> Is there a way to control how optimized the x86 code generated by the
>> JIT is? I mean the actual x86 code, not the LLVM IR (I know about those
>> optimization passes). I would like to make it as optimized as reasonably
>> possible.
>
> Then the default is what you want. By default, all the codegen
> optimizations are being run.
>
> Evan
>
>>
>>
>> Thank you for your time,
>>
>> - Maxime
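For reference, the codegen optimization level can also be requested
explicitly when the execution engine is created. A minimal sketch,
assuming the EngineBuilder interface; header paths and the exact
constructor signature vary across LLVM versions (older releases take a
raw Module*, newer ones a std::unique_ptr<Module>):

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include <string>

    // Create a JIT that compiles with a specific codegen optimization level.
    llvm::ExecutionEngine *createJIT(llvm::Module *M) {
      std::string Err;
      return llvm::EngineBuilder(M)
          .setErrorStr(&Err)                          // reports creation failures
          .setOptLevel(llvm::CodeGenOpt::Aggressive)  // None, Less, Default, Aggressive
          .create();                                  // nullptr on failure; see Err
    }

As Evan says, the default already runs the codegen optimizations, so this
mostly matters if you want to trade code quality for compile time by
dropping to CodeGenOpt::None.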