I am currently trying to use LLVM (version 3.2) to generate code for a cluster where each node will be running a 64-bit X86 chip, but *not* necessarily the same version of the X86 chip throughout the cluster. I am specifying that I want Position-Independent Code produced because the generated machine code may get moved (before it is executed) to a different logical address than where it was built by the JIT compiler. Furthermore, the generated machine code may get moved to a different node in the cluster before it is executed.

Yes, I know that version 3.2 is not the latest and, yes, I know that I should be using MCJIT rather than JIT, but from my reading thus far on version 3.4 and on MCJIT, I don't think it will affect the answer to my question. If I am wrong, I would be happy to be corrected. [When I started the project, version 3.2 was the latest available (and JIT looked easier to use) ... and I haven't yet had sufficient reason to upgrade.]

I have been telling the JIT compiler to use the Default level of code optimization. My problem is that I believe the JIT compiler is doing chip-specific code optimization to make the code run as fast as possible on the chip that is running the compiler. However, the generated machine code may get moved to a different version of the chip when it is moved to a different node in the cluster. I believe I can get generic X86 code generated by using an optimization level of NONE, but that may hurt performance too much.

So, my question is: Is there an option (using either JIT or MCJIT) where I can specify that I want the generated code to be generic enough to run on any version of a 64-bit X86 chip ... without turning off all optimization?

Thanks,
Jim Capps
You can set which CPU model to generate for when you call EngineBuilder::selectTarget. If you're not calling that directly yet, you might have to pull it out of the call to create:

    auto TM = eb.selectTarget(...);
    eb.create(TM);

Hope that helps.

On Mon, Aug 18, 2014 at 4:39 PM, Capps, Jim <James.Capps at hp.com> wrote:
> [...] I believe I can get generic X86 code generated by using an
> optimization level of NONE, but that may hurt performance too much.
> So, my question is: Is there an option (using either JIT or MCJIT) where I
> can specify that I want the generated code to be generic enough to run on
> any version of a 64-bit X86 chip ... without turning off all optimization?
>
> Thanks,
> Jim Capps

_______________________________________________
LLVM Developers mailing list
LLVMdev at cs.uiuc.edu    http://llvm.cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
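[For later readers: below is a minimal sketch of how the suggestion above could look against the LLVM 3.x C++ EngineBuilder interface. EngineBuilder::setMCPU feeds the CPU string into selectTarget, so you do not have to call selectTarget yourself unless you also want to adjust feature attributes or the TargetMachine directly. The function name and the assumption that `M` is an already-populated llvm::Module* are illustrative, not from the original thread.]

```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/Support/TargetSelect.h"
#include <string>

// Build an ExecutionEngine whose code generation targets the baseline
// 64-bit x86 CPU model rather than the auto-detected host CPU, while
// keeping the normal optimization level.
llvm::ExecutionEngine *createGenericX86Engine(llvm::Module *M) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();     // required by MCJIT

  std::string Err;
  llvm::EngineBuilder EB(M);                    // pre-3.6 builder takes a raw Module*
  EB.setErrorStr(&Err);
  EB.setOptLevel(llvm::CodeGenOpt::Default);    // keep optimization on
  EB.setRelocationModel(llvm::Reloc::PIC_);     // position-independent code
  // Pin the CPU model to the generic 64-bit x86 baseline so the emitted
  // code avoids host-specific ISA extensions (AVX and friends).
  EB.setMCPU("x86-64");
  return EB.create();                           // calls selectTarget() internally
}
```

If you do need the TargetMachine yourself (e.g. to pass an explicit feature list), the equivalent is to call EB.selectTarget(...) with your MCPU/MAttrs choices and hand the result to EB.create(TM), as Keno describes.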
Keno,

With your suggestion, I think I have figured out exactly how to do what I need. It will take a day or two of testing before I know for sure whether it is working properly, but it certainly looks promising.

Thanks,
Jim

-----Original Message-----
From: Keno Fischer [mailto:kfischer at college.harvard.edu]
Sent: Monday, August 18, 2014 3:53 PM
To: Capps, Jim
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] Disabling some JIT optimization?

> You can set which CPU model to generate for when you call
> EngineBuilder::selectTarget. If you're not calling that directly yet, you
> might have to pull it out of the call to create:
>
>     auto TM = eb.selectTarget(...);
>     eb.create(TM);
>
> Hope that helps.