Displaying 20 results from an estimated 10000 matches similar to: "LLVM and XCode 7.0.0"
2013 Dec 06
2
[LLVMdev] PTX generation examples?
OK, fine -- an example of MCJIT that sets up for PTX JIT would also be helpful.
On Dec 6, 2013, at 12:32 PM, Eli Bendersky <eliben at google.com> wrote:
>
> You'll have to switch to MCJIT for this purpose. Legacy JIT doesn't emit PTX.
>
> Eli
--
Larry Gritz
lg at larrygritz.com
2011 Nov 10
3
[LLVMdev] Optimization passes
Is there a succinct way I can get the full list of which optimization passes are applied, and in what order, for standard clang -O1, -O2, -O3?
--
Larry Gritz
lg at larrygritz.com
2013 Dec 09
1
[LLVMdev] PTX generation examples?
Ah, that's helpful. I knew that I'd need to end up with PTX as text, not a true binary, but I would have figured that it would come out of MCJIT. Thanks for helping to steer me away from the wrong trail.
OK, one more question: Can anybody clarify the pros and cons of generating the PTX through the standard LLVM distro, versus using the "libnvvm" that comes with the CUDA SDK?
2016 Mar 08
2
Deleting function IR after codegen
Thanks for the pointer, it's always helpful to be able to see how another project solved similar problems.
> On Mar 8, 2016, at 11:24 AM, Andy Ayers <andya at microsoft.com> wrote:
>
> FWIW, LLILC (https://github.com/dotnet/llilc) uses MCJIT with a custom memory manager to hold onto the binary bits and discard the rest.
>
> As far as I know it doesn't leak, though
2013 Dec 09
0
[LLVMdev] PTX generation examples?
There is no MCJIT support for PTX at the moment (mainly because PTX does
not have a binary format, and is not machine code per se).
To generate PTX at run-time, you just set up a standard codegen pass
manager, just as you would for an off-line compiler. The output will be a
string buffer that contains the PTX, which you can load into the CUDA
runtime.
As for determining if PTX support is compiled
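A minimal sketch of that codegen setup, assuming a reasonably recent LLVM build with the NVPTX backend enabled (header locations and the addPassesToEmitFile signature drift between releases; the triple and CPU strings here are illustrative):

#include "llvm/ADT/SmallString.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/MC/TargetRegistry.h"   // llvm/Support/TargetRegistry.h in older releases
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"
#include <memory>
#include <string>

// Emit PTX for a module as a text buffer that can be handed to the CUDA
// driver API. Assumes the NVPTX backend was built into this copy of LLVM.
std::string emitPTX(llvm::Module &M) {
  LLVMInitializeNVPTXTargetInfo();
  LLVMInitializeNVPTXTarget();
  LLVMInitializeNVPTXTargetMC();
  LLVMInitializeNVPTXAsmPrinter();

  std::string Err;
  const std::string Triple = "nvptx64-nvidia-cuda";
  const llvm::Target *T = llvm::TargetRegistry::lookupTarget(Triple, Err);
  std::unique_ptr<llvm::TargetMachine> TM(
      T->createTargetMachine(Triple, /*CPU=*/"sm_35", /*Features=*/"", {},
                             llvm::Reloc::PIC_));
  M.setTargetTriple(Triple);
  M.setDataLayout(TM->createDataLayout());

  llvm::SmallString<0> PTX;
  llvm::raw_svector_ostream OS(PTX);
  llvm::legacy::PassManager PM;
  // PTX is textual, so ask the target for "assembly" output
  // (CodeGenFileType::AssemblyFile in the newest releases).
  TM->addPassesToEmitFile(PM, OS, /*DwoOut=*/nullptr, llvm::CGFT_AssemblyFile);
  PM.run(M);
  return std::string(PTX.str());
}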
2016 May 19
2
External function resolution: MCJIT vs ORC JIT
Thanks so much! This seems to do the trick. I would have spun my wheels for a long time before discovering all of this, wow.
Do I even want to know what additional chickens need to be sacrificed to get this to work on Windows?
-- lg
> On May 18, 2016, at 1:52 PM, Lang Hames <lhames at gmail.com> wrote:
>
> Hi Larry,
>
> You're basically there, but you're hitting
2014 Jan 21
4
[LLVMdev] MCJIT versus getLazyBitcodeModule?
Thanks for the pointers.
Am I correct in assuming that putting the precompiled bitcode into a second module and linking (or using the object caches) would result in ordinary function calls, but would not be able to inline the functions?
-- lg
On Jan 21, 2014, at 11:55 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
> I would say that the incompatibility is by design. Not
2010 Aug 11
4
[LLVMdev] Optimization pass questions
I have a whole slew of questions about optimization passes. Answers to any or all would be extremely helpful:
How important are doInitialization/doFinalization? I can't detect any difference if I use them or not. Why does the function pass manager have doInitialization/doFinalization, but the global pass manager doesn't? If I am applying the function passes to many functions, do I
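For reference, a minimal sketch of the per-function pipeline being asked about, written against what a current tree calls the legacy pass manager (the pass selection is illustrative, not clang's -O2, and header locations vary by release):

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Scalar/GVN.h"   // createGVNPass lives here in newer trees

// Run a handful of function passes over every defined function in a module.
void optimizeFunctions(llvm::Module &M) {
  llvm::legacy::FunctionPassManager FPM(&M);
  FPM.add(llvm::createReassociatePass());
  FPM.add(llvm::createGVNPass());
  FPM.add(llvm::createCFGSimplificationPass());

  FPM.doInitialization();            // invokes each pass's doInitialization()
  for (llvm::Function &F : M)
    if (!F.isDeclaration())
      FPM.run(F);
  FPM.doFinalization();              // invokes each pass's doFinalization()
}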
2016 Mar 08
3
Deleting function IR after codegen
YES. My use of LLVM involves an app that JITs program after program and will quickly swamp memory if everything is retained. It is crucial to aggressively throw everything away but the functions we still need to execute.
I've been faking it with old JIT (llvm 3.4/3.5) by using a custom subclass of JITMemoryManager that squirrels away the jitted binary code so that when I free the Modules,
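With MCJIT the same idea is usually expressed as a custom memory manager that outlives the modules; a rough sketch (class and member names here are illustrative, not from the thread):

#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include <cstdint>
#include <string>
#include <vector>

// Records where each emitted code section lives so the machine code can be
// kept (and called) after the IR and other JIT bookkeeping are freed.
class RetainingMemoryManager : public llvm::SectionMemoryManager {
public:
  struct Section { uint8_t *Addr; uintptr_t Size; std::string Name; };
  std::vector<Section> CodeSections;

  uint8_t *allocateCodeSection(uintptr_t Size, unsigned Alignment,
                               unsigned SectionID,
                               llvm::StringRef SectionName) override {
    uint8_t *Addr = llvm::SectionMemoryManager::allocateCodeSection(
        Size, Alignment, SectionID, SectionName);
    CodeSections.push_back({Addr, Size, SectionName.str()});
    return Addr;
  }
};

An instance of such a manager would be handed to the JIT via EngineBuilder::setMCJITMemoryManager() and kept alive for as long as the emitted code needs to run.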
2010 Aug 16
3
[LLVMdev] Module management questions
I have an app that's dynamically generating and JITing code, and will have many such cases in the course of its run. They need to be JITed separately, because I need to execute the first batch before I know what the second one will be. Of course, I *still* need to execute the first after the need for the second arises, so I need to retain the JITed machine code for all the functions I
2013 Dec 06
2
[LLVMdev] PTX generation examples?
I have an app that uses LLVM API calls from C++ to generate IR and JIT it for x86 (for subsequent live execution). I'm still using the old JIT, for what it's worth.
I want to modify it (for prototype/experimental purposes for now) to JIT PTX (into a big string buffer?).
Docs are sketchy. I can wade through it and figure it out by trial and error, but would be so very happy if somebody
2014 Jan 21
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
This is sounding rather like getLazyBitcodeModule is simply incompatible with MCJIT. Can anybody confirm that this is definitely the case? Is it by design, or by omission, or bug?
Re your option #1 and #2 -- sorry for the newbie questions, but can you point me to docs or code examples for how the linking or object caching should be achieved? If I do either of these rather than seeding my
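A minimal sketch of the module-linking option as described, assuming a modern Linker::linkModules (the bitcode path and function name are hypothetical):

#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Linker/Linker.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>

// Materialize the precompiled bitcode and merge it into the module being
// JITed, so the optimizer can see (and potentially inline) its functions.
// Returns true on error, following the LLVM convention.
bool linkPrecompiledLibrary(llvm::Module &JITModule, llvm::LLVMContext &Ctx) {
  llvm::SMDiagnostic Err;
  std::unique_ptr<llvm::Module> Lib =
      llvm::parseIRFile("precompiled_library.bc", Err, Ctx);  // hypothetical path
  if (!Lib)
    return true;
  return llvm::Linker::linkModules(JITModule, std::move(Lib));
}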
2010 Aug 20
2
[LLVMdev] Module management questions
On Aug 18, 2010, at 10:24 AM, Reid Kleckner wrote:
> You can free the machine code yourself by saying
> EE->freeMachineCodeForFunction(F) . If you destroy the EE, it will
> also free the machine code.
Thanks, but unfortunately, this is exactly the opposite of what I want to do. I need to retain the machine code indefinitely, but I want to free all possible other resources that are
2016 May 20
0
External function resolution: MCJIT vs ORC JIT
Hi Larry,
> Thanks so much! This seems to do the trick. I would have spun my wheels for
> a long time before discovering all of this, wow.
No worries. :)
I'll try to keep this in mind and make sure I address it in future
Kaleidoscope tutorial chapters - these issues tripped me up the first time
I encountered them too.
Do I even want to know what additional chickens need to be sacrificed
2016 May 17
3
External function resolution: MCJIT vs ORC JIT
When using ORC JIT, I'm having trouble with external function resolution (that is, of a function defined in the app, with C linkage).
I add a declaration for the function to my IR, and when I use MCJIT, it finds it and all is well. But when I use ORC JIT (I *think* correctly; at least it closely matches what I see in the tutorial), I get an LLVM error, "Program used external function
2016 May 22
1
External function resolution: MCJIT vs ORC JIT
>> llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr)
This one is a bit tricky and hard to find.
I spent quite some time digging into the MCJIT and ORC JIT execution engines trying to find what makes them work.
The problem is that this trick (LoadLibraryPermanently) happens inside EngineBuilder, even though the functionality belongs to the JIT engine itself, not to the builder.
I
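In code, the two pieces being described are roughly the following (sketch only; where the lookup hook is wired in depends on which ORC layer or resolver is used):

#include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
#include "llvm/Support/DynamicLibrary.h"
#include <cstdint>
#include <string>

// Export the host program's own symbols so the JIT can resolve them.
// MCJIT's EngineBuilder does this behind the scenes; with raw ORC layers
// it has to be done explicitly, once, before any lookups.
void exposeProcessSymbols() {
  llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);  // nullptr == this process
}

// Inside whatever symbol resolver the ORC stack uses, a process lookup like
// this finds C-linkage functions compiled into the application:
uint64_t lookupInProcess(const std::string &Name) {
  return llvm::RTDyldMemoryManager::getSymbolAddressInProcess(Name);
}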
2014 Jan 20
2
[LLVMdev] MCJIT versus getLazyBitcodeModule?
I'm having a problem with MCJIT (in LLVM 3.3 and 3.4), in which it's not resolving symbol mangling in a precompiled bitcode in the same way as old JIT. It's possible that it's just my misunderstanding. Maybe somebody can spot my problem, or identify it as an MCJIT bug.
Here's my situation, in a nutshell:
* I am assembling IR and JITing in my app. The IR may potentially make
2014 Sep 03
2
[LLVMdev] Questions on the llvm 'vector' types and resulting SIMD instructions
If I generate IR using 'vector' types, for example, if my code assembles IR like this:
define <4 x float> @simd_mul(<4 x float>, <4 x float>) {
%3 = fmul <4 x float> %0, %1
ret <4 x float> %3
}
I assume that when I JIT, it will generate the best SIMD instructions available on the host it's running on? For example, when running on a
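One way to make sure codegen targets the host's own CPU (and therefore its widest available vector ISA) is to ask EngineBuilder for it explicitly; a hedged sketch, assuming MCJIT and a function name that is purely illustrative:

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Host.h"           // llvm/TargetParser/Host.h in newer trees
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <string>

// Build an MCJIT engine tuned for the machine it is running on, so that
// <4 x float> operations can be lowered to the best vector ISA present.
llvm::ExecutionEngine *makeHostTunedJIT(std::unique_ptr<llvm::Module> M) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();
  std::string Err;
  return llvm::EngineBuilder(std::move(M))
      .setErrorStr(&Err)
      .setMCPU(llvm::sys::getHostCPUName())
      .create();
}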
2013 Dec 09
3
[LLVMdev] [RFC] MCJIT usage models
Another usage case, slightly different than your #1:
6. Dynamic code generation (not interactive with respect to the source code)
- app generates code IR which is compiled as needed for execution
(important difference: no waiting for a typing human in the loop,
so very different expectations about responsiveness and bottlenecks)
- compilation speed IS critical (because many such
2010 Aug 18
2
[LLVMdev] Module management questions
On Aug 17, 2010, at 10:11 AM, Owen Anderson wrote:
> In principle this ought to work, if you're careful. Are you sure you're not generating code that calls into functions that got totally inlined away?
How would I know?
> Are you running the Verifier pass at regular intervals?
Yes, both before and after the set of optimization passes.
So let me clarify what I'm doing: I