Displaying 20 results from an estimated 3000 matches similar to: "[LLVMdev] Wondering how best to run inlining on a single function."
2009 May 27
0
[LLVMdev] Wondering how best to run inlining on a single function.
On May 26, 2009, at 3:15 PM, Jeffrey Yasskin wrote:
> In Unladen Swallow we (intend to) compile each function as we
> determine it's hot. To "compile" a function means to translate it from
> CPython bytecode to LLVM IR, optimize the IR using a
> FunctionPassManager, and JIT the IR to machine code. We'd like to
> include inlining among our optimizations. Currently
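The snippet is truncated here. For context, one way to get inlining inside a per-function JIT pipeline (the inliner proper is a CallGraphSCC pass, so it does not fit in a FunctionPassManager) is to call InlineFunction() on selected call sites directly. A rough sketch, using today's CallBase-based API rather than the 2009-era CallSite one; the helper name and the "inline everything" policy are only illustrative:

```cpp
// Hypothetical helper: inline every direct call inside one hot function,
// instead of scheduling the CallGraphSCC inliner over the whole module.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/Transforms/Utils/Cloning.h"

using namespace llvm;

static void inlineDirectCallsIn(Function &Hot) {
  SmallVector<CallBase *, 8> Calls;
  for (BasicBlock &BB : Hot)
    for (Instruction &I : BB)
      if (auto *CB = dyn_cast<CallBase>(&I))
        if (Function *Callee = CB->getCalledFunction())
          if (!Callee->isDeclaration())
            Calls.push_back(CB);   // collect first; inlining mutates the IR

  for (CallBase *CB : Calls) {
    InlineFunctionInfo IFI;
    auto R = InlineFunction(*CB, IFI); // no cost model: inline unconditionally
    (void)R;                           // R.isSuccess() says whether the body was merged in
  }
}
```

InlineFunction() bypasses the cost model entirely, so a real JIT would probably gate each call site on something like the inline-cost analysis rather than inlining unconditionally.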
2009 Mar 26
3
[LLVMdev] OT: Python on LLVM
Hi,
Slightly off-topic (as it's not directly about using or developing LLVM):
http://code.google.com/p/unladen-swallow/wiki/ProjectPlan
"Our long-term proposal is to replace CPython's custom virtual machine
with a JIT built on top of LLVM, while leaving the rest of the Python
runtime relatively intact."
Just curious, has anyone here heard more about this project?
Regards,
2009 Mar 26
0
[LLVMdev] OT: Python on LLVM
On Thu, Mar 26, 2009 at 8:20 AM, Paul Melis <llvm at assumetheposition.nl> wrote:
> Hi,
>
> Slightly off-topic (as it's not directly about using or developing LLVM):
>
> http://code.google.com/p/unladen-swallow/wiki/ProjectPlan
>
> "Our long-term proposal is to replace CPython's custom virtual machine
> with a JIT built on top of LLVM, while leaving the
2014 Aug 04
3
[LLVMdev] Matching up inlined basic blocks with original basic blocks.
Hello All,
I have some data tied to the basic blocks in a function, and after inlining
that function, I'd like to recover that data in the inlined version. Is
there some way to match up the inlined version of the function with the
original basic blocks?
Thanks,
Jeremy
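One approach (a sketch, not something proposed in this thread) is to tag each original block with custom metadata before inlining; InlineFunction() clones instructions together with their metadata, so the tag can be read back on the inlined copies. The "orig.block" metadata kind is invented, and block names may be empty unless value names are preserved:

```cpp
// Tag each original block via metadata on its terminator, then recover the
// tag from the cloned instructions after inlining.
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

static void tagBlocks(Function &F) {
  LLVMContext &Ctx = F.getContext();
  for (BasicBlock &BB : F) {
    // "orig.block" is a made-up metadata kind; any unused name works.
    MDNode *Tag = MDNode::get(Ctx, MDString::get(Ctx, BB.getName()));
    BB.getTerminator()->setMetadata("orig.block", Tag);
  }
}

static StringRef originalBlockOf(const Instruction &I) {
  if (MDNode *Tag = I.getMetadata("orig.block"))
    return cast<MDString>(Tag->getOperand(0))->getString();
  return "";  // instruction did not come from a tagged block
}
```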
2010 Nov 09
0
[LLVMdev] Calling PassManager on previously JITed Modules
Hi,
I found the following wiki page in the Unladen Swallow project:
http://code.google.com/p/unladen-swallow/wiki/CodeLifecycle
This would appear to answer my question. Could someone confirm for me
if it's definitely unsafe to attempt to optimise/JIT any Modules while
a different thread is currently executing a JITed function which has
been generated from them? Or am I just missing
2010 Nov 08
3
[LLVMdev] Calling PassManager on previously JITed Modules
Hi,
Has anyone had any success with running different PassManagers on
llvm::Modules they've already JITed and are executing?
In detail:
1) getting the IR, in form of an llvm::Module
2) calling PassManager->run() on the module
3) calling getFunction() and getPointerToFunction() to JIT the module
4) executing the JITed code using the function pointer received in step 3
and then what I
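The message is cut off, but the four steps read roughly like the sketch below, written against the legacy PassManager and pre-MCJIT ExecutionEngine APIs of that era (some of these pass creators have since been removed from newer releases). Note the caveat from the Unladen Swallow CodeLifecycle page in the previous result: none of this is safe while another thread is executing code JITed from the same module. Function and pass choices are placeholders:

```cpp
// Rough sketch of steps 2-4: re-run passes over an already-JITed module,
// re-JIT a function, and call it through the raw pointer.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/Scalar.h"

using namespace llvm;

void reoptimizeAndRun(Module &M, ExecutionEngine &EE) {
  // 2) run a PassManager over the IR
  legacy::PassManager PM;
  PM.add(createFunctionInliningPass());
  PM.add(createCFGSimplificationPass());
  PM.run(M);

  // 3) look the function up and (re)JIT it
  Function *F = M.getFunction("entry");   // "entry" is a placeholder name
  if (!F)
    return;
  void *Addr = EE.getPointerToFunction(F);

  // 4) call through the raw pointer; the signature is assumed to be void()
  reinterpret_cast<void (*)()>(Addr)();
}
```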
2009 Nov 05
2
[LLVMdev] Debug Information for LLVM 2.6 and TOT
Devang Patel wrote:
> Hi John,
>
> On Wed, Nov 4, 2009 at 12:04 PM, John Criswell <criswell at uiuc.edu> wrote:
>
>> Dear All,
>>
>> 1) I recall reading somewhere that a few optimizations in LLVM 2.6 strip
>> away debug information when such information interferes with
>> optimization. Is this correct,
>>
>
> Yes.
>
>
2010 Jan 26
1
[LLVMdev] gdb on Mach-O
According to http://llvm.org/docs/DebuggingJITedCode.html only ELF objects are supported for gdb support of
JIT generated code. Is this still true, or are Mach-O files now supported in 2.7 trunk?
Thanks in advance
Garrison
2011 Oct 25
1
[LLVMdev] Using a FunctionPass inside a CallGraphSCCPass
Hi,
I am writing a CallGraphSCCPass that uses LoopInfo which is a FunctionPass.
However, doing so results in the following error.
****
Unable to schedule 'Natural Loop Information' required by '......'
****
Google led me to this page, where Devang Patel suggests implementing the
addLowerLevelRequiredPasses in CGPassManager in a manner similar to
MPPassManager.
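For reference, the pattern that produces this error looks roughly like the following. LoopInfoWrapperPass is the current name of the analysis (in 2011 it was simply LoopInfo), and the pass itself is a placeholder; the legacy CGPassManager has no way to schedule a function-level analysis underneath an SCC pass, which is what the quoted message complains about:

```cpp
// Illustration of the pattern that triggers "Unable to schedule ...":
// a CallGraphSCCPass declaring a FunctionPass analysis as required.
#include "llvm/Analysis/CallGraphSCCPass.h"
#include "llvm/Analysis/LoopInfo.h"

using namespace llvm;

namespace {
struct MySCCPass : public CallGraphSCCPass {
  static char ID;
  MySCCPass() : CallGraphSCCPass(ID) {}

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<LoopInfoWrapperPass>();  // this requirement cannot be scheduled
  }

  bool runOnSCC(CallGraphSCC &SCC) override { return false; }
};
char MySCCPass::ID = 0;
} // end anonymous namespace
```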
2010 Aug 11
4
[LLVMdev] Optimization pass questions
I have a whole slew of questions about optimization passes. Answers to any or all would be extremely helpful:
How important are doInitialization/doFinalization? I can't detect any difference if I use them or not. Why does the function pass manager have doInitialization/doFinalization, but the global pass manager doesn't? If I am applying the function passes to many functions, do I
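The snippet ends mid-question, but for the first part: in the legacy pass interface, doInitialization(Module&) and doFinalization(Module&) are optional per-pass hooks that run once per module, before and after runOnFunction is invoked for each function. The legacy FunctionPassManager exposes doInitialization()/doFinalization() so the embedder triggers them explicitly, while the whole-module legacy::PassManager invokes them internally from run(). A minimal sketch (the pass is invented purely for illustration):

```cpp
// Where doInitialization/doFinalization fit: once-per-module setup/teardown
// bracketing the per-function runOnFunction calls.
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct CountFunctions : public FunctionPass {
  static char ID;
  unsigned Count = 0;
  CountFunctions() : FunctionPass(ID) {}

  bool doInitialization(Module &M) override { Count = 0; return false; }

  bool runOnFunction(Function &F) override { ++Count; return false; }

  bool doFinalization(Module &M) override {
    errs() << "visited " << Count << " functions\n";
    return false;
  }
};
char CountFunctions::ID = 0;
} // end anonymous namespace
```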
2010 Aug 12
0
[LLVMdev] Optimization pass questions
Larry,
On Wed, Aug 11, 2010 at 4:55 PM, Larry Gritz <lg at larrygritz.com> wrote:
> I have a whole slew of questions about optimization passes. Answers to any
> or all would be extremely helpful:
>
> How important are doInitialization/doFinalization?
Most of the passes do not use them.
> I can't detect any difference if I use them or not.
Say, if you are writing
2020 Nov 09
2
Inliner in legacy pass manager
Hi,
In the following link:
https://www.youtube.com/watch?reload=9&v=6X12D46sRFw
it is stated that the inliner can't use the DomTree/LoopInfo/MemorySSA
analyses.
1. What's the reason for this?
2. Why can't we do it using the getAnalysisUsage() construct?
3. Can the inliner use this information in the new pass manager?
4. What information can we derive from DomTree to be of help to
2010 May 27
4
[LLVMdev] Deep JIT specialization
Hi all,
I'm attempting to use LLVM for run-time code specialization, but I'm facing
a performance hurdle. I'm currently performing the specialization during the
AST to LLVM IR translation, but unfortunately this leads to relatively slow
recompiles as LLVM has to perform all the heavy (optimization) passes over
and over again.
So I was hoping that by first creating unspecialized LLVM
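The message is truncated, but the idea it leads into (optimize one generic version once, then produce specializations cheaply) can be sketched roughly as below. This uses the legacy pass-manager creators of that era (some no longer exist in current LLVM), assumes the specialized argument is an integer, and all names are illustrative:

```cpp
// Clone an already-optimized generic function, pin one argument to a run-time
// constant, and run only a small cleanup pipeline on the clone.
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Utils/Cloning.h"

using namespace llvm;

Function *specialize(Function *Generic, uint64_t KnownValue) {
  ValueToValueMapTy VMap;
  Function *Clone = CloneFunction(Generic, VMap);

  // Pin the first argument of the clone to the run-time constant
  // (assumes the function has at least one integer-typed argument).
  Argument *A = Clone->getArg(0);
  A->replaceAllUsesWith(ConstantInt::get(A->getType(), KnownValue));

  // Much smaller pipeline than the full -O2 set that was run on Generic.
  legacy::FunctionPassManager FPM(Clone->getParent());
  FPM.add(createSCCPPass());
  FPM.add(createCFGSimplificationPass());
  FPM.doInitialization();
  FPM.run(*Clone);
  FPM.doFinalization();
  return Clone;
}
```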
2015 Jun 17
3
[LLVMdev] Path forward on profile guided inlining?
I would like to start prototyping changes towards profile guided
inlining. Before doing so, I wanted to get feedback from the community
on the appropriate approach. I'm going to layout a strawman proposal,
but I'm open to other ideas on how to approach the problem.
Depending on what approach we settle on, I *may* be able to commit
resources to actually implement this in the near
2006 Jan 10
3
[LLVMdev] passmanager, significant rework idea...
The patch below basically hammers out some ideas as to where I'd like
to take the passmanager in LLVM. I've tried thinking things through,
but I'm still a n00b, so some criticism would be more than welcome. =)
Starting from line 191 down. If you're wondering why I created a
patch, well that's because I found thinking in passmanagert.h the most
productive.
--
Regards.
2013 May 08
2
[LLVMdev] Concerning http://llvm.org/ProjectsWithLLVM
Not sure, but it seems the page contains a number of out-of-date entries:
Pypy => pypy.org (link stale) plus: there is no llvm backend for pypy at the moment (although LLVM backends have been attempted a number of times, all seem to have failed)
Unladen Swallow => not being developed since 2011 (http://qinsb.blogspot.com/2011/03/unladen-swallow-retrospective.html)
TIA,
Andreas
The
2012 Jul 11
1
[LLVMdev] Introductions to everyone and a call for Python-LLVM enthusiasts
Hello Duncan,
> thanks for your interesting email. Do you understand why PyPy is no longer
> using LLVM, and why Unladen Swallow died? Does LLVM need to be improved in
> some way?
The answers to all these questions are linked: LLVM is not fast enough
(for a JIT). Of course this is not the whole story, but it is the
LLVM-relevant part.
Let's have a look at some random performance
2011 Jun 15
4
[LLVMdev] Connection llvm ir
I want to connect each piece of LLVM IR,
for example:
1. Turn C/C++ into LLVM IR ("C_llvmIR" assembly) using Clang
2. Turn Fortran into LLVM IR ("Fortran_llvmIR" assembly) using
Dragonegg
3. Turn Python into LLVM IR ("Python_llvmIR" assembly) using
Unladen Swallow
4. Connect each LLVM IR
Is this possible?
Wonjun, Choi
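Step 4 is essentially what the llvm-link tool does; programmatically it can be sketched with the Linker API. The exact entry point has changed across LLVM versions (the form below is the newer Linker::linkModules), and the input file names are placeholders:

```cpp
// Parse IR produced by different frontends and merge it into one module.
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Linker/Linker.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

using namespace llvm;

int main() {
  LLVMContext Ctx;
  SMDiagnostic Err;

  std::unique_ptr<Module> Dest = parseIRFile("from_clang.ll", Err, Ctx);
  std::unique_ptr<Module> Src  = parseIRFile("from_dragonegg.ll", Err, Ctx);
  if (!Dest || !Src)
    return 1;

  // linkModules merges Src into Dest and returns true on error.
  if (Linker::linkModules(*Dest, std::move(Src)))
    return 1;

  Dest->print(outs(), nullptr);
  return 0;
}
```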
2012 Jul 11
0
[LLVMdev] Introductions to everyone and a call for Python-LLVM enthusiasts
Hi Travis,
...
> LLVM is still very relevant to Python because of projects like Numba --- but you
> should know that PyPy is no longer using LLVM and Unladen Swallow has not been
> worked on for several years. The future of LLVM and Python I think is very
> bright --- especially for the scientific and data-analysis user-base.
thanks for your interesting email. Do you understand
2012 Nov 09
2
[LLVMdev] Inlining bitcast functions...
I've got a call instruction:
call void bitcast (void (%4 addrspace(1)*, <2 x i32>, <4 x float>)* @_Z12write_imagefPU3AS110_image2d_tDv2_iDv4_f to void (%9 addrspace(1)*, <2 x i32>, <4 x float>)*)(%9 addrspace(1)* %dstimg, <2 x i32> %28, <4 x float> %26) nounwind
%4 and %9 are both (stripped) opaque structs.
InlineFunction() does not inline this because
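The snippet is cut off here, but the usual cause is that the callee operand is a ConstantExpr bitcast rather than a plain Function, so the call's type does not match the callee's. A sketch of looking through the cast to recover the underlying callee (using the current CallBase API; actually rewriting the call so that InlineFunction() will accept it is a separate, type-fixing step):

```cpp
// Recover the Function hiding behind a "bitcast (@callee to ...)" callee.
#include "llvm/IR/Function.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

static Function *calleeBehindCast(CallBase &CB) {
  // On older releases this was getCalledValue(); stripPointerCasts() walks
  // through the constant bitcast to the underlying global.
  if (auto *F = dyn_cast<Function>(CB.getCalledOperand()->stripPointerCasts()))
    return F;
  return nullptr;  // indirect call or something other than a cast of a Function
}
```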