Displaying 20 results from an estimated 20 matches for "simpleinliner".
2016 Apr 16
2
[TSAN] LLVM statistics and pass initialization trigger race detection
...x0001113619e8 by thread T13:
#0 __tsan_atomic32_compare_exchange_val <null>:568340296 (libclang_rt.tsan_osx_dynamic.dylib+0x00000003e7aa)
#1 llvm::sys::CompareAndSwap(unsigned int volatile*, unsigned int, unsigned int) Atomic.cpp:52 (libLTO.dylib+0x000000511914)
#2 llvm::initializeSimpleInlinerPass(llvm::PassRegistry&) InlineSimple.cpp:82 (libLTO.dylib+0x000000ab2b55)
#3 (anonymous namespace)::SimpleInliner::SimpleInliner() InlineSimple.cpp:50 (libLTO.dylib+0x000000ab2e8e)
#4 (anonymous namespace)::SimpleInliner::SimpleInliner() InlineSimple.cpp:49 (libLTO.dylib+0x000000ab2d19...
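The trace above shows lazy pass registration racing under TSan: a hand-rolled CompareAndSwap on a "done" flag publishes the registration without a synchronization edge TSan recognizes. A minimal sketch of the usual remedy is to funnel the one-time initialization through std::call_once; the function and flag names below are illustrative, not LLVM's actual API.

```cpp
#include <cassert>
#include <mutex>

static std::once_flag InlinerInitFlag;
static bool InlinerPassRegistered = false;

// Safe even when called concurrently from many threads: the lambda body
// runs exactly once, and std::call_once provides the happens-before edge.
void initializeSimpleInlinerPassOnce() {
  std::call_once(InlinerInitFlag, [] {
    // ... register the pass with the PassRegistry exactly once ...
    InlinerPassRegistered = true;
  });
}
```

Repeated calls are harmless, which is what the pass constructors in the trace rely on.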
2016 Feb 25
0
Use DominatorTree from CallGraphSCCPass
Hello,
I'm trying to improve SimpleInliner to use the information given by the __builtin_expect intrinsic (it would be better not to inline a call instruction that is unlikely to be executed). The problem here is that it is not possible to compute the control-dependency relationship between the annotated branch instruction and callsites using PostDo...
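The idea in this email can be sketched as follows: consult the branch weights that __builtin_expect lowers to, and have the inliner decline call sites that sit on an unlikely path. The helper below is hypothetical arithmetic over plain integers, not a real LLVM API; real code would read !prof branch_weights metadata.

```cpp
#include <cassert>

// Treat the call site as cold if its block is expected to execute in
// fewer than 1% of the function's entries. The 1% cutoff is illustrative.
bool isCallSiteCold(unsigned BlockWeight, unsigned EntryWeight) {
  return EntryWeight > 0 && BlockWeight * 100 < EntryWeight;
}

// A cold call site is never worth the code growth; otherwise fall back
// to the usual cost-vs-threshold comparison.
bool worthInlining(unsigned BlockWeight, unsigned EntryWeight,
                   int Cost, int Threshold) {
  if (isCallSiteCold(BlockWeight, EntryWeight))
    return false;
  return Cost <= Threshold;
}
```
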
2005 Jul 04
2
[LLVMdev] function inlining threshold ?
I am using llvm for source-to-source inlining. So I did:
% llvm-gcc file_a.c file_b.c ... file_n.c -o file
% opt -inline -inline-threshold=1000 < file.bc | llc -march=c > outfile.c
Can anyone tell me how llvm determines if a function should be inlined,
and what role does "inline-threshold" play? (Does the example mean that
if the function body has fewer than 1000 instructions,
2005 Jul 05
0
[LLVMdev] function inlining threshold ?
...the inliner implemented in LLVM is
InlineSimple.cpp in the same directory, which adds cost to functions if
they have recursive calls, allocas, and other features, but at the end,
you'll notice that it weighs each instruction as 5 and each basic block
as 20.
I've omitted many details; see SimpleInliner::getInlineCost() in
llvm/lib/Transforms/IPO/InlineSimple.cpp for the complete calculation.
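The weighting Misha describes can be sketched as a toy cost model. This is a hedged simplification loosely mirroring the email (5 per instruction, 20 per basic block, penalties for recursion and allocas); the struct, the penalty values, and the function names are illustrative, not the real getInlineCost() implementation.

```cpp
#include <cassert>

// Hypothetical per-function summary a simple inliner might compute.
struct FunctionStats {
  int NumInstructions;
  int NumBasicBlocks;
  bool HasAlloca;
  bool IsRecursive;
};

// Toy cost model: weigh each instruction as 5 and each basic block as 20,
// with illustrative penalties for allocas and recursive calls.
int getInlineCost(const FunctionStats &F) {
  int Cost = 5 * F.NumInstructions + 20 * F.NumBasicBlocks;
  if (F.HasAlloca)
    Cost += 100;  // hypothetical alloca penalty
  if (F.IsRecursive)
    Cost += 200;  // hypothetical recursion penalty
  return Cost;
}

// Inline only if the cost stays under the threshold (-inline-threshold).
bool shouldInline(const FunctionStats &F, int Threshold) {
  return getInlineCost(F) <= Threshold;
}
```

Under this sketch, -inline-threshold=1000 means "inline bodies whose weighted cost is at most 1000", not a raw instruction count.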
--
Misha Brukman :: http://misha.brukman.net :: http://llvm.cs.uiuc.edu
2008 May 08
0
[LLVMdev] What's the BasicInliner for?
Hi all,
I was looking around in the lib/Transforms/Utils dir and ran into the
BasicInliner class. Seeing a large similarity between it and the SimpleInliner
Pass, I grep'd around for BasicInliner, but it doesn't seem to be used.
The difference between them seems to be that SimpleInliner is a pass that looks at
all functions, while BasicInliner is not a pass and looks only at explicitly
given functions. They do, however, both declare an inline-thresho...
2013 Apr 24
3
[LLVMdev] [PROPOSAL] per-function optimization level control
...state there are two strategies available in LLVM
for function inlining:
1) Inline Always (by default only used at -O0 and -O1);
2) Inline Simple (OptLevel >= 2).
The Inline Always strategy can be used in place of Inline Simple
if specifically requested by the user.
The constructor of SimpleInliner (see
"lib/Transform/IPO/InlineSimple.cpp")
requires that we pass a Threshold value as an argument to the constructor.
In general, the threshold would be set by the front-end (it could
be either clang or bugpoint or opt etc.) according to both the OptLevel
and
the SizeLevel.
In order...
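The front-end mapping the proposal describes might look like the sketch below. The numbers mirror commonly cited LLVM defaults of that era (225 at -O2, 275 at -O3, 75 at -Os, 25 at -Oz), but both the values and the function name are illustrative rather than authoritative.

```cpp
#include <cassert>

// Hedged sketch: derive the Threshold passed to SimpleInliner's
// constructor from the OptLevel and SizeLevel, as a front-end
// (clang, opt, bugpoint, ...) might do.
int computeInlineThreshold(unsigned OptLevel, unsigned SizeLevel) {
  if (SizeLevel == 2)
    return 25;   // -Oz: optimize hard for size
  if (SizeLevel == 1)
    return 75;   // -Os
  if (OptLevel >= 3)
    return 275;  // -O3: inline more aggressively
  return 225;    // -O2 default
}
```
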
2008 May 06
2
[LLVMdev] [PATCH] Split LoopUnroll pass into mechanism and policy
Hi,
the attached patch splits the loop unroll pass into a LoopUnroll superclass
that implements the unrolling mechanism, and a SimpleLoopUnroll subclass
implementing the current policy. This split is modeled after the split between
Inliner and SimpleInliner.
The superclass currently still finds out the TripCount and TripMultiple, and
passes those, together with the Loop in question, to a policy method.
Currently, TripMultiple is not yet used in the SimpleLoopUnroll, but I can
imagine that this might change, so I included it already.
Currently, there...
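The mechanism/policy split the patch describes, modeled on the Inliner/SimpleInliner split, can be sketched as a base class that owns the unrolling mechanism and queries a virtual policy hook with TripCount and TripMultiple. The class and method names below are illustrative, not the patch's actual interface.

```cpp
#include <cassert>

// Mechanism superclass: drives unrolling, delegates the decision.
class LoopUnrollBase {
public:
  virtual ~LoopUnrollBase() {}

  // Policy hook: return the unroll factor, or 0/1 to leave the loop alone.
  virtual unsigned getUnrollCount(unsigned TripCount,
                                  unsigned TripMultiple) = 0;

  // Mechanism: find trip information, consult the policy, then unroll.
  bool runOnLoop(unsigned TripCount, unsigned TripMultiple) {
    unsigned Count = getUnrollCount(TripCount, TripMultiple);
    if (Count <= 1)
      return false;  // policy declined
    // ... perform the actual unrolling by Count here ...
    return true;
  }
};

// Policy subclass implementing the current behavior.
class SimpleLoopUnroll : public LoopUnrollBase {
public:
  unsigned getUnrollCount(unsigned TripCount,
                          unsigned /*TripMultiple*/) override {
    // Fully unroll small loops with a known trip count; TripMultiple is
    // passed in but not yet used, as the email notes.
    return (TripCount > 0 && TripCount <= 8) ? TripCount : 0;
  }
};
```

A different policy (e.g. target-aware or profile-guided) would then only override getUnrollCount.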
2013 Jul 28
0
[LLVMdev] IR Passes and TargetTransformInfo: Straw Man
...+ Builder->populatePostIPOPM(*LPM);
}
Index: lib/Transforms/IPO/InlineSimple.cpp
===================================================================
--- lib/Transforms/IPO/InlineSimple.cpp (revision 187135)
+++ lib/Transforms/IPO/InlineSimple.cpp (working copy)
@@ -72,6 +72,10 @@
return new SimpleInliner(Threshold);
}
+Pass *llvm::createTinyFuncInliningPass() {
+ return new SimpleInliner(40);
+}
+
bool SimpleInliner::runOnSCC(CallGraphSCC &SCC) {
ICA = &getAnalysis<InlineCostAnalysis>();
return Inliner::runOnSCC(SCC);
-------------- next part --------------
Index: include/c...
2013 Jul 18
3
[LLVMdev] IR Passes and TargetTransformInfo: Straw Man
Andy and I briefly discussed this the other day; we have not yet had a
chance to list a detailed pass order
for the pre- and post-IPO scalar optimizations.
This is the wish-list in our mind:
pre-IPO: based on the ordering he proposes, get rid of the inlining (or
just inline tiny functions), get rid of
all loop xforms...
post-IPO: get rid of inlining, or maybe we still need it, only
2013 Sep 18
0
[LLVMdev] [RFC] Internal command line options should not be statically initialized.
On Sep 17, 2013, at 10:10 AM, Andrew Trick <atrick at apple.com> wrote:
> LLVM's internal command line library needs to evolve. We have an immediate need to build LLVM as a library free of static initializers, but before brute-force fixing this problem, I'd like to outline the incremental steps that will lead to a desirable long-term solution. We want infrastructure in place to
2013 Sep 17
3
[LLVMdev] [RFC] Internal command line options should not be statically initialized.
On Tue, Sep 17, 2013 at 11:29 AM, Reid Kleckner <rnk at google.com> wrote:
> Wait, I have a terrible idea. Why don't we roll our own .init_array style
> appending section? I think we can make this work for all toolchains we
> support.
>
Andy and I talked about this, but I don't think it's worth it. My opinion is:
1. For tool options (the top-level llc, opt, llvm-as
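The ".init_array-style appending section" idea quoted above can be sketched like this: each translation unit drops a function pointer into a named section, and a walker iterates that section once at startup instead of relying on static initializers. The section name, macro, and registered function are all illustrative, and the __start_/__stop_ bound symbols are synthesized by GNU ld for ELF targets, so this sketch is Linux/ELF-specific.

```cpp
#include <cassert>
#include <cstddef>

typedef void (*OptRegistrar)();

// Append one entry to the "llvmopts" section; 'used' keeps the linker
// from discarding it even though nothing references it directly.
#define REGISTER_OPTION(fn)                                        \
  static OptRegistrar fn##_entry                                   \
      __attribute__((used, section("llvmopts"))) = fn;

static int NumRegistered = 0;
static void registerExampleOption() { ++NumRegistered; }
REGISTER_OPTION(registerExampleOption)

// Linker-synthesized bounds of the appending section (ELF/GNU ld).
extern "C" OptRegistrar __start_llvmopts[], __stop_llvmopts[];

// Run every registrar exactly once, e.g. from option parsing.
void runOptionRegistrars() {
  for (OptRegistrar *P = __start_llvmopts; P != __stop_llvmopts; ++P)
    (*P)();
}
```

Making this work "for all toolchains we support" (Mach-O, COFF) is the hard part the thread is debating, since the start/stop symbol trick is an ELF convenience.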
2013 Nov 14
2
[LLVMdev] (Very) small patch for the jit event listener
Hi Andy,
Thanks for the answer. I'm currently reading the internal code of
MCJIT and it's really great work (I was only using the
ExecutionEngine interface until now). So, I agree, all that I
need is already in the code (see below) :)
2013/11/14 Kaylor, Andrew <andrew.kaylor at intel.com>:
> Hi Gaël,
>
> Thank you for the detailed explanation. It's very
2013 Nov 14
0
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël,
I'm glad to hear that MCJIT looks promising to you.
> I understand the point. Providing a small example that describes how to use
> advanced features of MCJIT would probably help. If I can manage to make MCJIT work with VMKit,
> I'll be happy to send you an example of lazy compilation that highlights some of the features
> of MCJIT.
I'd love to have a
2013 Nov 16
2
[LLVMdev] (Very) small patch for the jit event listener
Hi Andrew (hi all:)),
I perfectly understand the problem of relocation and it's really not a
problem in my case. I'm still trying to make MCJIT run, but I face a
small problem. I have to insert callbacks to the runtime for functions
provided by VMKit (for example, a gcmalloc function to allocate memory
from the heap). With the old JIT, VMKit simply loads a large bc file
that contains all
2013 Nov 16
0
[LLVMdev] (Very) small patch for the jit event listener
Hmm, I think that I have a solution, but I have a new (more
serious) problem. The solution is simple: I just loaded the
shared library in ObjectCache::getObject directly into a MemoryBuffer :)
Since the linker understands a .o, it understands a .so.
Now, I'm able to compile a module (I call finalizeObject()), I'm able
to find my first generated function pointer, but I'm
2013 Nov 18
2
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël,
I would guess that MCJIT is probably attempting to load and link the shared library you return from the ObjectCache in the way it would load and link generated code, which would be wrong for a shared library. I know it seems like it should be easier to handle a shared library than a raw relocatable object (and it probably is) but MCJIT doesn't handle that case at the moment. The
2013 Nov 19
0
[LLVMdev] (Very) small patch for the jit event listener
Hi Andrew,
Thank you very much for all your help! So, I have tested without my
shared library (with a relocatable object and without) and still, my
code is not executable. I was testing my code with multiple modules
and I don't know if using multiple modules is fully functional.
Anyway, I'm now allocating an MCJIT instance for each function to be sure. But
now, I have a new problem that comes
2013 Nov 19
1
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël,
Multiple module support should be fully functional. However, there are some oddities in how MCJIT gets memory ready to execute, particularly if you are using the deprecated getPointerToFunction or runFunction methods. If you use these methods you'll need to call finalizeObject before you execute the code. I've heard reports that there's a bug doing that after adding
2013 Nov 14
0
[LLVMdev] (Very) small patch for the jit event listener
Hi Gaël,
Thank you for the detailed explanation. It's very helpful.
All of the things you describe could be done within MCJIT, but I'm not sure that's where they belong. We had a discussion about lazy function compilation at the LLVM Developers Meeting last week and the consensus among those present was that it would be better to leave this sort of lazy compilation to the MCJIT
2013 Nov 13
3
[LLVMdev] (Very) small patch for the jit event listener
Hi Andrew, hi all,
I already saw that the old JIT was (almost) deprecated. So, I'm
currently playing with the new JIT and it looks very interesting.
(I'm working locally and haven't pushed anything new to VMKit
because I'm also changing the design of VMKit a little.) For the moment,
MCJIT does not work with VMKit (but I haven't yet tested the
safepoint/stackmap patch), I