Displaying 20 results from an estimated 800 matches similar to: "CallSiteSplitting and musttail calls"
2018 Feb 24
0
CallSiteSplitting and musttail calls
Update:
I was able to make progress on it today (see
https://reviews.llvm.org/D43729). Apparently my problems were:
* Iterating through the instruction/block list after erasing a
block/instruction
* Trying to split a block after removing one of its predecessors
Regarding the latter, it appears that the semantics of
`DuplicateInstructionsInSplitBetween` change significantly in that case,
and it starts to loop
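For illustration, a minimal C++ sketch (not taken from D43729) of the usual way to avoid the first problem above when erasing instructions during a walk: llvm::make_early_inc_range advances the iterator before the loop body runs, so erasing the current instruction is safe. The helper name removeTriviallyDead is hypothetical.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/Transforms/Utils/Local.h"

  using namespace llvm;

  // Hypothetical helper: erase trivially dead instructions from BB.
  // make_early_inc_range advances the underlying iterator before each
  // iteration body runs, so erasing the current instruction does not
  // invalidate the traversal.
  static void removeTriviallyDead(BasicBlock &BB) {
    for (Instruction &I : make_early_inc_range(BB))
      if (isInstructionTriviallyDead(&I))
        I.eraseFromParent();
  }

An alternative with the same effect is to collect the instructions to erase into a worklist during the walk and erase them afterwards.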
2018 Feb 27
2
CallSiteSplitting and musttail calls
I think you realized this now, but to be clear:
More likely, you've found some bugs.
Unfortunately, not all of these utilities have good unit tests (though they
should!).
This would not be the first set of bugs people have found wrt the very
start/end of blocks, or bb == predbb issues.
On Sat, Feb 24, 2018 at 12:58 PM, Fedor Indutny via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
2018 Feb 28
0
CallSiteSplitting and musttail calls
Hi,
On 27/02/2018 16:32, Daniel Berlin via llvm-dev wrote:
> I think you realized this now, but to be clear:
> More likely, you've found some bugs.
> Unfortunately, not all of these utilities have good unit tests (though
> they should!).
>
> This would not be the first set of bugs people have found wrt the very
> start/end of blocks, or bb == predbb issues.
>
2018 Mar 02
1
is it allowed to use musttail on llvm.coro.resume?
It makes sense that you would be able to do this:
%save1 = call token @llvm.coro.save(i8* null)
musttail call void @llvm.coro.resume(i8* %some_handle)
%x = call i8 @llvm.coro.suspend(token %save1, i1 false)
...
But the docs for musttail say:
> The call must immediately precede a ret instruction, or a pointer bitcast
followed by a ret instruction.
Should this be amended to allow a musttail call to be followed by
llvm.coro.suspend()?
Regards,
2017 Jun 24
1
musttail & alwaysinline interaction
Consider this program:
@globalSideEffect = global i32 0

define void @tobeinlined() #0 {
entry:
  store i32 3, i32* @globalSideEffect, align 4
  musttail call fastcc void @tailcallee(i32 3)
  ret void
}

define fastcc void @tailcallee(i32 %i) {
entry:
  call void @tobeinlined()
  ret void
}

attributes #0 = { alwaysinline }
Clearly, if this is processed with opt -alwaysinline, it will lead
2016 Nov 24
3
llvm optimizer turning musttail into tail
I've got some calls like:

  musttail call void bitcast (i32 (i32, i8*, %Type*)* @MyMethod to void (i32, i8*)*)(i32 %0, i8* %1)
  ret void

being turned into something like:

  %8 = tail call i32 @MyMethod(i32 %0, i8* %1, %Type* null)
  ret void

I realize I'm losing a parameter there, but this is an interface jump
trick I use, and it relies on the end code being a 'jmp' (x86). I realize I
can probably
2014 Sep 02
2
[LLVMdev] PSA: Perfectly forwarding thunks can now be expressed in LLVM IR with musttail and varargs
I needed this functionality to solve http://llvm.org/PR20653, but it
obviously has far more general applications.
You can do it like this:
define i32 @my_forwarding_thunk(i32 %arg1, i8* %arg2, ...) {
  ... ; define new_arg1 and new_arg2
  %r = musttail call i32 (i32, i8*, ...)* @the_true_target(i32 %new_arg1, i8* %new_arg2, ...)
  ret i32 %r
}

declare i32 @the_true_target(i32, i8*, ...)
The
2016 Nov 27
3
llvm optimizer turning musttail into tail
r287955 seems like it might be related.
-- Sean Silva
On Sat, Nov 26, 2016 at 4:06 PM, Sean Silva <chisophugis at gmail.com> wrote:
> This sounds buggy to me. What pass is doing this?
>
> -- Sean Silva
>
> On Thu, Nov 24, 2016 at 5:39 AM, Carlo Kok via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>>
>> I've got some calls like:
>>
2014 Oct 09
2
[LLVMdev] PSA: Perfectly forwarding thunks can now be expressed in LLVM IR with musttail and varargs
On 8 Oct 2014, at 18:19, Reid Kleckner <rnk at google.com> wrote:
> The one target I know about where varargs are passed differently from
> normal arguments is aarch64-apple-ios/macosx. After thinking a bit more, I
> think this forwarding thunk representation works fine even on that target.
> Typically a forwarding thunk is called indirectly, or at least through a
> bitcast, so the LLVM IR call
2019 Dec 12
3
Adding custom callback function before/after passes
Hello Fedor.
Thank you for the information.
I made a simple patch that exposes PassInstrumentationCallbacks so that
llvmGetPassPluginInfo can use it: https://reviews.llvm.org/D71086. Would
this change make sense?
Thanks,
Juneyoung Lee
On Thu, Dec 12, 2019 at 12:44 AM Fedor Sergeev <fedor.sergeev at azul.com>
wrote:
>
>
> On 12/3/19 8:01 PM, Juneyoung Lee via llvm-dev wrote:
>
>
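For context, a sketch of what a plugin could do once the callbacks are reachable from llvmGetPassPluginInfo. The accessor PassBuilder::getPassInstrumentationCallbacks() and the exact callback signatures below follow recent LLVM headers and are assumptions here, not something taken from D71086 itself:

  #include "llvm/ADT/Any.h"
  #include "llvm/IR/PassInstrumentation.h"
  #include "llvm/Passes/PassBuilder.h"
  #include "llvm/Passes/PassPlugin.h"
  #include "llvm/Support/Compiler.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;

  extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo llvmGetPassPluginInfo() {
    return {LLVM_PLUGIN_API_VERSION, "PassLogger", "0.1",
            [](PassBuilder &PB) {
              // Only works if the callbacks object is exposed to plugins,
              // which is what the patch under discussion is about.
              if (PassInstrumentationCallbacks *PIC =
                      PB.getPassInstrumentationCallbacks()) {
                PIC->registerBeforeNonSkippedPassCallback(
                    [](StringRef PassID, Any) {
                      errs() << "Before pass: " << PassID << "\n";
                    });
                PIC->registerAfterPassCallback(
                    [](StringRef PassID, Any, const PreservedAnalyses &) {
                      errs() << "After pass: " << PassID << "\n";
                    });
              }
            }};
  }

Loading the resulting shared object with opt -load-pass-plugin=./PassLogger.so would then print a line before and after every pass.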
2018 Mar 15
2
Commit module to Git after each Pass
On 03/15/2018 01:32 PM, Fedor Sergeev via llvm-dev wrote:
> For this to be really usable in this setup we need additionally to:
> - extend -print-module-scope to cover basic block passes
> - introduce a clear way to separate module IRs as those are being
> printed by -print-after-all
>
> But yes, it should work, and a wrapper that pipes to git fast-import
> seems to be
2018 Sep 28
3
Porting Pass to New PassManager
Is there a reason why `-asan` and `-asan-module` can be mixed, but
function passes and module passes can't be mixed with the new PM?
- Leo
On Thu, Sep 27, 2018 at 3:21 AM Fedor Sergeev <fedor.sergeev at azul.com> wrote:
>
> On 09/27/2018 12:25 PM, Philip Pfaffe wrote:
>>
>> `opt < %s -passes='asan' -asan-module -S`
>
> asan-module is another
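As a reference point for how function passes and module passes do mix under the new PM: the usual pattern is to nest a FunctionPassManager inside a ModulePassManager via an adaptor. A minimal C++ sketch, with illustrative pass choices (SimplifyCFG as the function pass, GlobalDCE as the module pass) and a hypothetical helper name:

  #include "llvm/IR/Module.h"
  #include "llvm/IR/PassManager.h"
  #include "llvm/Passes/PassBuilder.h"
  #include "llvm/Transforms/IPO/GlobalDCE.h"
  #include "llvm/Transforms/Scalar/SimplifyCFG.h"

  using namespace llvm;

  // Hypothetical helper: run a function pass and a module pass over M in
  // one new-PM pipeline.
  static void runNestedPipeline(Module &M) {
    LoopAnalysisManager LAM;
    FunctionAnalysisManager FAM;
    CGSCCAnalysisManager CGAM;
    ModuleAnalysisManager MAM;

    PassBuilder PB;
    PB.registerModuleAnalyses(MAM);
    PB.registerCGSCCAnalyses(CGAM);
    PB.registerFunctionAnalyses(FAM);
    PB.registerLoopAnalyses(LAM);
    PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

    FunctionPassManager FPM;
    FPM.addPass(SimplifyCFGPass());            // a function pass

    ModulePassManager MPM;
    MPM.addPass(createModuleToFunctionPassAdaptor(std::move(FPM)));
    MPM.addPass(GlobalDCEPass());              // a module pass after it

    MPM.run(M, MAM);
  }

The equivalent on the opt command line is the nested -passes syntax, e.g. -passes='function(simplifycfg),globaldce'.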
2018 Mar 15
2
Commit module to Git after each Pass
Does git-commit-after-all print correctly after all the passes? Maybe I
messed it up and it skips some passes, therefore having less to do?
Either that, or piping has a higher cost than writing to a file. It looks
like it surprisingly spends much less time in system mode when going through
a file. Maybe that's because the file is consistently around the same size
and is mmapped into memory
2018 Mar 14
2
Commit module to Git after each Pass
The print-module-after-all type of option exists in upstream:
-print-module-scope - When printing IR for print-[before|after]{-all} always print a module IR
commit 7d160f714357f6784ead669ce516e94991c12e5a
Author: Fedor Sergeev <fedor.sergeev at azul.com>
Date: Fri Dec 1 17:42:46 2017 +0000
IR
2018 Mar 15
0
Commit module to Git after each Pass
Hmm...
I tried Alexandre's fix from D44244 and surprisingly it appears that
just using -print-module-scope w/o
any additional git actions is waaaay slower on my testcase than
-git-commit-module-all.
Hell, even a plain -print-after-all is slower:
] time R/bin/opt -O3 some-ir.ll -disable-output -git-commit-after-all
2>/dev/null
real 0m8.041s
user 0m7.133s
sys 0m0.936s
] time
2018 Sep 26
2
OptBisect implementation for new pass manager
But they're deeply connected. I debug codegen problems all the time.
That opt-bisect doesn't work with codegen is really unfortunate.
If opt-bisect should work with codegen then we need to think about how
codegen will work with the new PM.
I agree that whether or not the new PM becomes default is somewhat
orthogonal but eventually it will and at that point I hope we have a
functioning
2018 Sep 25
2
Porting Pass to New PassManager
Frontends _are_ using PassBuilder, but they need to hook into the default
pipeline creation to insert the sanitizer passes.
On Tue, Sep 25, 2018 at 12:15 PM Fedor Sergeev <fedor.sergeev at azul.com>
wrote:
> Hmm... frontends should be using PassBuilder anyway.
> And if they are using PassBuilder then they are using PassRegistry.def as
> well - all the
>
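For reference, the hook-in typically looks like registering a callback at one of PassBuilder's extension points before building the default pipeline. A minimal sketch; the callback signature follows recent PassBuilder headers (it has changed across releases), GlobalDCEPass stands in for a sanitizer module pass, and addMyPasses is a hypothetical helper:

  #include "llvm/IR/PassManager.h"
  #include "llvm/Passes/OptimizationLevel.h"
  #include "llvm/Passes/PassBuilder.h"
  #include "llvm/Transforms/IPO/GlobalDCE.h"

  using namespace llvm;

  // Hypothetical helper: register a pass at the start of the default
  // module pipeline. GlobalDCEPass is only a stand-in for e.g. a
  // sanitizer module pass.
  static void addMyPasses(PassBuilder &PB) {
    PB.registerPipelineStartEPCallback(
        [](ModulePassManager &MPM, OptimizationLevel Level) {
          if (Level.getSpeedupLevel() > 0)
            MPM.addPass(GlobalDCEPass());
        });
  }

A frontend would call addMyPasses(PB) before PB.buildPerModuleDefaultPipeline(...), so the inserted pass runs as part of the default optimization pipeline.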
2018 Mar 15
4
Commit module to Git after each Pass
The git-commit-after-all solution has one serious issue - it has hardcoded
git handling, which makes it problematic from many angles (picking a proper
git, selecting the exact way of storing information, creating the repository,
replacing the file, etc.).
Just dumping information in a way that allows easy subsequent machine
processing seems to be a more flexible, less cluttered and overall clean
2018 Nov 08
2
Completeness of -print-after-all
Fedor,
Yes, that is what happens in my case: the loop is fully unrolled and hence 'removed'.
My objection, though, is that there is still IR that could be dumped (i.e. the
function that contained the removed loop, or the entire module), and that is what
I want to have dumped after each pass when I specify -print-after-all. Of course
there may be certain implementation details that could make
2018 Mar 15
0
Commit module to Git after each Pass
For this to be really usable in this setup we need additionally to:
- extend -print-module-scope to cover basic block passes
- introduce a clear way to separate module IRs as those are being
printed by -print-after-all
But yes, it should work, and a wrapper that pipes to git fast-import
seems to be the best way to handle it.
regards,
Fedor.
On 03/15/2018 12:31 AM, Daniel Neilson via