Displaying 7 results from an estimated 7 matches for "performloadpre".
2015 Jul 15
4
[LLVMdev] Register pressure mechanism in PRE or Smarter rematerialization/split/spiller/coalescing ?
...a8a..a3387e3 100644
--- a/lib/Transforms/Scalar/GVN.cpp
+++ b/lib/Transforms/Scalar/GVN.cpp
@@ -1767,7 +1767,7 @@ bool GVN::processNonLocalLoad(LoadInst *LI) {
}
// Step 4: Eliminate partial redundancy.
- if (!EnablePRE || !EnableLoadPRE)
+ if (!EnableLoadPRE)
return false;
return PerformLoadPRE(LI, ValuesPerBlock, UnavailableBlocks);
This will disable Scalar PRE without disabling load PRE.
(note, again, however, that load PRE can create exactly the same GEP
situation you are referring to)
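For context, the two booleans in that guard are command-line flags declared near the top of GVN.cpp. A minimal sketch of the declarations, assuming that era's defaults (not copied verbatim from the thread):

// Minimal sketch of the flag declarations in lib/Transforms/Scalar/GVN.cpp,
// assuming that era's defaults. With the patch above applied, running
// `opt -enable-pre=false ...` turns off scalar PRE while load PRE, now
// guarded only by EnableLoadPRE, keeps running.
#include "llvm/Support/CommandLine.h"
using namespace llvm;

static cl::opt<bool> EnablePRE("enable-pre", cl::init(true), cl::Hidden);
static cl::opt<bool> EnableLoadPRE("enable-load-pre", cl::init(true));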
2018 Sep 14
2
Generalizing load/store promotion in LICM
For doing PRE on the load, it looks like there are only two things stopping GVN PRE from doing it:
* GVN::PerformLoadPRE doesn't hoist loads that are conditional (see the sketch after this list). Probably this can be overcome with some kind of
heuristic that allows it to happen in loops when the blocks where a load would have to be inserted are outside
the loop.
* IsFullyAvailableInBlock goes around the loop and double-counts the entry-edg...
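To make the first restriction concrete, here is a hypothetical source-level example (function and variable names are invented for illustration, not taken from the thread):

// Hypothetical example of the "conditional load" restriction (names invented).
// *p is loop-invariant, but the load only executes when flags[i] is nonzero;
// hoisting it into the preheader would introduce a load on runs where the
// original program never dereferenced p, so GVN::PerformLoadPRE declines.
int sum_flagged(const int *p, const int *flags, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    if (flags[i])
      s += *p; // conditional, partially redundant across iterations
  return s;
}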
2015 Jul 17
2
[LLVMdev] Register pressure mechanism in PRE or Smarter rematerialization/split/spiller/coalescing ?
...GVN.cpp
> +++ b/lib/Transforms/Scalar/GVN.cpp
> @@ -1767,7 +1767,7 @@ bool GVN::processNonLocalLoad(LoadInst *LI) {
> }
>
> // Step 4: Eliminate partial redundancy.
> - if (!EnablePRE || !EnableLoadPRE)
> + if (!EnableLoadPRE)
> return false;
>
> return PerformLoadPRE(LI, ValuesPerBlock, UnavailableBlocks);
>
> This will disable Scalar PRE without disabling load PRE.
>
> (note, again, however, that load PRE can create exactly the same GEP situation you are referring to)
>
2017 Jan 13
4
Wrong code bug after GVN/PRE?
...ough so I'm not sure if I
should just write a PR about it and let someone who knows the code look at
it instead.
Anyway, for the bug to trigger I need to run the following passes in the
same opt invocation:
-sroa -instcombine -simplifycfg -instcombine -gvn
The problem seems to be that GVN::PerformLoadPRE triggers and I see a
GVN REMOVING PRE LOAD: %_tmp79 = load i16, i16* %_tmp78, align 2
printout.
If I instead first run
-sroa -instcombine -simplifycfg -instcombine
and then
-gvn
I don't get the
GVN REMOVING PRE LOAD
printout, and the resulting code looks ok to me.
Is this expect...
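For anyone trying to reproduce this: "GVN REMOVING PRE LOAD" is debug output, so it only appears in assertions-enabled builds (e.g. via opt ... -gvn -debug-only=gvn). A rough sketch of where it comes from, assuming that era's GVN.cpp (body elided; only the step that produces the quoted printout is shown):

#define DEBUG_TYPE "gvn"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

// Rough sketch of GVN::PerformLoadPRE from that era's GVN.cpp; LI is the
// original, now fully redundant, load that is about to be removed.
bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
                         UnavailBlkVect &UnavailableBlocks) {
  // ... pick insertion points and materialize the load in each unavailable
  //     predecessor, then build a phi of the per-block available values ...
  DEBUG(dbgs() << "GVN REMOVING PRE LOAD: " << *LI << '\n');
  // ... replace all uses of LI with the phi and erase LI ...
  return true;
}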
2017 Jan 13
2
Wrong code bug after GVN/PRE?
...t it and let someone who knows the code look at it
>> instead.
>>
>> Anyway, for the bug to trigger I need to run the following passes in the
>> same opt invocation:
>> -sroa -instcombine -simplifycfg -instcombine -gvn
>>
>> The problem seems to be that GVN::PerformLoadPRE triggers and I see a
>>
>> GVN REMOVING PRE LOAD: %_tmp79 = load i16, i16* %_tmp78, align 2
>>
>> printout.
>>
>> If I instead first run
>>
>> -sroa -instcombine -simplifycfg -instcombine
>>
>> and then
>>
>> -gvn
>>...
2018 Sep 13
3
Generalizing load/store promotion in LICM
(minor inline additions)
On 09/13/2018 01:51 AM, Chandler Carruth wrote:
> Haven't had time to dig into this, but wanted to add +Alina Sbirlea
> <asbirlea at google.com> to the thread as she has been working on
> promotion and other aspects of LICM for a long time here.
Thanks!
> On Wed, Sep 12, 2018 at 11:41 PM Philip Reames
> <listmail at philipreames.com
2015 Jul 15
3
[LLVMdev] Register pressure mechanism in PRE or Smarter rematerialization/split/spiller/coalescing ?
Hi, Daniel:
Thanks a lot for the detailed background information. We are willing to provide the right fix, but it will take time. Do you mind forwarding me the discussion you had 5 months ago? I may not be able to access it since I only joined the llvmdev list this week.
I did some performance measurements last night; some of our critical benchmarks degraded by up to 30% with your patch, so we have