Displaying 20 results from an estimated 7000 matches similar to: "[LLVMdev] TargetDescription string"
2010 May 27
3
[LLVMdev] TargetDescription string documentation
Hello,
I am trying to find out where the complete documentation for the
TargetDescription string is.
I am reading the tutorial and looking at the sparc backend at the same
time and there are some discrepancies. Therefore the documentation
would be extremely valuable but I can't seem to find it.
In the tutorial it shows the string "E-p:32:32-f128:128:128",
but the real
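The string in question is the LLVM datalayout string documented in LangRef. As an illustrative sketch only (this is not LLVM's real parser, and the field meanings are taken from the LangRef "datalayout" section: E/e = endianness, p = pointer size and alignment, fN = N-bit float, all values in bits), a small toy parser shows how such a string decomposes:

```python
# Toy parser for an LLVM datalayout string -- illustration only,
# NOT LLVM's actual implementation. Per LangRef: "E"/"e" give
# endianness; "p:32:32" gives pointer size and ABI alignment;
# "f128:128:128" gives size, ABI alignment, preferred alignment.

def parse_datalayout(s):
    specs = {}
    for part in s.split("-"):
        if part in ("E", "e"):
            specs["endian"] = "big" if part == "E" else "little"
        else:
            fields = part.split(":")
            kind, sizes = fields[0], [int(f) for f in fields[1:]]
            specs[kind] = sizes  # e.g. 'p' -> [32, 32]
    return specs

layout = parse_datalayout("E-p:32:32-f128:128:128")
print(layout["endian"])  # big
print(layout["p"])       # [32, 32]
print(layout["f128"])    # [128, 128]
```

The tutorial's example string thus reads: big-endian target, 32-bit pointers with 32-bit alignment, and 128-bit floats aligned to 128 bits.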
2010 May 27
0
[LLVMdev] TargetDescription string documentation
Paulo J. Matos wrote:
> Hello,
>
> I am trying to find out where the complete documentation for the
> TargetDescription string is.
> I am reading the tutorial and looking at the sparc backend at the same
> time and there are some discrepancies. Therefore the documentation
> would be extremely valuable but I can't seem to find it.
>
> In the tutorial it
2015 Dec 09
2
persuading licm to do the right thing
I'm trying to make the IR "better", in a machine-independent fashion,
without having to do any lowering.
I've written code that rewrites GEPs as simple adds and multiplies,
which helps a lot, but there's still some sort of re-canonicalization
that's getting in my way. Is there perhaps a way to suppress it?
Thanks,
Preston
On Wed, Dec 9, 2015 at 7:47 AM, Mehdi Amini
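The rewrite Preston describes, turning a GEP into explicit adds and multiplies, comes down to plain address arithmetic. A hedged sketch (the struct size and field offset below are invented for illustration):

```python
# Sketch of what "rewriting GEPs as simple adds and multiplies" means:
# a GEP like  getelementptr %struct, ptr %base, i64 %i, i32 %field
# computes an address; decomposed, it is just a mul and two adds.

def gep_address(base, i, struct_size, field_offset):
    # base + i * sizeof(struct) + offsetof(field)
    return base + i * struct_size + field_offset

# With the arithmetic exposed, an optimizer can hoist the invariant
# parts and strength-reduce the i * struct_size term across a loop,
# which the opaque GEP form may hide from licm and value numbering.
addr = gep_address(0x1000, 3, 24, 8)
print(hex(addr))  # 0x1050
```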
2015 Dec 09
2
persuading licm to do the right thing
I suppose your view is reasonable, and perhaps common.
My own "taste" has always preferred machine-independent code
that is as simple as possible, with GEPs reduced to nothing more than an
add, etc., i.e., quite RISC-like. Then optimize it to reduce the total number
of operations (as best we can), then raise the level during instruction
selection, taking advantage of available instructions.
2015 Dec 09
2
persuading licm to do the right thing
On some targets with limited addressing modes,
getting that 64-bit relocatable but loop-invariant value into a register
requires several instructions. I'd like those several instructions outside
the loop, where they belong.
Yes, my experience is that something (I assume instcombine) recanonicalizes.
Thanks,
Preston
On Tue, Dec 8, 2015 at 11:21 PM, Mehdi Amini <mehdi.amini at
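The hoisting Preston wants from licm can be illustrated in source form. In this sketch, `materialize()` is an invented stand-in for the several instructions a target with limited addressing modes needs to build a 64-bit relocatable address in a register:

```python
# Illustration of hoisting a loop-invariant address materialization.
# materialize() stands in for a multi-instruction sequence (e.g.
# lui/ori or movz/movk pairs on real hardware); here it is trivial.

def materialize(symbol_addr):
    return symbol_addr

def loop_naive(data, symbol_addr):
    total = 0
    for x in data:
        base = materialize(symbol_addr)  # redone every iteration
        total += base + x
    return total

def loop_hoisted(data, symbol_addr):
    base = materialize(symbol_addr)      # loop-invariant: hoisted once
    total = 0
    for x in data:
        total += base + x
    return total

assert loop_naive([1, 2, 3], 0x4000) == loop_hoisted([1, 2, 3], 0x4000)
```

Both loops compute the same value; the second does the expensive materialization once, outside the loop, which is exactly the transformation being asked of licm.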
2015 Dec 09
3
persuading licm to do the right thing
A GEP can represent a potentially large tree of instructions.
Seems like all the sub-trees are hidden from optimization;
that is, I never see licm or value numbering doing anything with them.
If I rewrite the GEPs as lots of little adds and multiplies,
then opt will do a better job (I speculate this happens during lowering).
One of the computations that's hidden in the GEP in my example
is
2015 Dec 09
3
persuading licm to do the right thing
I understand that GEPs do not access memory.
They do a (possibly expensive) address calculation,
perhaps adding a few values to a label and leaving the result in a register.
Getting a label into a register is (to me) just like loading a 64-bit
integer value into a register. It can happen in many places and it can
cost a few instructions and several bytes. I'd like to see such things commoned
2015 Dec 09
2
persuading licm to do the right thing
When I compile two different modules using
clang -O -S -emit-llvm
I get different .ll files, no surprise.
The first looks like
double *v;
double zap(long n) {
double sum = 0;
for (long i = 0; i < n; i++)
sum += v[i];
return sum;
}
yielding
@v = common global double* null, align 8
; Function Attrs: nounwind readonly uwtable
define double @zap(i64 %n) #0 {
entry:
%cmp4 =
2010 May 27
0
[LLVMdev] TargetDescription string documentation
On Thu, May 27, 2010 at 11:18 AM, Paulo J. Matos <pocmatos at gmail.com> wrote:
> On Thu, May 27, 2010 at 7:09 PM, John Criswell <criswell at uiuc.edu> wrote:
>> I believe what you want is documented here:
>>
>> http://llvm.org/docs/LangRef.html#datalayout
>>
>
> Just a note, since it might be a bug on the backend or documentation.
> It says on the
2012 Nov 13
2
[LLVMdev] loop carried dependence analysis?
Hi all,
Unfortunately, all my hunks fail when I apply the command:
patch -p1 < da.patch
The problem might be due to the fact that da.patch file was created against
revision 167549, but I am on revision 167719 (I believe the most recent
one).
I am not sure if this causes the problem, but Preston, may I ask you to
generate the patch file against revision 167719?
Thanks in advance.
On
2012 Nov 13
2
[LLVMdev] loop carried dependence analysis?
Erkan, you're right. Sorry about that.
Attached is the most recent version.
Preston
Hi Preston,
> I am trying to use DA as well. I used your example and commands that you
> wrote in order to get DA information.
> However, it does not report any dependence info.
> I am wondering whether your local copy differs from the one on the
> repository ?
> Thanks.
> Erkan.
2012 Nov 13
0
[LLVMdev] loop carried dependence analysis?
Preston, thanks for the explanation and patch. Now it's printing the
direction and distance values.
On Tue, Nov 13, 2012 at 12:22 PM, Preston Briggs
<preston.briggs at gmail.com>wrote:
> Erkan, you're right. Sorry about that.
> Attached is the most recent version.
>
> Preston
>
>
>
> Hi Preston,
>> I am trying to use DA as well. I used your example
2012 Nov 14
0
[LLVMdev] loop carried dependence analysis?
On 13.11.2012, at 10:46, erkan diken <erkandiken at gmail.com> wrote:
Hi all,
Unfortunately, all my hunks fail when I apply the command:
patch -p1 < da.patch
The problem might be due to the fact that da.patch file was created against
revision 167549, but I am on revision 167719 (I believe the most recent
one).
I am not sure if this causes the problem, but Preston, may I ask you to
2012 Nov 02
2
[LLVMdev] DependenceAnalysis and PR14241
On 11/02/2012 11:02 AM, Hal Finkel wrote:
> ----- Original Message -----
>> From: "Tobias Grosser" <tobias at grosser.es>
>> To: "preston briggs" <preston.briggs at gmail.com>
>> Cc: "Benjamin Kramer" <benny.kra at gmail.com>, "LLVM Developers Mailing List" <llvmdev at cs.uiuc.edu>
>> Sent: Friday, November
2018 Sep 11
2
linear-scan RA
The phi instruction is irrelevant; just the way I think about things.
The question is if the allocator believes that t0 and t2 interfere.
Perhaps the coalescing example was too simple.
In the general case, we can't coalesce without a notion of interference.
My worry is that looking at interference by ranges of instruction numbers
leads to inaccuracies when a range is introduced by a copy.
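Preston's worry can be made concrete with a toy example (instruction numbers invented): if liveness is approximated by instruction-number intervals, a copy can make two ranges overlap numerically even though the copied value never truly conflicts with its source, so an interval test over-reports interference where a precise Chaitin-style test would not:

```python
# Toy sketch of interval-based "interference" (invented numbering).
# t0 is live over [1, 10]; t2 is created by "t2 = copy t0" at 5 and
# is live over [5, 12]. Interval overlap says they interfere, yet t2
# holds the same value as t0, so the copy alone should not count as
# interference under a precise definition.

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

t0 = (1, 10)
t2 = (5, 12)   # range introduced by a copy of t0
print(intervals_overlap(t0, t2))  # True: interval test reports interference
```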
2012 Oct 03
3
[LLVMdev] Does LLVM optimize recursive call?
On Wed, Oct 3, 2012 at 10:15 AM, Matthieu Moy
<Matthieu.Moy at grenoble-inp.fr> wrote:
> Preston Briggs <preston.briggs at gmail.com> writes:
>> Think about costs asymptotically; that's what matters. Calls and
>> returns require constant time, just like addition and multiplication.
>
> Constant time, but not necessarily constant memory.
>
> Deep recursion
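Matthieu's distinction, constant time per call but not constant memory, can be demonstrated directly; a small sketch:

```python
import sys

# Each call costs O(1) time, but deep recursion costs O(n) stack
# frames; the iterative version uses O(1) extra memory.

def sum_rec(n):
    return 0 if n == 0 else n + sum_rec(n - 1)

def sum_iter(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_rec(100), sum_iter(100))  # 5050 5050

# Deep recursion exhausts the stack even though each call is cheap:
try:
    sys.setrecursionlimit(1000)
    sum_rec(5000)
except RecursionError:
    print("deep recursion exhausts the stack")
```

This is why a compiler that turns the recursion into a loop (tail-call elimination) changes the memory behavior, not the asymptotic time per operation.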
2018 Sep 11
2
linear-scan RA
Yes, I quite liked the things I've read about the PBQP allocator.
Given what the hardware folks have to go through to get 1% improvements in
scalar code,
spending 20% (or whatever) compile time (under control of a flag) seems
like nothing.
And falling back on "average code" is a little disingenuous.
People looking for performance don't care about average code;
they care about
2012 Nov 02
0
[LLVMdev] DependenceAnalysis and PR14241
Here's the current code (abstracted a bit)
const Instruction *Src,
const Instruction *Dst,
// make sure they are loads and stores, then
const Value *SrcPtr = getPointerOperand(Src); // hides a little casting, then Src->getPointerOperand
const Value *DstPtr = getPointerOperand(Dst); // ditto
// see how underlying objects alias, then
const GEPOperator *SrcGEP =
2018 Sep 11
2
linear-scan RA
> On Sep 10, 2018, at 5:25 PM, Matthias Braun <mbraun at apple.com> wrote:
>
>
>
>> On Sep 10, 2018, at 5:11 PM, Preston Briggs <preston.briggs at gmail.com <mailto:preston.briggs at gmail.com>> wrote:
>>
>> The phi instruction is irrelevant; just the way I think about things.
>> The question is if the allocator believes that t0 and t2
2018 Sep 11
2
linear-scan RA
Hi,
Using Chaitin's approach, removing a copy via coalescing could expose more
opportunities for coalescing.
So he would iteratively rebuild the interference graph and check for more
opportunities.
Chaitin was also careful to make sure that the source and destination of a
copy didn't interfere unnecessarily (because of the copy alone);
that is, his approach to interference was very
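The iterative coalescing described above can be sketched as a fixpoint loop over copies on an interference graph. This is a simplified illustration with invented data structures, not Chaitin's actual algorithm or any LLVM code:

```python
# Simplified sketch of Chaitin-style iterative coalescing.
# neighbors: {node: set of interfering nodes} (kept symmetric);
# copies: list of (src, dst) copy instructions. Merging one copy can
# enable further merges, so we loop until nothing changes.

def coalesce(neighbors, copies):
    alias = {}                      # dst -> representative it merged into

    def find(x):
        while x in alias:
            x = alias[x]
        return x

    changed = True
    while changed:
        changed = False
        for src, dst in copies:
            s, d = find(src), find(dst)
            if s == d or d in neighbors[s]:
                continue            # already merged, or truly interfere
            for n in neighbors[d]:  # s inherits d's interference edges
                neighbors[s].add(n)
                neighbors[n].discard(d)
                neighbors[n].add(s)
            alias[d] = s
            changed = True
    return alias

# t0 interferes with t2, so "t2 = copy t0" cannot be coalesced,
# while "t1 = copy t0" can.
g = {"t0": {"t2"}, "t1": set(), "t2": {"t0"}}
merged = coalesce(g, [("t0", "t1"), ("t0", "t2")])
print(merged)  # {'t1': 't0'}
```

The `d in neighbors[s]` check is where Chaitin's care matters: if the copy itself were (wrongly) recorded as an interference edge, every copy would block its own coalescing.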