Displaying 20 results from an estimated 5000 matches similar to: "[LLVMdev] Vector promotions for calling conventions"
2010 Aug 31
0
[LLVMdev] "equivalent" .ll files diverge after optimizations are applied
On Aug 31, 2010, at 1:21 PM PDT, Argyrios Kyrtzidis wrote:
>
> Just to be clear, are you saying that the fact that, after using llc
> on the second IR, the produced asm is using MM registers, indicates
> a bug?
Yes. It's not immediately obvious whether the bug is in opt or in llc,
though.
Chris was doing work involving <2 x float> and may know about this.
>
2010 Aug 31
0
[LLVMdev] "equivalent" .ll files diverge after optimizations are applied
Using MM registers is wrong unless the user has specifically asked for
it, which doesn't seem to be the case here.
In the awesome MMX architecture, touching an MM register makes
subsequent x87 operations fail unless an EMMS instruction is issued
first; none of the compilers here are smart enough to insert EMMS
instructions in the right places, so the only safe thing is not to use
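A minimal sketch of the EMMS rule described above, written with the MMX intrinsics from <mmintrin.h> (illustrative only, not code from this thread): _mm_empty() is the intrinsic that emits EMMS, and it must run before any later x87 floating-point code.

#include <mmintrin.h>

// Add four 16-bit lanes with MMX, then release the MMX state.
// Until _mm_empty() (EMMS) runs, the x87 register stack is aliased to
// the MM registers, so subsequent x87 floating-point code would
// misbehave; this is why a compiler must not use MM registers unless
// it also knows where to place EMMS.
int add_lanes(short a0, short a1, short a2, short a3,
              short b0, short b1, short b2, short b3) {
    __m64 va = _mm_set_pi16(a3, a2, a1, a0);
    __m64 vb = _mm_set_pi16(b3, b2, b1, b0);
    __m64 vr = _mm_add_pi16(va, vb);
    int low = _mm_cvtsi64_si32(vr);   // low two lanes, packed into an int
    _mm_empty();                      // EMMS: required before any x87 FP use
    return low;
}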
2010 Aug 31
2
[LLVMdev] "equivalent" .ll files diverge after optimizations are applied
Here are the optimized versions:
$ opt -std-compile-opts unopt-pass.ll -o - | llvm-dis -o -
[...]
define %3 @_ZN7WebCore15GraphicsContext19roundToDevicePixelsERKNS_9FloatRectE(%"class.WebCore::GraphicsContext"* %this, %"struct.WebCore::FloatRect"* %rect) nounwind ssp align 2 {
%roundedOrigin = alloca %"class.WebCore::FloatSize", align 4 ;
2014 Jul 23
4
[LLVMdev] the clang 3.5 loop optimizer seems to kick in unintentionally for simple loops
the clang 3.5 loop optimizer seems to kick in unintentionally for simple loops
the very simple example
----
const int SIZE = 3;
int the_func(int* p_array)
{
    int dummy = 0;
#if defined(ITER)
    for(int* p = &p_array[0]; p < &p_array[SIZE]; ++p) dummy += *p;
#else
    for(int i = 0; i < SIZE; ++i) dummy += p_array[i];
#endif
    return dummy;
}
int main(int argc, char** argv)
{
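If the goal is simply to keep the optimizer from transforming such a loop, clang's loop pragmas can disable vectorization and interleaving per loop. A sketch under that assumption (not from the original mail; the pragma requires clang 3.5 or newer):

const int SIZE = 3;
int the_func(int* p_array)
{
    int dummy = 0;
    // Ask clang not to vectorize or interleave this particular loop.
#pragma clang loop vectorize(disable) interleave(disable)
    for (int i = 0; i < SIZE; ++i)
        dummy += p_array[i];
    return dummy;
}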
2016 Jun 29
0
avx512 JIT backend generates wrong code on <4 x float>
Hi Frank,
I recommend trying trunk LLVM. AVX-512 development has been very active recently.
-Hal
----- Original Message -----
> From: "Frank Winter via llvm-dev" <llvm-dev at lists.llvm.org>
> To: "LLVM Dev" <llvm-dev at lists.llvm.org>
> Sent: Wednesday, June 29, 2016 2:41:39 PM
> Subject: [llvm-dev] avx512 JIT backend generates wrong code on <4
2016 Jun 30
1
avx512 JIT backend generates wrong code on <4 x float>
Hi Hal!
Thanks, but unfortunately it didn't help. The exact same assembler
instructions are generated for both 3.8 (yesterday) and trunk (from today).
So, this really looks like a bug.
Best,
Frank
On 06/29/2016 03:48 PM, Hal Finkel wrote:
> Hi Frank,
>
> I recommend trying trunk LLVM. AVX-512 development has been very active recently.
>
> -Hal
>
> ----- Original
2010 Aug 31
5
[LLVMdev] "equivalent" .ll files diverge after optimizations are applied
Hi,
I've attached 2 .ll files which are supposed to be equivalent, but 'unopt-fail.ll' causes a crash in WebKit's test suite while 'unopt-pass.ll' does not. I can't give more details about the crash: when I run the crashing test in isolation it passes, but when I run the full suite it crashes; it boggles the mind.
Below I provide the optimized asm that is produced from
2016 Jun 29
2
avx512 JIT backend generates wrong code on <4 x float>
Hi!
When compiling the attached module with the JIT engine on an Intel KNL I
see wrong code getting emitted. I attach a complete exploit program
which shows the bug in LLVM 3.8. It loads and JIT compiles the module
and prints the assembler. I stumbled on this since the result of an
actual calculation was wrong. So, it's not only the text version of the
assembler but also the machine
2016 Apr 01
2
RFC: A proposal for vectorizing loops with calls to math functions using SVML
RFC: A proposal for vectorizing loops with calls to math functions using SVML (short
vector math library).
=========
Overview
=========
Very simply, SVML (Intel short vector math library) functions are vector variants of
scalar math functions that take vector arguments, apply an operation to each
element, and store the result in a vector register. These vector variants can be
generated by the
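For context, the kind of loop the proposal targets, in source form (an illustrative sketch; do_sines is a made-up name). With a vector math library mapping in place (for example the -fveclib=SVML flag in newer clangs), the loop vectorizer can replace the scalar sinf calls in the vector body with calls to an SVML vector variant instead of serializing them:

#include <math.h>

// Candidate loop: one scalar libm call per element. After vectorization
// with an SVML mapping, each group of lanes becomes a single call to a
// vector sinf variant.
void do_sines(const float* in, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = sinf(in[i]);
}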
2016 Apr 04
2
RFC: A proposal for vectorizing loops with calls to math functions using SVML
Hi Sanjay,
For sincos calls, I’m currently just going through isTriviallyVectorizable(), which was good enough to get things working so that I could test the translation. I don’t see why this cannot be changed to use addVectorizableFunctionsFromVecLib(). The other functions that I’m working with are already vectorized using the loop pragma. Those include sin, cos, exp, log, and pow.
From: Sanjay
2016 Jun 23
2
AVX512 instruction generated when JIT compiling for an avx2 architecture
With LLVM 3.8 the JIT compiler engine generates an AVX512 instruction
although I target an 'avx2' CPU (Intel Core i7).
I just downloaded the most recent 3.8 and still it happens.
It happens with this input module:
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
define void @module_cFFEMJ(i64 %lo, i64 %hi, i64 %myId, i1 %ordered, i64
%start, i32* noalias align 32
2016 Jun 23
2
AVX512 instruction generated when JIT compiling for an avx2 architecture
On 06/23/2016 12:56 PM, Craig Topper wrote:
> Can you check what value "getHostCPUName" returned?
getHostCPUName() = skylake
>
> On Thu, Jun 23, 2016 at 9:53 AM, Frank Winter via llvm-dev
> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>
> With LLVM 3.8 the JIT compiler engine generates an AVX512
> instruction although I
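One way to avoid depending on host-CPU autodetection is to pin the JIT target explicitly when building the engine. A sketch assuming LLVM 3.8's MCJIT EngineBuilder API (makeEngine is a made-up helper; the exact CPU and feature strings depend on the machine):

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <string>
#include <vector>

// Build an MCJIT engine pinned to an AVX2-class target instead of
// whatever getHostCPUName()/getHostCPUFeatures() report.
llvm::ExecutionEngine* makeEngine(std::unique_ptr<llvm::Module> M) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    std::vector<std::string> attrs = {"+avx2", "-avx512f"};
    std::string err;
    llvm::ExecutionEngine* EE = llvm::EngineBuilder(std::move(M))
                                    .setErrorStr(&err)
                                    .setMCPU("core-avx2")
                                    .setMAttrs(attrs)
                                    .create();
    return EE;   // nullptr on failure; 'err' then holds the reason
}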
2011 Oct 17
0
[LLVMdev] LLVM Build Bot failure on llvm-x86_64-ubuntu
Looks like pinsr is not being generated on llvm-x86_64-ubuntu...
jabbey at davinci:~$ /home/jabbey/src/osuosl/buildbot/sandbox/llvm-x86_64-ubuntu/llvm-x86_64-ubuntu/llvm/Debug+Asserts/bin/llc < /home/jabbey/src/osuosl/buildbot/sandbox/llvm-x86_64-ubuntu/llvm-x86_64-ubuntu/llvm/test/CodeGen/X86/mmx-pinsrw.ll -mtriple=x86_64-linux -mattr=+mmx,+sse2
produces:
.file "<stdin>"
2016 Aug 05
2
enabling interleaved access loop vectorization
Regarding InterleavedAccessPass - sure, but proper strided/interleaved
access optimization ought to have a positive impact even without target
support.
Case in point - Hal enabled it on PPC last September. An important
difference vs. x86 seems to be that arbitrary shuffles are cheap on PPC,
but, as I said below, I hope we can enable it on x86 with a conservative
cost function, and still get
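For reference, a typical interleaved-access loop of the kind this discussion is about (illustrative only; scale_complex is a made-up name). Consecutive iterations touch memory with a stride, so vectorizing it needs either wide loads plus shuffles or dedicated interleaved-access support, and that is exactly what the cost model has to price:

// Stride-2 access: real and imaginary parts stored alternately.
void scale_complex(float* data, float re, float im, int n) {
    for (int i = 0; i < n; ++i) {
        data[2 * i]     *= re;   // even elements
        data[2 * i + 1] *= im;   // odd elements
    }
}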
2015 Nov 25
2
[RFC] Introducing a vector reduction add instruction.
On Wed, Nov 25, 2015 at 2:32 PM, Hal Finkel <hfinkel at anl.gov> wrote:
> Hi Cong,
>
> After reading the original RFC and this update, I'm still not entirely sure I understand the semantics of the flag you're proposing to add. Does it have something to do with the ordering of the reduction operations?
The flag is only useful for vectorized reduction for now. I'll give
2013 Oct 15
0
[LLVMdev] [llvm-commits] r192750 - Enable MI Sched for x86.
I should mention a couple of useful self-explanatory LLVM flags for triage:
-enable-misched=false
-verify-misched
-Andy
On Oct 15, 2013, at 4:43 PM, Eric Christopher <echristo at gmail.com> wrote:
> Grats on the work, a long time coming!
>
> Beware the incoming register allocation bugs ;)
>
> -eric
>
> On Tue, Oct 15, 2013 at 4:33 PM, Andrew Trick <atrick at
2016 Aug 05
3
enabling interleaved access loop vectorization
Hi Michael,
Some time back I did some experiments with the interleaved vectorizer and did not find any degradation;
probably my tests/benchmarks are not extensive enough to cover much.
Elina is the right person to comment on it, as she has already seen cases where it hinders performance.
For the interleaved vectorizer on X86 we do not have any specific costing; it goes to BasicTTI, where the costing is not
2015 Jul 29
2
[LLVMdev] x86-64 backend generates aligned ADDPS with unaligned address
When I compile the attached IR with LLVM 3.6
llc -march=x86-64 -o f.S f.ll
it generates an aligned ADDPS with an unaligned address. See the attached f.S;
here is an extract:
addq $12, %r9    # $12 is not a multiple of 16, thus for xmm0 this is unaligned
xorl %esi, %esi
.align 16, 0x90
.LBB0_1: # %loop2
2015 Nov 19
5
[RFC] Introducing a vector reduction add instruction.
After some attempts to implement reduce-add in LLVM, I found an
easier way to detect reduce-add without introducing new IR operations.
The basic idea is to annotate the phi node instead of the add (so that it is
easier to handle other reduction operations). In the PHINode class, we can
add a flag indicating whether the phi node is a reduction one (the flag can
be set in the loop vectorizer for vectorized phi nodes).
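As a concrete picture of the pattern being discussed (an illustrative sketch, not part of the RFC): after vectorization, the scalar accumulator below becomes a vector phi that adds several lanes per iteration and is reduced to a scalar after the loop; that vector phi is what the proposed flag would mark.

// A plain sum reduction. The loop vectorizer rewrites 'sum' into a
// vector accumulator (a vector phi) plus a final horizontal add; the
// proposed flag would tag that phi so later passes know that only the
// final sum of all lanes matters, not the individual lane values.
int sum_array(const int* a, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum += a[i];
    return sum;
}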
2015 Nov 25
2
[RFC] Introducing a vector reduction add instruction.
----- Original Message -----
> From: "Xinliang David Li" <davidxl at google.com>
> To: "Cong Hou" <congh at google.com>
> Cc: "Hal Finkel" <hfinkel at anl.gov>, "llvm-dev" <llvm-dev at lists.llvm.org>
> Sent: Wednesday, November 25, 2015 5:17:58 PM
> Subject: Re: [llvm-dev] [RFC] Introducing a vector reduction add