Displaying 20 results from an estimated 20,000 matches similar to: "[LLVMdev] Adding masked vector load and store intrinsics"

2014 Oct 27
4
[LLVMdev] Adding masked vector load and store intrinsics
We just follow the common recommendation to start with intrinsics: http://llvm.org/docs/ExtendingLLVM.html. - Elena From: Owen Anderson [mailto:resistor at mac.com] Sent: Sunday, October 26, 2014 23:57 To: Demikhovsky, Elena Cc: llvmdev at cs.uiuc.edu; dag at cray.com Subject: Re: [LLVMdev] Adding masked vector load and store intrinsics What is the motivation for using intrinsics
2014 Oct 24
2
[LLVMdev] Adding masked vector load and store intrinsics
> Why can't we represent the loads as select(mask, load(addr), passthru)? This suggests masked-off lanes are free to speculatively load from memory, whereas the proposed semantics are that: > The addressed memory will not be touched for masked-off lanes. In > particular, if all lanes are masked off, no address will be accessed. Ayal. -----Original Message----- From: llvmdev-bounces at
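A minimal IR sketch of the intrinsic semantics being argued for here, written against the llvm.masked.load signature in today's LangRef (opaque pointers; operand order pointer, alignment, mask, passthru, which differs from the operand order in the original 2014 proposal quoted in this thread):

    ; Lanes whose mask bit is 0 are never read from memory; those lanes of
    ; the result are taken from %passthru instead. If the mask is all-zero,
    ; no address is accessed at all.
    declare <4 x i32> @llvm.masked.load.v4i32.p0(ptr, i32, <4 x i1>, <4 x i32>)

    define <4 x i32> @guarded_load(ptr %addr, <4 x i1> %mask, <4 x i32> %passthru) {
      %res = call <4 x i32> @llvm.masked.load.v4i32.p0(ptr %addr, i32 4,
                                                       <4 x i1> %mask,
                                                       <4 x i32> %passthru)
      ret <4 x i32> %res
    }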
2014 Oct 24
3
[LLVMdev] Adding masked vector load and store intrinsics
> For the loads, I'm much less sure. Why can't we represent the loads as select(mask, load(addr), passthru)? It is true that the load might get separated from the select so that isel might not see it (because isel is basic-block local), but we can add some code in CodeGenPrep to fix that for targets on which it is useful to do so (which is a more-general solution than the intrinsic
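For comparison, a sketch of the select-based formulation discussed here (hypothetical function name; note that the full-width load executes unconditionally before the mask is applied, which is the speculation concern raised elsewhere in this thread):

    define <4 x i32> @select_form(ptr %addr, <4 x i1> %mask, <4 x i32> %passthru) {
      ; An ordinary wide load followed by a per-lane select: every lane is
      ; loaded regardless of the mask, so a masked-off lane can still fault.
      %wide = load <4 x i32>, ptr %addr, align 4
      %res  = select <4 x i1> %mask, <4 x i32> %wide, <4 x i32> %passthru
      ret <4 x i32> %res
    }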
2014 Oct 28
2
[LLVMdev] Adding masked vector load and store intrinsics
Many overloaded intrinsics may be replaced with instructions - fabs or fma or sqrt. Chandler will probably explain the criteria. What is the difference between fma and fadd? Or between fptrunc and fabs? A new instruction like %a = loadm <4 x i32>* %addr, <4 x i32> %passthru, i32 4, <4 x i1> %mask is possible, but may not be very useful for most targets. So we start with intrinsics. -
2014 Oct 24
6
[LLVMdev] Adding masked vector load and store intrinsics
> On Oct 24, 2014, at 10:57 AM, Adam Nemet <anemet at apple.com> wrote: > > On Oct 24, 2014, at 4:24 AM, Demikhovsky, Elena <elena.demikhovsky at intel.com> wrote: > >> Hi, >> >> We would like to add support for masked vector loads and stores by introducing new target-independent intrinsics. The loop
2016 Sep 25
5
RFC: New intrinsics masked.expandload and masked.compressstore
Hi Elena, Technically speaking, this seems straightforward. I wonder, however, how target-independent this is in a practical sense; will there be an efficient lowering when targeting any other ISA? I don't want to get into the territory where, because the vectorizer is supposed to be architecture independent, we need to add target-independent intrinsics for all
2016 Sep 26
2
RFC: New intrinsics masked.expandload and masked.compressstore
How would this work in this case? The result would need to affect the legality and cost of the memory instruction. From your poster, it looks like we're talking about loops with constructs like this: for (i = 0; i < N; i++) { if (topVal > b[i]) { *dst = a[i]; dst++; } } Is this loop vectorizable at all without these constructs? Good
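A rough sketch of how the body of the quoted loop could be vectorized with the llvm.masked.compressstore intrinsic proposed in this RFC (value names and the vector width of 8 are illustrative; a real vectorized loop must also advance %dst by the number of set mask bits, e.g. via a popcount of the mask):

    declare void @llvm.masked.compressstore.v8i32(<8 x i32>, ptr, <8 x i1>)

    define void @compress_body(<8 x i32> %a.vec, <8 x i32> %b.vec,
                               i32 %topVal, ptr %dst) {
      ; Broadcast topVal and build the per-lane condition topVal > b[i].
      %top.ins   = insertelement <8 x i32> poison, i32 %topVal, i64 0
      %top.splat = shufflevector <8 x i32> %top.ins, <8 x i32> poison,
                                 <8 x i32> zeroinitializer
      %mask = icmp sgt <8 x i32> %top.splat, %b.vec
      ; Store only the selected lanes of a[], packed contiguously at dst.
      call void @llvm.masked.compressstore.v8i32(<8 x i32> %a.vec, ptr %dst,
                                                 <8 x i1> %mask)
      ret void
    }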
2016 Sep 19
2
RFC: New intrinsics masked.expandload and masked.compressstore
Hi all, the AVX-512 ISA introduces new vector instructions VCOMPRESS and VEXPAND in order to allow vectorization of the following loops with two specific types of cross-iteration dependencies: Compress: for (int i=0; i<N; ++i) if (t[i]) *A++ = expr; Expand: for (i=0; i<N; ++i) if (t[i]) X[i] = *A++; else
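A minimal sketch of the expand side, using the llvm.masked.expandload intrinsic as it appears in the current LangRef (operands: pointer, mask, passthru; the vector width of 8 is arbitrary):

    declare <8 x i32> @llvm.masked.expandload.v8i32(ptr, <8 x i1>, <8 x i32>)

    define <8 x i32> @expand_body(ptr %A, <8 x i1> %t, <8 x i32> %x.old) {
      ; Consecutive elements are read from %A only for lanes whose mask bit
      ; is set; disabled lanes of the result keep the old value of X[i].
      %x.new = call <8 x i32> @llvm.masked.expandload.v8i32(ptr %A, <8 x i1> %t,
                                                            <8 x i32> %x.old)
      ret <8 x i32> %x.new
    }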
2014 Oct 25
2
[LLVMdev] Adding masked vector load and store intrinsics
> So %passthrough can *only* be undef or zeroinitializer? No, it can be any value, including undef and zeroinitializer. While designing, we considered both zero and merge semantics and decided that merge semantics is better because it covers zero semantics if you use zeroinitializer as the %passthru. - Elena -----Original Message----- From: dag at cray.com [mailto:dag at cray.com] Sent:
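For example, zero semantics fall out of the merge form simply by passing zeroinitializer as the passthru operand (sketch, using the current LangRef operand order):

    declare <4 x i32> @llvm.masked.load.v4i32.p0(ptr, i32, <4 x i1>, <4 x i32>)

    define <4 x i32> @zeroing_load(ptr %addr, <4 x i1> %mask) {
      ; Masked-off lanes of the result are simply 0.
      %res = call <4 x i32> @llvm.masked.load.v4i32.p0(ptr %addr, i32 4,
                                                       <4 x i1> %mask,
                                                       <4 x i32> zeroinitializer)
      ret <4 x i32> %res
    }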
2014 Dec 18
8
[LLVMdev] Indexed Load and Store Intrinsics - proposal
Hi, Recent Intel architectures AVX-512 and AVX2 provide vector gather and/or scatter instructions. Gather/scatter instructions allow read/write access to multiple memory addresses. The addresses are specified using a base address and a vector of indices. We'd like Vectorizers to tap this functionality, and propose to do so by introducing new intrinsics: VectorValue = @llvm.sindex.load
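For reference, the gather intrinsic that exists in today's LangRef, llvm.masked.gather, takes a vector of pointers rather than the base/index/scale operands proposed here; the address vector can be built with a single getelementptr over the vector of indices (sketch, illustrative names):

    declare <4 x i32> @llvm.masked.gather.v4i32.v4p0(<4 x ptr>, i32, <4 x i1>, <4 x i32>)

    define <4 x i32> @indexed_load(ptr %base, <4 x i64> %indices,
                                   <4 x i1> %mask, <4 x i32> %passthru) {
      ; One GEP with a vector index yields a vector of addresses.
      %ptrs = getelementptr inbounds i32, ptr %base, <4 x i64> %indices
      %res  = call <4 x i32> @llvm.masked.gather.v4i32.v4p0(<4 x ptr> %ptrs, i32 4,
                                                            <4 x i1> %mask,
                                                            <4 x i32> %passthru)
      ret <4 x i32> %res
    }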
2014 Dec 21
3
[LLVMdev] Indexed Load and Store Intrinsics - proposal
On 12/18/2014 11:56 AM, dag at cray.com wrote: > "Demikhovsky, Elena" <elena.demikhovsky at intel.com> writes: > >> Semantics: >> For i=0,1,…,N-1: if (Mask[i]) { *(BaseAddr + VectorOfIndices[i]*Scale) = VectorValue[i]; } >> VectorValue: any float or integer vector type. >> BaseAddr: a pointer; may be zero if full address is placed in the
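The store side of the quoted semantics maps onto llvm.masked.scatter from the current LangRef (operands: value, vector of addresses, alignment, mask); the Scale factor is folded into the getelementptr element type in this sketch:

    declare void @llvm.masked.scatter.v4i32.v4p0(<4 x i32>, <4 x ptr>, i32, <4 x i1>)

    define void @indexed_store(<4 x i32> %value, ptr %base,
                               <4 x i64> %indices, <4 x i1> %mask) {
      ; For each lane i with Mask[i] set: *(BaseAddr + VectorOfIndices[i]) = VectorValue[i].
      %ptrs = getelementptr inbounds i32, ptr %base, <4 x i64> %indices
      call void @llvm.masked.scatter.v4i32.v4p0(<4 x i32> %value, <4 x ptr> %ptrs,
                                                i32 4, <4 x i1> %mask)
      ret void
    }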
2015 Mar 15
2
[LLVMdev] Indexed Load and Store Intrinsics - proposal
Hi Hao, I have started to upstream this, and the second patch is stalled under review now. - Elena -----Original Message----- From: Hao Liu [mailto:haoliuts at gmail.com] Sent: Friday, March 13, 2015 05:56 To: Demikhovsky, Elena Cc: llvmdev at cs.uiuc.edu Subject: Re: [LLVMdev] Indexed Load and Store Intrinsics - proposal Hi Elena, I think such intrinsics are very useful. Do you have any plan to
2014 Dec 24
2
[LLVMdev] Indexed Load and Store Intrinsics - proposal
----- Original Message ----- > From: "Ayal Zaks" <ayal.zaks at intel.com> > To: "Philip Reames" <listmail at philipreames.com>, dag at cray.com, "Elena Demikhovsky" <elena.demikhovsky at intel.com> > Cc: "Robert Khasanov" <robert.khasanov at intel.com>, llvmdev at cs.uiuc.edu > Sent: Monday, December 22, 2014 8:05:43 AM
2014 Oct 24
4
[LLVMdev] Adding masked vector load and store intrinsics
Hal Finkel <hfinkel at anl.gov> writes: > For the loads, I'm much less sure. Why can't we represent the loads as > select(mask, load(addr), passthru)? Because that does not specify the correct semantics. This formulation expects the load to happen before the mask is applied. The load could trap. The operation needs to be presented as an atomic unit. The same problem
2014 Oct 26
2
[LLVMdev] Masked vector intrinsics and name mangling
> On Oct 26, 2014, at 8:22 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > ----- Original Message ----- >> From: "Elena Demikhovsky" <elena.demikhovsky at intel.com> >> To: "Hal Finkel" <hfinkel at anl.gov> >> Cc: llvmdev at cs.uiuc.edu >> Sent: Sunday, October 26, 2014 10:17:49 AM >> Subject: RE: [LLVMdev] Masked vector
2014 Oct 26
2
[LLVMdev] Masked vector intrinsics and name mangling
Hal, thank you for your opinion. I was just confused when I saw such a long name: "llvm.masked.load.v16i32.p0i32.v16i32.i32.v16i1". If we stay with a short name, we take a step towards the instruction form. - Elena -----Original Message----- From: Hal Finkel [mailto:hfinkel at anl.gov] Sent: Sunday, October 26, 2014 17:06 To: Demikhovsky, Elena Cc: llvmdev at cs.uiuc.edu Subject: Re:
2014 Dec 24
2
[LLVMdev] Indexed Load and Store Intrinsics - proposal
----- Original Message ----- > From: "Xinmin Tian" <xinmin.tian at intel.com> > To: "Hal Finkel" <hfinkel at anl.gov>, "Ayal Zaks" <ayal.zaks at intel.com> > Cc: dag at cray.com, "Robert Khasanov" <robert.khasanov at intel.com>, llvmdev at cs.uiuc.edu > Sent: Tuesday, December 23, 2014 7:36:44 PM > Subject: RE:
2014 Oct 26
2
[LLVMdev] Masked vector intrinsics and name mangling
Hi, The proposed masked vector intrinsics are overloaded - one intrinsic ID for multiple types. After name mangling it will look like: %res = call <16 x i32> @llvm.masked.load.v16i32.p0i32.v16i32.i32.v16i1(i32* %addr, <16 x i32> %passthru, i32 4, <16 x i1> %mask) 6 types x 3 vector sizes = 18 names for one operation. I propose to remove name mangling from these intrinsics: %res
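For illustration, the same overloaded operation instantiated at two element types already produces two distinct mangled declarations (suffixes shown in the later, shorter opaque-pointer mangling rather than the 2014 form quoted above):

    ; One overloaded intrinsic ID, one mangled name per concrete type:
    declare <16 x i32>   @llvm.masked.load.v16i32.p0(ptr, i32, <16 x i1>, <16 x i32>)
    declare <8 x double> @llvm.masked.load.v8f64.p0(ptr, i32, <8 x i1>, <8 x double>)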
2016 Mar 04
2
Fwd: [PATCH] D17497: Support arbitrary address space for intrinsics
Per my previous email, I have just signed off on Artur's original patch. Philip On 03/02/2016 11:21 AM, Philip Reames via llvm-dev wrote: > Elena, > > I'd like to propose that we move forward with Artur's original patch > <http://reviews.llvm.org/D17270> and separate the discussion of how we > might change our intrinsic naming scheme. Artur's patch is
2016 Aug 07
2
enabling interleaved access loop vectorization
We checked the gathered data again. All regressions that we see are in 32-bit mode. The 64-bit mode looks good overall. - Elena From: Michael Kuperstein [mailto:mkuper at google.com] Sent: Saturday, August 06, 2016 02:56 To: Renato Golin <renato.golin at linaro.org> Cc: Demikhovsky, Elena <elena.demikhovsky at intel.com>; Matthew Simpson <mssimpso at codeaurora.org>;