similar to: Suggestions on code generation for SIMD

Displaying 20 results from an estimated 5000 matches similar to: "Suggestions on code generation for SIMD"

2018 Jan 08
0
Suggestions on code generation for SIMD
> On 6 Jan 2018, at 00:26, Linchuan Chen via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Hi everyone,
>
> I'm quite new to LLVM, but am working on a project that might need to generate some SIMD code using LLVM. The SIMD code will be using INTEL MIC intrinsics and I'm not sure about the
> steps and tool set that I need to use to generate those.
>
> I
2018 Jan 08
2
Suggestions on code generation for SIMD
Thanks Amara so much for the info!

One more question: what do people usually do if they want to generate vectorized code for some existing C/C++ code? Do they usually do C/C++ source level transformation, or do it at LLVM's IR level? I know clang supports auto-vectorization, such as loop vectorization and SLP, but they are not flexible enough if we want to do more custom vectorizations or
2018 Jan 08
0
Suggestions on code generation for SIMD
> On 8 Jan 2018, at 19:41, Linchuan Chen <chenlinc at cse.ohio-state.edu> wrote:
>
> Thanks Amara so much for the info!
>
> One more question: what do people usually do if they want to generate vectorized code for some existing C/C++ code?
> Do they usually do C/C++ source level transformation, or do it at LLVM's IR level?
>
> I know clang supports auto
2018 Jan 08
2
Suggestions on code generation for SIMD
Thanks Amara very much! I will take a look!

On Mon, Jan 8, 2018 at 12:01 PM, Amara Emerson via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> On 8 Jan 2018, at 19:41, Linchuan Chen <chenlinc at cse.ohio-state.edu> wrote:
>
> Thanks Amara so much for the info!
>
> One more question: what do people usually do if they want to generate
> vectorized code for some
2018 Jan 09
0
Suggestions on code generation for SIMD
> The vast majority of the time people will rely on source level pragmas [1],
> LLVM IR is designed to be machine friendly, not something intended for
> users to manually edit themselves. You can do it, but it’s tedious and
> error prone. If you need more control over the vectorisation than the
> pragmas allow, then the C intrinsics are the best choice.
>
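To make the two options above concrete, here is an illustrative sketch (not code from the thread): the first function leaves the work to Clang's loop vectorizer and merely hints at it with a loop pragma, while the second spells out the vectors by hand with x86 AVX intrinsics, assuming AVX is available and n is a multiple of 8.

#include <immintrin.h>

/* Pragma-driven: ask Clang's loop vectorizer to vectorize the loop. */
void add_pragma(float *restrict dst, const float *restrict a,
                const float *restrict b, int n)
{
    #pragma clang loop vectorize(enable)
    for (int i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}

/* Intrinsics: explicit 256-bit AVX vectors; n assumed to be a multiple of 8. */
void add_avx(float *restrict dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
}

The OpenMP form, "#pragma omp simd" built with -fopenmp-simd, is the portable spelling of the same hint.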
2018 Jan 10
1
Suggestions on code generation for SIMD
Thanks Serge! This means that for every new intrinsic set, a systematic change has to be made to LLVM to support it, right? The change would include frontend changes, IR instruction set changes, as well as low-level code generation changes?

On Tue, Jan 9, 2018 at 12:39 AM, serge guelton via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> The vast majority of the
2019 Jun 03
2
[EXT] Re: [RFC][SVE] Supporting SIMD instruction sets with variable vector lengths
Hi Graham,

Thanks for your kind explanation. There was internal discussion about it. If possible, could you point me to the Clang/LLVM CodeGen patches for the vector type on Phabricator? I would like to check what kinds of restrictions the type imposes on Clang/LLVM.

Thanks,
JinGu Kang

________________________________
From: Graham Hunter <Graham.Hunter at arm.com>
Sent: 28 May
2019 May 27
2
[EXT] Re: [RFC][SVE] Supporting SIMD instruction sets with variable vector lengths
Hi All,

I have read the links from Joel. It seems one of its main focuses is the vectorization of loops with vector predicate registers. I am not sure we need the scalable vector type for it. Let's see a simple example from the white paper:

void example01(int *restrict a, const int *b, const int *c, long N)
{
  long i;
  for (i = 0; i < N; ++i)
    a[i] = b[i] + c[i];
}
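For comparison (this is not part of the original message, just an illustration): a vector-length-agnostic version of example01 can be written by hand with the Arm C Language Extensions (ACLE) for SVE, assuming an SVE-enabled compiler and target. Roughly:

#include <arm_sve.h>

/* Hand-vectorized sketch of example01 using ACLE SVE intrinsics.
 * svcntw() is the number of 32-bit lanes in one SVE vector (unknown at
 * compile time); the svwhilelt predicate masks off the loop tail. */
void example01_sve(int *restrict a, const int *b, const int *c, long N)
{
    for (long i = 0; i < N; i += svcntw()) {
        svbool_t  pg = svwhilelt_b32(i, N);
        svint32_t vb = svld1(pg, b + i);
        svint32_t vc = svld1(pg, c + i);
        svst1(pg, a + i, svadd_x(pg, vb, vc));
    }
}

The code never needs to know the hardware vector length, which is the property the proposed scalable vector type is meant to expose at the IR level.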
2019 May 24
2
[EXT] Re: [RFC][SVE] Supporting SIMD instruction sets with variable vector lengths
JinGu: I’m not Graham, but you might find the following link a good starting point. https://community.arm.com/developer/tools-software/hpc/b/hpc-blog/posts/technology-update-the-scalable-vector-extension-sve-for-the-armv8-a-architecture The question you ask doesn’t have a short answer. The compiler and the instruction set design work together to allow programs to be compiled without knowing
2017 Jul 06
3
[RFC][SVE] Supporting Scalable Vector Architectures in LLVM IR (take 2)
On 6 July 2017 at 23:13, Chris Lattner <clattner at nondot.org> wrote:
>> Yes, as an extension to VectorType they can be manipulated and passed
>> around like normal vectors, load/stored directly, phis, put in llvm
>> structs etc. Address computation generates expressions in terms of vscale
>> and it seems to work well.
>
> Right, that works out through
2016 Nov 30
5
[RFC] Enable "#pragma omp declare simd" in the LoopVectorizer
Dear all,

I have just created a couple of differential reviews to enable the vectorisation of loops that have function calls to routines marked with “#pragma omp declare simd”. They can be (re)viewed here:

* https://reviews.llvm.org/D27249
* https://reviews.llvm.org/D27250

The current implementation allows the loop vectorizer to generate vector code for a source file such as: #pragma omp declare
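The code example in the excerpt above is cut off; the general shape of such a source file (an illustrative reconstruction, not the exact snippet from the reviews) is:

#pragma omp declare simd
float foo(float x);

void caller(float *restrict out, const float *restrict in, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        out[i] = foo(in[i]);
}

Built with -fopenmp (or -fopenmp-simd), the declare simd pragma tells the compiler that vector variants of foo() exist or should be emitted, following a vector-function name-mangling ABI (see the follow-ups in this thread), so the loop vectorizer can call them from the vectorized loop.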
2016 Dec 08
6
[RFC] Enable "#pragma omp declare simd" in the LoopVectorizer
Hi Francesco, a bit more information. GCC veclib is implemented based on the GCC VectorABI for declare simd as well. For name mangling, we have to follow certain rules of C/C++ (e.g. the prefix needs to be _ZVG ....). David Majnemer is the owner and stakeholder for approval for Clang and LLVM. Also, we need to pay attention to GCC compatibility. I would suggest you look into how the GCC VectorABI can
2018 Jan 05
2
RFC: [LV] any objections in moving isLegalMasked* check from Legal to CostModel? (Cleaning up LoopVectorizationLegality)
All,

I'm trying to refactor LoopVectorize such that it has better conformance to the VPlan vision going forward (http://www.llvm.org/docs/Proposals/VectorizationPlan.html). All VP*Recipe class definitions are now moved to VPlan.h, and I have a patch under review to move the LoopVectorizationPlanner class out of LoopVectorize.cpp (https://reviews.llvm.org/D41420). The next thing I'm working on is
2016 Dec 12
0
[RFC] Enable "#pragma omp declare simd" in the LoopVectorizer
Hi Xinmin,

I have updated the clang patch using the standard name mangling you suggested - I was not fully aware of the C++ mangling convention “_ZVG”. I am using “D” for 64-bit NEON and “Q” for 128-bit NEON, which makes NEON vector symbols look as follows:

_ZVGQN2v__Z1fd
_ZVGDN2v__Z1ff
_ZVGQN4v__Z1ff

Here “Q” means NEON 128-bit, “D” means NEON 64-bit. Please notice that although
2018 Jan 06
2
RFC: [LV] any objections in moving isLegalMasked* check from Legal to CostModel? (Cleaning up LoopVectorizationLegality)
Amara,

> I support this direction

Thanks for the support.

> but are there actually any real world workloads where gather/scatter scalarisation would be worth it, on any micro-architecture? If we don’t have examples and the compile time cost is non-negligible then I think we’d still like to keep the early
> bailouts in some form.

It's not like I have specific application code in
2017 Feb 01
2
RFC: Generic IR reductions
> One that we have had multiple times and the usual consensus is: if it can be represented in plain IR, it must. Adding multiple semantics for the same concept, especially stiff ones like builtins, adds complexity to the optimiser.
> Regardless of the merits in this case, builtins should only be introduced IFF there is no other way. So first we should discuss adding it to IR with generic
2017 Dec 30
2
Issues with omp simd
Hello,

I am trying to optimize an omp simd loop as follows:

int main(int argc, char **argv)
{
  const int size = 1000000;
  float a[size], b[size], c[size];

  #pragma omp simd
  for (int i = 0; i < size; ++i) {
    c[i] = a[i] + b[i];
  }
  return 0;
}

I run it using the following command:

g++ -O0 --std=c++14 -fopenmp-simd lab.cpp -Iinclude -S -o lab.s
2015 Feb 09
2
[LLVMdev] aarch64 status for generating SIMD instructions
I'm using Fedora 22 and gcc 4.9.2 to run llvm 3.5.1 on an ARM Juno reference box (cortex A53 & A57). I tried compiling some simple functions like dot product and axpy() into assembly to see if any of the SIMD instructions were generated (they weren't). Perhaps I'm missing some compiler flag to enable it. Does anyone know what the status is for aarch64 generating SIMD instructions?
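For reference (not from the original message): AdvSIMD/NEON is part of the base AArch64 target and enabled by default, so whether the compiler emits SIMD instructions mostly comes down to the optimization level and whether it can prove the loop safe to vectorize. A kernel along the lines below normally vectorizes at -O3 with gcc (clang also vectorizes at -O2), and the restrict qualifiers matter because they rule out aliasing between the arrays. A floating-point dot product additionally needs reassociation to be allowed (e.g. -ffast-math), since the reduction reorders the additions.

/* Illustrative axpy kernel, not the original poster's code. */
void axpy(float a, const float *restrict x, float *restrict y, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}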
2018 Jan 05
0
RFC: [LV] any objections in moving isLegalMasked* check from Legal to CostModel? (Cleaning up LoopVectorizationLegality)
> On 5 Jan 2018, at 21:01, Saito, Hideki via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>
> All,
>
> I'm trying to refactor LoopVectorize such that it has better conformance to the VPlan vision going forward
> (http://www.llvm.org/docs/Proposals/VectorizationPlan.html). All VP*Recipe class definitions are now
> moved to VPlan.h, and I have a patch under review
2017 Jan 31
4
RFC: Generic IR reductions
+cc Simon who's also interested in reductions for the any_true, all_true predicate vectors.

On 31 January 2017 at 20:19, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Hi Amara,
>
> We also had some discussions on the SVE side of reductions on the main
> SVE thread, but this description is much more detailed than we had
> before.
>
> I don't