Similar to: "Questions on LLVM vectorization diagnostics"

Displaying 20 results from an estimated 8000 matches similar to: "Questions on LLVM vectorization diagnostics"

2016 Aug 25
2
Questions on LLVM vectorization diagnostics
Hi, Gerolf. We've been a bit quiet for some time. After listening to feedback on the project internally and externally, we decided to adopt the more generally accepted community development model, building up through a collection of small incremental changes, rather than trying to make a big step forward. That change of course took a bit of time, but we are getting close to the first NFC patch
2016 Jun 23
4
Questions on LLVM vectorization diagnostics
Dear LLVM Community, I am D Tharun Kumar, a master's student at the Indian Institute of Technology Hyderabad, working in a team to improve the current vectorizer in LLVM. As an initial study, we are examining various benchmarks to analyze and compare the vectorizing capabilities of LLVM, GCC and ICC. We found that the vectorization remarks given by LLVM are vague and brief; by comparison, GCC and ICC are giving
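(For context, a minimal example of my own, not from the thread, showing how such remarks are requested from clang; the flags are the standard remark options for the loop vectorizer.)

/* saxpy.c -- a loop the vectorizer normally handles.
   Compile with, for example:
     clang -O3 -Rpass=loop-vectorize -Rpass-missed=loop-vectorize \
           -Rpass-analysis=loop-vectorize -c saxpy.c
   to see the passed/missed/analysis remarks being compared here. */
void saxpy(int n, float a, float *x, float *y) {
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}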
2016 Oct 10
2
On Loop Distribution pass
> On Oct 10, 2016, at 2:50 PM, Hal Finkel <hfinkel at anl.gov> wrote: > > > From: "Dangeti Tharun kumar via llvm-dev" <llvm-dev at lists.llvm.org> > To: llvm-dev at lists.llvm.org > Cc: "Santanu Das" <cs15mtech11018 at iith.ac.in>
2016 Aug 30
2
Questions on LLVM vectorization diagnostics
Hi Hideki, Thanks for the interesting writeup! > On Aug 27, 2016, at 7:15 AM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > On 25 August 2016 at 05:46, Saito, Hideki via llvm-dev > <llvm-dev at lists.llvm.org> wrote: >> Now, I have one question. Suppose we'd like to split the vectorization decision as an Analysis pass and vectorization
2016 Oct 09
3
On Loop Distribution pass
Dear community, Our team at IITH has been experimenting with the loop-distribution pass in LLVM. We see the following results on a few benchmarks. clang -O3 -mllvm -enable-loop-distribute -Rpass=loop-distribute file.c clang -O3 -mllvm -enable-loop-distribute -Rpass-analysis=loop-distribute file.c TORCH
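(An illustrative sketch of my own, not the IITH benchmark code, of the kind of loop the flags above target: the recurrence on a[] blocks vectorization of the whole loop, and distribution separates out the independent statement so it can be vectorized on its own.)

/* dist.c -- compile with, e.g.:
     clang -O3 -mllvm -enable-loop-distribute \
           -Rpass=loop-distribute -Rpass=loop-vectorize -c dist.c */
void f(int *a, int *b, int *c, int *d, int *e, int n) {
  for (int i = 0; i < n; ++i) {
    a[i + 1] = a[i] + b[i];   /* loop-carried dependence        */
    d[i] = c[i] * e[i];       /* independent, vectorizable part */
  }
}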
2019 Sep 27
3
Question on target-features
Ugh, that would be a “yes” then… -- Krzysztof Parzyszek kparzysz at quicinc.com AI tools development From: llvm-dev <llvm-dev-bounces at lists.llvm.org> On Behalf Of Krzysztof Parzyszek via llvm-dev Sent: Friday, September 27, 2019 10:05 AM To: Dangeti Tharun kumar <cs15mtech11002 at iith.ac.in>; llvm-dev at lists.llvm.org Subject: [EXT] Re:
2019 Jan 07
2
[Xray] Help with Xray
On Mon, Jan 7, 2019 at 3:21 PM Dean Michael Berris <dean.berris at gmail.com> wrote: > On Mon, Jan 7, 2019 at 8:43 PM Dangeti Tharun kumar > <cs15mtech11002 at iith.ac.in> wrote: > > > > Hi Dean, > > > > I have tried with -instr-map-1 and -instr-map-2, it didn't work. > > > > Yeah, I'm looking through the code and it looks like
2019 Jan 07
2
[Xray] Help with Xray
Hi Dean, I have tried with -instr-map-1 and -instr-map-2, but it didn't work. Is there a way to find the function name from the identifier? -DTharun On Mon, Jan 7, 2019 at 2:29 PM Dean Michael Berris <dean.berris at gmail.com> wrote: > Hi Dangeti, > > That's interesting -- can you try providing both `-instr-map-1=` and > `-instr-map-2=` even though they're the same
2018 Nov 02
2
XMMs unused
On Fri, Nov 2, 2018 at 3:31 PM Anton Korobeynikov <anton at korobeynikov.info> wrote: > > Yes, I am compiling for a Linux system. > > So the RA will not consider assigning a scratch register to a live range > crossing a function call, though it may reduce spills? > Well, it has to spill the register – otherwise it could be clobbered by a > call. Maybe I haven't
2019 Sep 27
2
Question on target-features
Hi, In "target-features" list in LLVM-IR, there are "+feature", "-feature". My question is, does "-feature" is equivalent to not specifying a feature at all? For example: *attributes #0 = { "target-cpu"="znver2" "target-features"="+avx -avx2" }* Wheather it is equalent to omitting the avx2 from list? *attributes #0
2019 Jun 24
4
RFC: Interface user provided vector functions with the vectorizer.
@Xinmin, Saito: If Clang/the frontend generates the version, there is no problem, or is there? The frontend already knows about the original source type and its ABI-specific lowering. @Francesco, we should even consider putting the generating capabilities outside of the OpenMP code generation (in the future). That could allow easier reuse by other frontends.
2016 May 18
2
Working on FP SCEV Analysis
>What situations are they common in? The ICC vectorizer made a paradigm shift a while ago: if there isn’t a clear reason why something can’t be vectorized, we should try our best to vectorize. The rest is a performance-modeling (and implementation-priority) question, not a capability question. We believe this is a good paradigm to follow in vectorizer development. It was a big departure from
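(A small sketch of my own of the floating-point induction variables this FP SCEV discussion is about; loops of this shape are the capability-vs-cost-modeling cases being debated.)

/* fpiv.c -- x is a floating-point induction variable and s a
   floating-point reduction; analysing the evolution of x is what
   an FP SCEV (or FP induction) facility would provide. */
float sum_steps(float step, int n) {
  float x = 0.0f, s = 0.0f;
  for (int i = 0; i < n; ++i) {
    s += x;      /* FP reduction  */
    x += step;   /* FP induction  */
  }
  return s;
}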
2018 Jul 02
8
[RFC][VECLIB] how should we legalize VECLIB calls?
On 07/02/2018 04:33 PM, Saito, Hideki wrote: > >It may not be a full solution for the problems you're trying to solve > If we are inventing a new solution, I’d like it also to solve the OpenMP declare simd legalization issue. If a small extension of the existing scheme works for mathlib only, I’m happy to take that and discuss OpenMP
2019 Jun 24
2
RFC: Interface user provided vector functions with the vectorizer.
>Thank you everybody for their input, and for your patience. This is proving harder than expected! :) Thank you for doing the hard part of the work. Hideki -----Original Message----- From: Francesco Petrogalli [mailto:Francesco.Petrogalli at arm.com] Sent: Monday, June 24, 2019 11:26 AM To: Saito, Hideki <hideki.saito at intel.com> Cc: Doerfert, Johannes <jdoerfert at anl.gov>;
2019 Jun 24
2
RFC: Interface user provided vector functions with the vectorizer.
For example, in the Type 2 case, scalar-foo used call by value while vector-foo used call by reference. The question Johannes is asking is whether we can decipher that after the fact, only by looking at the two function signatures, or whether we need some more info (what kind, and what's minimal)? I think we need to list the cases of interest, and for each vector ABI of interest, we need to work on the requirements and
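(A sketch of my own of the kind of user-provided vector function the RFC is about, using OpenMP declare simd; the mangled variant name in the comment is only illustrative of the x86 Vector Function ABI naming and is not taken from the thread.)

/* declare_simd.c -- compile with -fopenmp-simd (or -fopenmp).
   Scalar declaration plus a vector variant implied by the declare
   simd directive; frontend and vectorizer must agree on the
   variant's signature (by value vs. by reference) and mangled name. */
#pragma omp declare simd simdlen(4) notinbranch
float foo(float x, float y);

void caller(float *a, float *b, float *c, int n) {
  #pragma omp simd
  for (int i = 0; i < n; ++i)
    c[i] = foo(a[i], b[i]);   /* may become a call to a variant such
                                 as _ZGVdN4vv_foo under the ABI */
}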
2018 Jul 02
2
[RFC][VECLIB] how should we legalize VECLIB calls?
It may not be a full solution for the problems you're trying to solve, but I don't know why adding to include/llvm/CodeGen/RuntimeLibcalls.def is a problem in itself. Certainly, it's a mess that could be organized, especially so that we're not repeating everything for each data type as we do right now. So yes, I think that would allow us to remove the VecLib mappings, because we are
2016 Jun 30
0
[Proposal][RFC] Strided Memory Access Vectorization
One common concern raised for cases where the Loop Vectorizer generates bigger types than the target supports: based on the VF, we currently check the cost and generate the expected set of instruction[s] for the bigger type. This has two challenges: for bigger types the cost is not always correct, and code generation may not produce efficient instruction[s]. This could probably rely on the support provided by the RFC below by
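(A minimal sketch of my own of the strided accesses the proposal is about: the load and store below touch memory with strides 2 and 3, which today are typically handled with interleaved-access or gather/scatter style code rather than a dedicated strided access.)

/* stride.c -- non-unit-stride memory accesses. */
void strided(const float *a, float *b, int n) {
  for (int i = 0; i < n; ++i)
    b[3 * i] = a[2 * i] * 2.0f;   /* load stride 2, store stride 3 */
}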
2018 Jul 04
2
[RFC][VECLIB] how should we legalize VECLIB calls?
Hi, On 4 July 2018 at 07:42, Nema, Ashutosh via llvm-dev <llvm-dev at lists.llvm.org> wrote: > + llvm-dev > > -----Original Message----- > From: Nema, Ashutosh > Sent: Wednesday, July 4, 2018 12:12 PM > To: Hal Finkel <hfinkel at anl.gov>; Saito, Hideki <hideki.saito at intel.com>; > Sanjay Patel <spatel at rotateright.com>; mzolotukhin at apple.com
2018 Jul 02
2
[RFC][VECLIB] how should we legalize VECLIB calls?
Adding to Ashutosh's comments, we are also interested in making LLVM generate the vector math library calls that are available with glibc (version > 2.22). Reference: https://sourceware.org/glibc/wiki/libmvec Using the example case given in the reference, we found that there are two vector versions of "sin" (4 x double) with the same VF, namely the _ZGVcN4v_sin (AVX) version and _ZGVdN4v_sin
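(A sketch of my own of the kind of loop the libmvec mapping targets; with a suitable veclib/TLI mapping the scalar sin() call can be replaced by a vector variant such as the _ZGVdN4v_sin mentioned above.)

/* vecsin.c -- candidate loop for vector math library calls. */
#include <math.h>
void vsin(double *out, const double *in, int n) {
  for (int i = 0; i < n; ++i)
    out[i] = sin(in[i]);   /* may vectorize to e.g. _ZGVdN4v_sin */
}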
2016 Jun 18
2
[Proposal][RFC] Strided Memory Access Vectorization
>Vectorizer's output should be as clean as vector code can be so that analyses and optimizers downstream can >do a great job optimizing. I guess I should clarify this philosophical position of mine. In terms of vector code optimization that complicates the output of the vectorizer: if the vectorizer is the best place to perform the optimization, it should do so. This includes cases like