similar to: [RFC] IR-level Region Annotations

Displaying 20 results from an estimated 10000 matches similar to: "[RFC] IR-level Region Annotations"

2017 Jan 19
3
[RFC] IR-level Region Annotations
On Thu, Jan 19, 2017 at 1:12 PM, Mehdi Amini <mehdi.amini at apple.com> wrote: > > On Jan 19, 2017, at 12:04 PM, Daniel Berlin <dberlin at dberlin.org> wrote: > > > > On Thu, Jan 19, 2017 at 11:46 AM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote: > >> >> > On Jan 19, 2017, at 11:36 AM, Adve, Vikram Sadanand via llvm-dev
2017 Jan 19
2
[RFC] IR-level Region Annotations
On Thu, Jan 19, 2017 at 11:46 AM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > > On Jan 19, 2017, at 11:36 AM, Adve, Vikram Sadanand via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > > > Hi Johannes, > > > >> I am especially curious where you get your data from. Tapir [0] (and to > >> some degree PIR [1]) have
2017 Jan 20
2
[RFC] IR-level Region Annotations
On 01/19/2017 03:36 PM, Mehdi Amini via llvm-dev wrote: > >> On Jan 19, 2017, at 1:32 PM, Daniel Berlin <dberlin at dberlin.org> wrote: >> >> >> >> On Thu, Jan 19, 2017 at 1:12 PM, Mehdi Amini <mehdi.amini at apple.com> wrote: >> >> >>>
2017 Jan 20
9
[RFC] IR-level Region Annotations
Hi Sanjoy, Yes, that's exactly what we have been looking at recently here, but the region tags seem to make it possible to express the control flow as well, so I think we could start with regions+metadata, as Hal and Xinmin proposed, and then figure out what needs to be first-class instructions. --Vikram Adve > On Jan 19, 2017, at 11:03 PM, Sanjoy Das <sanjoy at
2017 Jan 28
3
[RFC][PIR] Parallel LLVM IR -- Stage 0 -- IR extension
Dear all, This RFC proposes three new LLVM IR instructions to express high-level parallel constructs in a simple, low-level fashion. For this first stage we prepared two commits that add the proposed instructions and a pass to lower them to obtain sequential IR. Both patches have been uploaded for review [1, 2]. The latter patch is very simple and the former consists of almost only mechanical
2017 Mar 08
5
(no subject)
Ping. PS. Are there actually people interested in this? We will continue working anyway, but it might not make sense to put it up for review and announce it on the ML if nobody cares. On 02/24,
2017 Mar 08
3
(no subject)
> On Mar 8, 2017, at 10:55 AM, Mehdi Amini <mehdi.amini at apple.com> wrote: > >> >> On Mar 8, 2017, at 5:36 AM, Johannes Doerfert <doerfert at cs.uni-saarland.de> wrote: >> >> Subject: Re: [llvm-dev] [RFC][PIR] Parallel LLVM IR -- Stage 0 -- IR extension
2017 Mar 08
3
(no subject)
A quick update: we have been looking through all LLVM passes to identify the impact of "IR-region annotation" and interaction issues with the rest of LoopOpt and ScalarOpt, e.g. the interaction with vectorization when you have schedule(simd:guided, 64), and what common properties the optimizer needs to know about IR-region annotations. We have our implementation working at O0, O1, O2 and O3.
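For reference, the clause mentioned above is OpenMP's simd schedule modifier. A minimal C sketch of a loop carrying it follows; the function and array names are illustrative, not from the thread:

    #include <stddef.h>

    /* Guided schedule with chunk size 64; the simd modifier asks the runtime
       to hand out chunk sizes that are multiples of the simd width, which is
       exactly where the vectorizer and the annotated region interact. */
    void saxpy(float *a, const float *b, float s, size_t n) {
        #pragma omp parallel for simd schedule(simd:guided, 64)
        for (size_t i = 0; i < n; i++)
            a[i] = a[i] + s * b[i];
    }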
2017 Mar 08
4
(no subject)
".... the problem Mehdi pointed out regarding the missed initializations of array elements, did you comment on that one yet?" What is the initializations of array elements question? I don't remember this question. Please refresh my memory. Thanks. I thought Mehdi's question is more about what are attributes needed for these IR-annotation for other LLVM pass to understand and
2017 Mar 08
2
(no subject)
On 03/08/2017 12:44 PM, Johannes Doerfert wrote: > I don't know who pointed it out first but Mehdi made me aware of it at > CGO. I'll try to explain it briefly. > > Given the following situation (in pseudo code): > > alloc A[100]; > parallel_for(i = 0; i < 100; i++) > A[i] = f(i); > > acc = 1; > for(i = 0; i < 100; i++) > acc = acc *
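To make the pattern concrete, here is a self-contained C/OpenMP version of that pseudo code. The archive snippet truncates the reduction body, so the multiply-accumulate line and the definition of f are assumptions for illustration only:

    #include <stdio.h>

    /* Stand-in for the f(i) in the snippet; values stay in {-1, 1} so the
       product does not overflow. */
    static int f(int i) { return (i % 2) ? -1 : 1; }

    int main(void) {
        int A[100];

        /* Each element is written by exactly one iteration, so the parallel
           loop is race free and A is fully initialized afterwards. */
        #pragma omp parallel for
        for (int i = 0; i < 100; i++)
            A[i] = f(i);

        /* Sequential consumer: correct only if the optimizer understands that
           the region above initialized every element of A. */
        int acc = 1;
        for (int i = 0; i < 100; i++)
            acc = acc * A[i];   /* assumed completion of the truncated line */

        printf("%d\n", acc);
        return 0;
    }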
2017 Mar 08
2
(no subject)
The IR-region annotation we proposed is as below; there is no @llvm.parallel.for.iterator()..... There is no change to the loop CFG. alloc A[100]; %t = call token @llvm.region.entry()["parallel.for"()] for(i = 0; i < 100; i++) { a[i] = f(i); } @llvm.region.exit(%t)() ["end.parallel.for"()] Xinmin -----Original Message----- From: Johannes Doerfert
2017 Mar 08
3
[RFC][PIR] Parallel LLVM IR -- Stage 0 --
I assume the case being referred to is something like below, right? #pragma omp parallel num_threads(n) { #pragma omp critical { x = x + 1; } } If that is the case, the programmer is already writing code that is not "serial equivalent". Our representation for the parallelizer is %t = @llvm.region.entry()["omp.parallel"(),
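A compilable C version of that example follows; the thread count is illustrative. The point about serial equivalence is that the final value of x depends on num_threads(n), so running the region body once on a single thread does not reproduce the parallel semantics:

    #include <stdio.h>

    int main(void) {
        int x = 0;
        int n = 4;                      /* illustrative thread count */

        #pragma omp parallel num_threads(n)
        {
            #pragma omp critical
            { x = x + 1; }              /* each thread increments once */
        }

        /* With n threads, x ends up as n; a single serial execution of the
           region body would yield 1, hence "not serial equivalent". */
        printf("%d\n", x);
        return 0;
    }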
2017 Mar 08
2
[RFC][PIR] Parallel LLVM IR -- Stage 0 --
> On Mar 8, 2017, at 11:50 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > > On 03/08/2017 01:24 PM, Tian, Xinmin wrote: >> I assume the referring case is something like below, right? >> >> #pragma omp parallel num_threads(n) >> { >> #pragma omp critical >> { >> x = x + 1; >> } >> }
2017 Jan 21
2
[RFC] IR-level Region Annotations
> On Jan 20, 2017, at 11:17 AM, Tian, Xinmin <xinmin.tian at intel.com> wrote: > >>>>> This means that the optimizer has to be aware of it, I’m missing the magic here? > > This is one option. > > The other option is, as I mentioned in our LLVM-HPC paper, that in our implementation we have a "prepare phase for pre-privatization" that can be invoked
2017 Jan 20
5
[RFC] IR-level Region Annotations
> On Jan 20, 2017, at 10:44 AM, Tian, Xinmin via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Sanjoy, the IR would be something like below. It is OK to hoist the alloca instruction outside the region. There are some small changes in the optimizer to understand the region-annotation intrinsic. > > { void main() { > i32* val = alloca i32 > tok =
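At the source level, the situation corresponds roughly to a local variable declared inside the parallel construct, whose stack slot (the alloca) ends up above the region-entry intrinsic once it is hoisted. A hypothetical C sketch with made-up names:

    #include <stdio.h>

    void work(void) {
        #pragma omp parallel
        {
            /* 'val' is local to the region; in the proposed IR its backing
               alloca may be hoisted above the region-entry intrinsic, provided
               passes understand the region annotation (and privatization is
               handled, e.g. by the prepare phase discussed elsewhere in the
               thread). */
            int val = 0;
            val = val + 1;
            printf("%d\n", val);
        }
    }

    int main(void) { work(); return 0; }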
2017 Feb 01
2
[RFC] IR-level Region Annotations
> On Jan 31, 2017, at 5:38 PM, Tian, Xinmin <xinmin.tian at intel.com> wrote: > >>>>> Ok, but this looks like a "workaround" for your specific use-case, I don’t see how it can scale as a model-agnostic and general-purpose region semantic. > > I would say it is a design trade-off. I’m not sure if we’re talking about the same thing here: my understanding at
2017 Jan 20
3
[RFC] IR-level Region Annotations
Hi, I'm going to club together some responses. I agree that outlining function sub-bodies and passing pointers to the outlined bodies to OpenMP helper routines lets us correctly implement the semantics we need. However, unless I severely misunderstood the thread, I thought the key idea was to move *away* from that representation and towards a representation that _allows_
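For contrast, the outlining-based representation being referred to looks roughly like the C sketch below. The runtime entry point rt_parallel_fork is a made-up stand-in rather than the real OpenMP runtime API, and it runs the body once so the sketch stays self-contained:

    #include <stdio.h>

    /* Hypothetical stand-in for the runtime's fork call; a real runtime would
       run 'body' on a team of threads. */
    static void rt_parallel_fork(void (*body)(void *), void *captures) {
        body(captures);
    }

    struct captures { int *a; int n; };

    /* Outlined body of what used to be the parallel region. */
    static void region_body(void *p) {
        struct captures *c = p;
        for (int i = 0; i < c->n; i++)
            c->a[i] = i;                /* illustrative work */
    }

    int main(void) {
        int a[8];
        struct captures c = { a, 8 };
        /* The front end replaces the region with an opaque call through a
           function pointer, which is what hides the region from the optimizer. */
        rt_parallel_fork(region_body, &c);
        printf("%d\n", a[7]);
        return 0;
    }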
2017 Feb 01
0
[RFC] IR-level Region Annotations
>>>>Ok, but this looks like a "workaround" for your specific use-case, I don’t see how it can scale as a model-agnostic and general-purpose region semantic. I would say it is a design trade-off. Regardless of whether it is a new instruction or an intrinsic with a token/tag, it will consist of a model-agnostic part and a model-specific part. The package comes with a framework for parsing
2017 Feb 01
0
[RFC] IR-level Region Annotations
Let me try this. You can simply consider that the prepare phase (e.g. pre-privatization) was done in the FE (actually, a library can be used by multiple FEs at the LLVM IR level), that the region is run with 1 thread, and that the region annotation (scope, single-entry-single-exit) acts as a memory barrier, conservatively for now (instead of checking individual memory dependencies, aliasing via tags, which is the actual
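As a rough source-level analogy for what pre-privatization does (the actual prepare phase operates on LLVM IR, and these names are made up): a temporary that every thread would otherwise write is given a per-iteration, per-thread copy before the region is handed to the rest of the optimizer:

    /* Before pre-privatization: 'tmp' is shared across threads, so the region
       cannot safely be reasoned about as if it ran on one thread. */
    void before(int *a, int n) {
        int tmp;
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            tmp = a[i] * 2;
            a[i] = tmp + 1;
        }
    }

    /* After pre-privatization: 'tmp' is private to each iteration, so a
       single-threaded reading of the region matches its parallel execution. */
    void after(int *a, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            int tmp = a[i] * 2;
            a[i] = tmp + 1;
        }
    }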
2017 Feb 01
1
[RFC] IR-level Region Annotations
> On Jan 31, 2017, at 6:48 PM, Tian, Xinmin <xinmin.tian at intel.com> wrote: > > Let me try this. > > You can simply consider that the prepare phase (e.g. pre-privatization) was done in the FE (actually, a library can be used by multiple FEs at the LLVM IR level), that the region is run with 1 thread, and that the region annotation (scope, single-entry-single-exit) acts as a memory barrier, conservatively