similar to: [LLVMdev] SOA / Lane Packing Compilation with LLVM

Displaying 20 results from an estimated 1400 matches similar to: "[LLVMdev] SOA / Lane Packing Compilation with LLVM"

2010 Apr 27
5
[LLVMdev] PTX target for LLVM!
Hey everybody, good news for everyone interested in the PTX backend: We decided to release the current source code under the GPL - you can find the latest tarball here: http://www.prog.uni-saarland.de/projects/anysl You will find the README in the attachment, which should hopefully answer a lot of questions concerning the implementation and the current status. If you have further questions,
2011 Aug 29
0
[LLVMdev] PTX target for LLVM!
Hi everyone, I downloaded the latest version of the LLVM PTX backend from http://www.prog.uni-saarland.de/projects/anysl and made the required changes to all the files mentioned in the README. But I get the following error when I compile it: llvm[3]: Compiling PTXBackend.cpp for Release build In file included from PTXBackend.h:70:0, from PTXBackend.cpp:36: PTXPasses.h: In constructor
2012 Oct 05
12
[LLVMdev] LLVM Loop Vectorizer
Hi, We are starting to work on an LLVM loop vectorizer. There are a number of different projects that already vectorize LLVM IR. For example Hal's BB-Vectorizer, Intel's OpenCL Vectorizer, Polly, ISPC, AnySL, just to name a few. I think that it would be great if we could collaborate on the areas that are shared between the different projects. I think that refactoring LLVM in a way that
2012 Oct 05
0
[LLVMdev] LLVM Loop Vectorizer
I think we should try to abstract the costs of instructions of various targets instead of trying to replicate them exactly. The coarser the costing infrastructure, the more robust the vectorization pass will be. Also, this eliminates/reduces the need to update the costing infrastructure as and when new h/w reduces the cost(s) of existing instructions. - Dibyendu -----Original Message----- From:
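The coarse-cost idea above can be illustrated with a small standalone sketch (hypothetical code, not LLVM's actual cost interfaces): instruction kinds map to rough cost buckets that a target can override, rather than exact per-instruction latencies that have to track every hardware revision.

// Hypothetical sketch (not LLVM's real cost API): a coarse, per-class cost
// table a vectorizer could query instead of replicating exact target latencies.
#include <cstdint>
#include <map>

// Coarse buckets rather than exact cycle counts; the point from the thread is
// that coarser costs need less maintenance as hardware evolves.
enum class OpClass { SimpleALU, Mul, Div, Load, Store, Shuffle };

struct CoarseCostModel {
  // Default costs; a target overrides a handful of entries instead of
  // describing every instruction exactly.
  std::map<OpClass, unsigned> Cost = {
      {OpClass::SimpleALU, 1}, {OpClass::Mul, 2},   {OpClass::Div, 8},
      {OpClass::Load, 2},      {OpClass::Store, 2}, {OpClass::Shuffle, 1}};

  // Cost of one operation at a given vector width, assuming wider-than-native
  // vectors are split into SplitFactor native-width operations.
  unsigned getCost(OpClass C, unsigned VectorWidth, unsigned NativeWidth) const {
    unsigned SplitFactor = (VectorWidth + NativeWidth - 1) / NativeWidth;
    return Cost.at(C) * SplitFactor;
  }
};

A target that is happy with the defaults supplies nothing; one with, say, cheap division overrides only that entry.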
2012 Jul 23
2
[LLVMdev] Differences and Relationship between VLIW scheduler and VLIW packetizer?
Hi, I notice that there exist some classes for VLIW packetizing and other classes for VLIW scheduling. Apparently these classes share something in common. Can someone explain why they should have separate implementations (i.e., in different function passes)? Best regards. -- 杨勇勇 (Yang Yongyong)
2012 Oct 05
0
[LLVMdev] LLVM Loop Vectorizer
----- Original Message ----- > From: "Nadav Rotem" <nrotem at apple.com> > To: "llvmdev at cs.uiuc.edu Mailing List" <llvmdev at cs.uiuc.edu> > Sent: Friday, October 5, 2012 1:14:47 AM > Subject: [LLVMdev] LLVM Loop Vectorizer > > Hi, > > We are starting to work on an LLVM loop vectorizer. There's number of > different projects that
2012 Oct 05
1
[LLVMdev] LLVM Loop Vectorizer
Why not just have a hook into the TargetInstrInfo to query for the cost of an instruction? This is already used in many places throughout the optimizers. > -----Original Message----- > From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] > On Behalf Of Das, Dibyendu > Sent: Friday, October 05, 2012 2:00 AM > To: Nadav Rotem; llvmdev at cs.uiuc.edu Mailing
2012 Oct 05
2
[LLVMdev] LLVM Loop Vectorizer
----- Original Message ----- > From: "Dibyendu Das" <Dibyendu.Das at amd.com> > To: "Nadav Rotem" <nrotem at apple.com>, "llvmdev at cs.uiuc.edu Mailing List" <llvmdev at cs.uiuc.edu> > Sent: Friday, October 5, 2012 3:59:56 AM > Subject: Re: [LLVMdev] LLVM Loop Vectorizer > > I think we should try to abstract the costs of
2012 Jul 25
2
[LLVMdev] VLIW code generation for LLVM backend
Hi, It seems the only VLIW target in LLVM 3.2 devel, Hexagon, uses a straightforward way to emit its VLIW-style asm code. It uses a list scheduler to schedule on the DAG and a simple packetizer to wrap the emitted asm instructions. Both scheduling and packetizing work on basic blocks. So, is there any plan to implement better optimization methods such as trace scheduling, software pipelining, ...
2012 Jul 23
0
[LLVMdev] Differences and Relationship between VLIW scheduler and VLIW packetizer?
Hi Yang, They have different implementations because they don't do the same thing and don't rely on the same structures. VLIW scheduling works on the SelectionDAG, right after instruction selection, and it will schedule the DAG but it will not build any packets. The VLIW packetizer has been designed to work with machine instructions, using the ScheduleDAGInstr, and it does build
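To make the division of labour concrete, here is a deliberately simplified, self-contained model of the packetizing step (it is not LLVM's VLIWPacketizerList or DFAPacketizer): it walks an already-scheduled instruction list and greedily closes a bundle whenever a functional-unit slot runs out or an in-packet dependence appears.

// Illustrative, simplified packetizer model -- plain data structures only.
#include <cassert>
#include <set>
#include <string>
#include <vector>

struct MI {                      // stand-in for a machine instruction
  int Id;
  std::string Unit;              // required functional unit, e.g. "ALU" or "MEM"
  std::vector<int> DependsOn;    // ids of instructions this one depends on
};

// Greedily groups an already-scheduled sequence into packets. 'Slots' is the
// per-packet template of available functional-unit slots; it is assumed to
// contain at least one slot for every unit that appears in the input.
std::vector<std::vector<MI>>
packetize(const std::vector<MI> &Scheduled,
          const std::multiset<std::string> &Slots) {
  std::vector<std::vector<MI>> Packets;
  std::vector<MI> Cur;                       // packet being filled
  std::multiset<std::string> Free = Slots;   // slots left in 'Cur'
  std::set<int> InCur;                       // ids already placed in 'Cur'

  for (const MI &I : Scheduled) {
    bool DepInPacket = false;                // dependence inside this packet?
    for (int D : I.DependsOn)
      DepInPacket |= InCur.count(D) != 0;

    auto Slot = Free.find(I.Unit);
    if (DepInPacket || Slot == Free.end()) { // close the current packet
      if (!Cur.empty())
        Packets.push_back(Cur);
      Cur.clear();
      InCur.clear();
      Free = Slots;
      Slot = Free.find(I.Unit);
      assert(Slot != Free.end() && "unit missing from the slot template");
    }
    Free.erase(Slot);                        // consume one slot of that unit
    Cur.push_back(I);
    InCur.insert(I.Id);
  }
  if (!Cur.empty())
    Packets.push_back(Cur);
  return Packets;
}

With a slot template of, say, {"ALU", "ALU", "MEM"}, each packet can hold two ALU operations and one memory operation, and a packet is also closed early when an instruction depends on something already placed in it.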
2014 Jan 09
2
[LLVMdev] basic block missing after MachineInstr packetizing
Sergei, Thank you for your attention. My target is a custom VLIW DSP. I am not sure the dependency DAG is correct when it gets scheduled and packetized. Months ago, I submitted a bug at http://llvm.org/bugs/show_bug.cgi?id=17894 which explains more details. I am not sure my understanding of this bug is proper, but I modified my local code this way and it works for my target when scheduling and
2012 Oct 05
0
[LLVMdev] LLVM Loop Vectorizer
Perhaps we can parameterize the size of the vector while vectorizing at the LLVM level and fix up the loop iterators in a target-specific pass. -----Original Message----- From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Hal Finkel Sent: Friday, October 05, 2012 8:30 PM To: Das, Dibyendu Cc: llvmdev at cs.uiuc.edu Mailing List Subject: Re: [LLVMdev] LLVM Loop
2012 Oct 05
2
[LLVMdev] LLVM Loop Vectorizer
----- Original Message ----- > From: "Ramshankar Ramanarayanan" <Ramshankar.Ramanarayanan at amd.com> > To: "Hal Finkel" <hfinkel at anl.gov>, "Dibyendu Das" <Dibyendu.Das at amd.com> > Cc: "llvmdev at cs.uiuc.edu Mailing List" <llvmdev at cs.uiuc.edu> > Sent: Friday, October 5, 2012 11:00:39 AM > Subject: RE: [LLVMdev]
2012 Aug 07
0
[LLVMdev] VLIW code generation for LLVM backend
Yang, There is work currently underway to add SW pipelining and some sort of global scheduling to Hexagon, but if there is interest in it from other targets, it would be helpful to know. What is your involvement with this? Sergei Larin -- Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum. > -----Original Message----- > From: llvmdev-bounces at cs.uiuc.edu
2006 Dec 01
4
simple parallel computing on single multicore machine
Dear List, the advent of multicore machines in the consumer segment makes me wonder whether it would, at least in principle, be possible to divide a computational task among several slave R processes running on the different cores of the same processor, more or less in the way package SNOW would do on a cluster. I am thinking of simple 'embarrassingly parallel' problems, just like inverting
2012 Oct 05
0
[LLVMdev] LLVM Loop Vectorizer
If the -simd option is specified, opt could do validity checks, dependency analysis and such, and recognize that a loop can be executed in parallel; it could then convert the data types to vector types and add the scaling factor to the loop's iterators. Following this, there can be an early machine function pass that sets up processor-specific values in all of
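As a rough illustration of what adding a scaling factor to the loop's iterators means (hypothetical C++, not an actual LLVM pass), the induction variable of a loop judged parallel is advanced by an assumed vectorization factor VF, with a scalar epilogue covering the remainder; a target-specific pass would then pick the concrete VF and lower the body to real vector instructions.

// Sketch of the iterator-scaling shape of a vectorized loop.
#include <cstddef>

constexpr std::size_t VF = 4;  // assumed vectorization factor

void saxpy_scalar(float *y, const float *x, float a, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)          // original iterator: step of 1
    y[i] = a * x[i] + y[i];
}

void saxpy_vectorized_shape(float *y, const float *x, float a, std::size_t n) {
  std::size_t i = 0;
  // Vector body: iterator scaled by VF; each iteration stands for VF lanes
  // that a target-specific pass would lower to real vector instructions.
  for (; i + VF <= n; i += VF)
    for (std::size_t lane = 0; lane < VF; ++lane)  // models one vector op
      y[i + lane] = a * x[i + lane] + y[i + lane];
  // Scalar epilogue for the remaining n % VF elements.
  for (; i < n; ++i)
    y[i] = a * x[i] + y[i];
}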
2012 Dec 01
3
[LLVMdev] [RFC] "noclone" function attribute
Hi Krzysztof, Yes, however this can be solved in one of two ways: 1) Fully inline the call graph for all leaf functions that call the barrier intrinsic. This is done in several implementations as standard already, and "no call stack" is a requirement for Karrenberg's algorithm at least. 2) Apply the "noclone" attribute transitively such that if a function may
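Option 2 can be modelled with a small worklist over a call graph (a hypothetical standalone sketch using plain data structures, not LLVM's attribute or CallGraph APIs). The assumption here is that the transitive rule marks every direct or indirect caller of a function that uses the barrier intrinsic.

// Propagate a "noclone" marker up the call graph to a fixed point.
#include <map>
#include <set>
#include <string>
#include <vector>

using CallGraph = std::map<std::string, std::vector<std::string>>; // caller -> callees

std::set<std::string>
propagateNoClone(const CallGraph &CG, std::set<std::string> NoClone /* seed functions */) {
  // Build reverse edges so we can walk from callees to their callers.
  std::map<std::string, std::vector<std::string>> Callers;
  for (const auto &[Caller, Callees] : CG)
    for (const std::string &Callee : Callees)
      Callers[Callee].push_back(Caller);

  // Worklist fixed point: any caller of a noclone function becomes noclone.
  std::vector<std::string> Work(NoClone.begin(), NoClone.end());
  while (!Work.empty()) {
    std::string F = Work.back();
    Work.pop_back();
    for (const std::string &Caller : Callers[F])
      if (NoClone.insert(Caller).second)   // newly marked: revisit its callers
        Work.push_back(Caller);
  }
  return NoClone;
}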
2012 Aug 08
2
[LLVMdev] VLIW code generation for LLVM backend
Larin, Thank you for telling me about this. Our lab is planning to design a VLIW DSP and has to make a choice between GCC and LLVM, for which I take responsibility. As we all know, GCC's code base has a long history and a somewhat steep learning curve, so I suggest choosing LLVM. It seems the only drawback now is its poor support for VLIW architectures. And so if we can count on
2010 Apr 15
1
can't find "daphnia.txt" and others while working through Crawley's R-Book
I have a feeling that this is an embarrassingly simple fix, but I've been at it for most of the morning and can't get things figured out. I'm trying to work through some examples in Crawley's "The R Book". I have installed packages and libraries as described in the book, but when I try, for example: data<-read.table("c:\\temp\\daphnia.txt", header=T)
2008 Jul 18
2
symbolic linking to library files
I handle SysAdmin for a multi-user Linux box, with R 2.7.1 compiled and installed to make use of ACML (Opteron chips). The library files (packages) are installed to /usr/local/lib64/R/library. Everything works as it should, except for the following. Say I have a user (an R developer) who has developed a package called Blaster. We'll call the user guru. Now, /home/guru/Blaster contains the