Displaying 20 results from an estimated 900 matches similar to: "[LLVMdev] How to call the llvm.prefetch intrinsic ?"
2013 Apr 10
0
[LLVMdev] How to call the llvm.prefetch intrinsic ?
Alexandra,
I'm not sure what you mean by "replace", but I have code that does this to insert prefetches:
Type *I8Ptr = Type::getInt8PtrTy((*I)->getContext(), PtrAddrSpace);
Value *PrefPtrValue = ...
IRBuilder<> Builder(MemI);
Module *M = (*I)->getParent()->getParent();
Type *I32 = Type::getInt32Ty((*I)->getContext());
Value
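A minimal sketch of how the llvm.prefetch call itself is typically created, assuming an LLVM of roughly that vintage (header paths and IRBuilder overloads have moved between releases); MemI stands for the memory instruction being prefetched and PrefPtrValue for an address already cast to i8*:
#include "llvm/IRBuilder.h"    // "llvm/IR/IRBuilder.h" in newer releases
#include "llvm/Intrinsics.h"   // "llvm/IR/Intrinsics.h" in newer releases
#include "llvm/Module.h"
#include "llvm/Constants.h"
using namespace llvm;
// Insert "call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)" before MemI.
static void insertPrefetchBefore(Instruction *MemI, Value *PrefPtrValue) {
  Module *M = MemI->getParent()->getParent()->getParent();
  Type *I32 = Type::getInt32Ty(MemI->getContext());
  IRBuilder<> Builder(MemI);
  Function *PrefetchFunc = Intrinsic::getDeclaration(M, Intrinsic::prefetch);
  Value *Args[] = {
    PrefPtrValue,              // address to prefetch, assumed to be i8*
    ConstantInt::get(I32, 0),  // rw: 0 = read, 1 = write
    ConstantInt::get(I32, 3),  // locality: 0 (none) .. 3 (high)
    ConstantInt::get(I32, 1)   // cache type: 0 = instruction, 1 = data
  };
  Builder.CreateCall(PrefetchFunc, Args);
}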
2011 Jul 19
4
[LLVMdev] speculative parallelization in LLVM
Hi Tobi,
Thank you for your reply :).
I know that array accesses are handled as pointers in LLVM, but as I understand it,
Polly focuses on statically analysable code. As you mentioned: proving that
pointer accesses actually represent virtual array accesses.
In the case of a linked list, for example, traversed with a pointer p = p->next, I
expect that Polly will not handle this code. So I
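For illustration, a small sketch of the contrast being discussed (types and names are hypothetical): the first loop chases pointers, so each address depends on data loaded in the previous iteration and is not statically analysable, while the second uses an affine subscript of the kind a SCoP-based tool can reason about.
struct node { int value; struct node *next; };
// Pointer-chasing traversal: the access pattern is only known at runtime.
int sum_list(struct node *p) {
  int s = 0;
  for (; p; p = p->next)
    s += p->value;
  return s;
}
// Affine array access: the subscript is a linear function of the loop index.
int sum_array(int *A, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += A[i];
  return s;
}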
2011 Jul 19
3
[LLVMdev] speculative parallelization in LLVM
Hi Renato,
No, I cannot, but in case it is, I want to take advantage of this. In case it is
not, the instrumentation code will detect this at runtime and simply roll back
to the original version. I will always keep an original version available, in
addition to the ones I modify with Polly. However, initially I will speculate
that it is allocated contiguously.
Thanks,
Alexandra
2011 Jul 19
0
[LLVMdev] speculative parallelization in LLVM
On 19 July 2011 10:12, Jimborean Alexandra <xinfinity_a at yahoo.com> wrote:
> %curr_array = alloca [10 x %struct.linked], align 8
>
> while..
> %tmp16 = getelementptr inbounds [10 x %struct.linked]* %curr_array, i32 0,
> i32 1
Hi Alexandra,
Can you guarantee that the linked list will be allocated in contiguous memory?
cheers,
--renato
2011 Sep 08
4
[LLVMdev] multi-threading in llvm
Hi,
I want to execute the iterations of a loop in parallel by inserting calls either to pthreads or to the gomp library at the LLVM IR level. As a first step, I inserted an omp pragma in a C file and compiled it with llvm-gcc to check the generated LLVM code. If I understand correctly, to parallelize the loop in LLVM IR, I have to separate the loop into a new function, put all required parameters
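As a rough illustration of the outlining described above (not the exact code llvm-gcc emits; the struct and function names are hypothetical), the loop body is moved into its own function, the shared variables are packed into a struct passed through a single void* parameter, and libgomp's low-level entry points spawn and join the team:
#include <stddef.h>
#include <omp.h>
// libgomp's ABI entry points used by GCC's OpenMP lowering of that era,
// declared by hand here for the sketch.
extern "C" void GOMP_parallel_start(void (*fn)(void *), void *data,
                                    unsigned num_threads);
extern "C" void GOMP_parallel_end(void);
// Everything the loop body needs is packed into one struct.
struct loop_args { double *A; size_t n; };
// The outlined loop body; each thread takes a cyclic chunk of the iterations.
static void loop_body(void *p) {
  loop_args *args = static_cast<loop_args *>(p);
  int nthr = omp_get_num_threads();
  int tid  = omp_get_thread_num();
  for (size_t i = tid; i < args->n; i += nthr)
    args->A[i] *= 2.0;
}
void run_parallel(double *A, size_t n) {
  loop_args args = { A, n };
  GOMP_parallel_start(loop_body, &args, 0);  // 0 = let the runtime pick the team size
  loop_body(&args);                          // the calling thread works too
  GOMP_parallel_end();
}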
2011 May 09
2
[LLVMdev] get LPPassManager to use it in llvm::CloneLoop
Hi,
I am trying to write a FunctionPass that, among other tasks, has to clone some loops
from the current function.
How can I obtain the LPPassManager in order to use the CloneLoop function?
In a LoopPass it is a parameter of runOnLoop, but how can I obtain it in
a FunctionPass?
I tried simply by creating a new instance :
ValueMap<const Value *, Value* > VMap;
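Not an answer to the LPPassManager question, but a sketch of one way loop blocks are sometimes cloned directly from a FunctionPass, assuming the preheader wiring, exit edges and LoopInfo updates are patched up separately afterwards (utilities from llvm/Transforms/Utils/Cloning.h and ValueMapper.h):
#include "llvm/Transforms/Utils/Cloning.h"
#include "llvm/Transforms/Utils/ValueMapper.h"
#include "llvm/Analysis/LoopInfo.h"
#include "llvm/ADT/SmallVector.h"
using namespace llvm;
// Clone every basic block of L into F and remap the cloned instructions so
// they refer to the cloned values. Values defined outside the loop are left
// untouched by the RF_IgnoreMissingEntries flag (RF_IgnoreMissingLocals in
// newer releases).
static void cloneLoopBlocks(Loop *L, Function *F, ValueToValueMapTy &VMap) {
  SmallVector<BasicBlock *, 8> NewBlocks;
  for (Loop::block_iterator BI = L->block_begin(), BE = L->block_end();
       BI != BE; ++BI) {
    BasicBlock *NewBB = CloneBasicBlock(*BI, VMap, ".clone", F);
    VMap[*BI] = NewBB;
    NewBlocks.push_back(NewBB);
  }
  for (unsigned i = 0, e = NewBlocks.size(); i != e; ++i)
    for (BasicBlock::iterator I = NewBlocks[i]->begin(),
                              E = NewBlocks[i]->end(); I != E; ++I)
      RemapInstruction(&*I, VMap, RF_IgnoreMissingEntries);
}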
2011 Jul 20
3
[LLVMdev] print the memory address computed by getelementptr
Hi,
I want to print the memory locations computed by getelementptr. As I understand it,
getelementptr does not access memory; it only holds the address it
computes. I want to print these addresses at runtime (or process them). So I am
trying to build a function that takes a pointer as an argument and prints its value,
and to call this function, passing the gep instruction as a parameter.
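A hedged sketch of one way to do this (the function name print_addr and its runtime implementation are hypothetical, and address space 0 is assumed): declare an external void(i8*) function, bitcast the pointer produced by the gep to i8*, and insert the call right after the gep.
#include "llvm/IRBuilder.h"     // "llvm/IR/IRBuilder.h" in newer releases
#include "llvm/Module.h"
#include "llvm/Instructions.h"
using namespace llvm;
// Insert, right after GEP, a call to an external function
//   void print_addr(i8 *p);
// which is expected to be linked into the final binary and print its argument.
static void instrumentGEP(GetElementPtrInst *GEP) {
  Module *M = GEP->getParent()->getParent()->getParent();
  LLVMContext &Ctx = GEP->getContext();
  Type *I8Ptr = Type::getInt8PtrTy(Ctx);
  FunctionType *FTy =
      FunctionType::get(Type::getVoidTy(Ctx), I8Ptr, /*isVarArg=*/false);
  // Returns a Constant* in LLVM of this era (a FunctionCallee in newer releases).
  Constant *PrintAddr = M->getOrInsertFunction("print_addr", FTy);
  Instruction *InsertPt = GEP->getNextNode();  // a gep is never a terminator
  IRBuilder<> Builder(InsertPt);
  Value *AsI8Ptr = Builder.CreateBitCast(GEP, I8Ptr);
  Builder.CreateCall(PrintAddr, AsI8Ptr);
}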
2011 Aug 03
2
[LLVMdev] scalar evolution to determine access functions in arays
Only because in my next passes I change the CFG significantly and it is very hard to maintain the values of the Phi nodes.
Alexandra
________________________________
From: Tobias Grosser <tobias at grosser.es>
To: Jimborean Alexandra <xinfinity_a at yahoo.com>
Cc: "llvmdev at cs.uiuc.edu" <llvmdev at cs.uiuc.edu>; "luismastrangelo at gmail.com"
2011 Jul 18
3
[LLVMdev] speculative parallelization in LLVM
Hi,
I plan to do some speculative parallelization in LLVM using Polly and I target
loops that contain pointers and indirect references. As far as I know, Polly
generates optimized code starting from the SCoPs, therefore I plan to replace
all pointer accesses with array accesses, such that Polly will accept the code.
Each array access should use a linear function of the enclosing loops' indices.
2011 Jul 27
3
[LLVMdev] scalar evolution to determine access functions in arays
Hello,
How can I compute the functions of the loop iterators that are used as array indices?
For example:
for i = 0, N
for j = 0, M
A[2*i + j - 10] = ...
Can I determine that this instruction A[2*i + j - 10] = ... always accesses memory through a function f(i,j) = 2*i + j - 10 + base_address_of_A?
If I run the scalar evolution pass on this code I obtain:
%arrayidx =
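A small sketch of how the access function is usually queried, assuming the pass already requires ScalarEvolution (via getAnalysis<ScalarEvolution>() in the legacy pass manager of that era) and the loops are in the canonical -loop-simplify form; for the example above the printed SCEV is an add-recurrence in i and j rather than the literal formula, but it carries the same information:
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Instructions.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
// Print the closed-form address expression ScalarEvolution has for the
// pointer computed by a getelementptr.
static void printAccessFunction(GetElementPtrInst *GEP, ScalarEvolution &SE) {
  if (!SE.isSCEVable(GEP->getType()))
    return;
  const SCEV *AddrExpr = SE.getSCEV(GEP);
  errs() << "access function: " << *AddrExpr << "\n";
}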
2012 Nov 22
2
[LLVMdev] Set the minimum number of allocated bits for a variable
Hi,
I would like to force the minimum number of bits allocated for a variable in memory to be 16. From what I have seen, i1 is already stored in 8 bits, so the only change would be to store i1 and i8 in 16 bits, as all other types already satisfy this condition.
TargetData can help by setting a higher alignment, so that although the type is i1 or i8, the number of allocated bits is 16.
2011 Mar 30
2
[LLVMdev] how to detect if block N is reachable from block M ?
Hi,
Is there any method to check if there is a path in the CFG from block M to block
N, but M does not necessarily dominate block N?
In other words, whether N is reachable from M.
Thanks,
Alexandra
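One way, written against stable APIs only, as a sketch: a simple worklist walk over successor edges answers "is N reachable from M" without any dominance requirement. (Newer LLVM releases also provide isPotentiallyReachable in llvm/Analysis/CFG.h for the same purpose.)
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/BasicBlock.h"     // "llvm/IR/BasicBlock.h" in newer releases
#include "llvm/Support/CFG.h"    // succ_begin/succ_end; "llvm/IR/CFG.h" later
using namespace llvm;
// Breadth-first walk of the CFG from M; returns true if N can be reached.
// Note: M itself counts as reachable; adjust if a non-trivial path is required.
static bool isReachableFrom(BasicBlock *M, BasicBlock *N) {
  SmallPtrSet<BasicBlock *, 32> Visited;
  SmallVector<BasicBlock *, 32> Worklist;
  Worklist.push_back(M);
  Visited.insert(M);
  while (!Worklist.empty()) {
    BasicBlock *BB = Worklist.pop_back_val();
    if (BB == N)
      return true;
    for (succ_iterator SI = succ_begin(BB), SE = succ_end(BB); SI != SE; ++SI) {
      BasicBlock *Succ = *SI;
      if (!Visited.count(Succ)) {
        Visited.insert(Succ);
        Worklist.push_back(Succ);
      }
    }
  }
  return false;
}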
2011 May 09
0
[LLVMdev] get LPPassManager to use it in llvm::CloneLoop
On Mon, May 9, 2011 at 1:06 AM, Jimborean Alexandra
<xinfinity_a at yahoo.com> wrote:
> Hi,
>
> I try to write a FunctionPass that, among other tasks, has to clone some
> loops from the current function.
> How can I obtain the LPPassManager in order to use the CloneLoop function.
> In a LoopPass this is a parameter for the runOnLoop, but how can I obtain it
> in a
2011 Mar 31
1
[LLVMdev] how to detect if block N is reachable from block M ?
On Wed, Mar 30, 2011 at 11:35 PM, Eli Friedman <eli.friedman at gmail.com> wrote:
> On Wed, Mar 30, 2011 at 10:14 AM, Jimborean Alexandra
> <xinfinity_a at yahoo.com> wrote:
>> Hi,
>>
>> Is there any method to check if there is a path in the CFG from block M to
>> block N, but M does not necessarily dominate block N?
>> In other words, if N is
2012 Nov 22
0
[LLVMdev] Set the minimum number of allocated bits for a variable
Hi Alexandra,
I'm not sure what you want to do. Is the data layout string (http://llvm.org/docs/LangRef.html#datalayout, also usually set by each target-specific *TargetMachine constructor) good enough for you for setting the alignment?
Or is it more than the alignment you want to control? Do you have a target with 16-bit bytes? In that case, there are quite a lot of changes that need to be
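As a sketch of the data layout route suggested here (the full string is target dependent and the one below is only an example 64-bit layout; the i1:16:16 and i8:16:16 entries are the relevant additions, and they control alignment, not the stored size):
#include "llvm/Module.h"   // "llvm/IR/Module.h" in newer releases
using namespace llvm;
// Force 16-bit ABI/preferred alignment for i1 and i8 by editing the module's
// data layout string. A real pass would start from the target's own string
// rather than this illustrative one.
static void force16BitAlignment(Module &M) {
  M.setDataLayout("e-p:64:64:64-i1:16:16-i8:16:16-i16:16:16-"
                  "i32:32:32-i64:64:64-f32:32:32-f64:64:64");
}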
2011 Jul 19
0
[LLVMdev] speculative parallelization in LLVM
On 07/19/2011 11:46 AM, Jimborean Alexandra wrote:
> Hi Renato,
>
> No, I cannot, but in case it is, I want to take advantage of this. In
> case it is not, the instrumentation code will detect this at runtime and
> simply roll back to the original version. I will always keep an original
> version available, in addition to the ones I modify with Polly. However,
> initially I
2011 Aug 03
0
[LLVMdev] scalar evolution to determine access functions in arays
On 08/03/2011 08:35 AM, Jimborean Alexandra wrote:
> Hello Tobi,
>
> You are right, we need to run some other passes before running the
> scalar evolution pass. The sequence that I run for this example is -O3
> -loop-simplify -reg2mem. This is why I did not obtain the expressions
> depending on the loop indices. So I removed the reg2mem pass and scalar
> evolution computes the
2011 Aug 03
0
[LLVMdev] scalar evolution to determine access functions in arays
On 08/03/2011 10:22 AM, Jimborean Alexandra wrote:
> Only because in my next passes I change the CFG significantly and it is
> very hard to maintain the values of the Phi nodes.
OK. In Polly we developed a pass called 'independent-blocks-pass'. It
basically creates basic blocks that can easily be rescheduled without
preventing the scalar evolution analysis from working. Maybe something
2011 Aug 03
2
[LLVMdev] scalar evolution to determine access functions in arays
Hello Tobi,
You are right, we need to run some other passes before running the scalar evolution pass. The sequence that I run for this example is -O3 -loop-simplify -reg2mem. This is why I did not obtain the expressions depending on the loop indices. So I removed the reg2mem pass and scalar evolution computes the correct functions.
However, I need to run the reg2mem pass (or any other that
2011 Sep 12
4
[LLVMdev] multi-threading in llvm
On 09/12/2011 04:28 PM, Sebastian Pop wrote:
> Hi Alexandra,
>
> On Thu, Sep 8, 2011 at 13:53, Jimborean Alexandra<xinfinity_a at yahoo.com> wrote:
>> I had a look at the CodeGeneration from Polly. Is it possible to use it
>> without creating the Scops, by transforming it into a LoopPass?
>
> Yes. If you don't want to use the autopar of Polly and just rely on