Displaying 20 results from an estimated 20 matches for "nvptxtargetmachin".
2013 Feb 07
5
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...d can have math right in
the IR, regardless of the language it was lowered from. I can confirm this
method works for us very well with C and Fortran, but in order to make
accurate replacements of unsupported intrinsic calls, it needs to become
aware of NVPTX backend capabilities in the form of:
bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
> I would prefer not to lower such things in the back-end since different
compilers may want to implement such functions differently based on speed
vs. accurac...
2013 Feb 09
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...the IR, regardless of the language it was lowered from. I can confirm this
> method works for us very well with C and Fortran, but in order to make
> accurate replacements of unsupported intrinsic calls, it needs to become
> aware of NVPTX backend capabilities in the form of:
>
> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>
> > I would prefer not to lower such things in the back-end since different
> compilers may want to implement such functions different...
2013 Feb 08
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...d can have math right in the IR, regardless of the language it was lowered from. I can confirm this method works for us very well with C and Fortran, but in order to make accurate replacements of unsupported intrinsic calls, it needs to become aware of NVPTX backend capabilities in the form of:
bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
> I would prefer not to lower such things in the back-end since different compilers may want to implement such functions differently based on speed vs. accurac...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...the IR, regardless of the language it was lowered from. I can confirm this
> method works for us very well with C and Fortran, but in order to make
> accurate replacements of unsupported intrinsic calls, it needs to become
> aware of NVPTX backend capabilities in the form of:
>
> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>
> > I would prefer not to lower such things in the back-end since different
> compilers may want to implement such functions...
2015 Jan 19
6
[LLVMdev] X86TargetLowering::LowerToBT
...*s caller have enough context to match the immediate IR version?
In fact, lli isn't calling *LowerToBT* so it isn't matching. But isn't this
really a *peephole optimization* issue?
LLVM has a generic peephole optimizer, *CodeGen/PeepholeOptimizer.cpp*, which has
exactly one subclass in *NVPTXTargetMachine.cpp*.
But isn't it better to deal with X86 *LowerToBT* in a
*PeepholeOptimizer* subclass
where you have a small window of instructions rather than during pseudo
instruction expansion where you have really one instruction?
*PeepholeOptimizer* doesn't seem to be getting much attention and c...
2013 Feb 17
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...the language it was lowered from. I can confirm this
>> method works for us very well with C and Fortran, but in order to make
>> accurate replacements of unsupported intrinsics calls, it needs to become
>> aware of NVPTX backend capabilities in the form of:
>>
>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>
>> > I would prefer not to lower such things in the back-end since different
>> compilers may want...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...lowered from. I can confirm this
>>> method works for us very well with C and Fortran, but in order to make
>>> accurate replacements of unsupported intrinsics calls, it needs to become
>>> aware of NVPTX backend capabilities in the form of:
>>>
>>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>>
>>> > I would prefer not to lower such things in the back-end since
>>> d...
2013 Feb 17
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...confirm
>>>> this method works for us very well with C and Fortran, but in order to make
>>>> accurate replacements of unsupported intrinsics calls, it needs to become
>>>> aware of NVPTX backend capabilities in the form of:
>>>>
>>>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>>>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>>>
>>>> > I would prefer not to lower such things in the back-...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...>> this method works for us very well with C and Fortran, but in order to make
>>>>> accurate replacements of unsupported intrinsics calls, it needs to become
>>>>> aware of NVPTX backend capabilities in the form of:
>>>>>
>>>>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>>>>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>>>>
>>>>> > I would prefer not to lower...
2013 Jun 05
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...thod works for us very well with C and Fortran, but in order to make
>>>>>> accurate replacements of unsupported intrinsics calls, it needs to become
>>>>>> aware of NVPTX backend capabilities in the form of:
>>>>>>
>>>>>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>>>>>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>>>>>
>>>>>> > I wo...
2013 Jun 05
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...s very well with C and Fortran, but in order to make
>>>>>>> accurate replacements of unsupported intrinsics calls, it needs to become
>>>>>>> aware of NVPTX backend capabilities in the form of:
>>>>>>>
>>>>>>> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic) and
>>>>>>> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
>>>>>>>
>>>&g...
2015 Jan 19
2
[LLVMdev] X86TargetLowering::LowerToBT
...ntext to match the immediate IR
> version? In fact, lli isn't calling *LowerToBT* so it isn't matching. But
> isn't this really a *peephole optimization* issue?
>
> LLVM has a generic peephole optimizer, *CodeGen/PeepholeOptimizer.cpp*, which has
> exactly one subclass in *NVPTXTargetMachine.cpp*.
>
> But isn't it better to deal with X86 *LowerToBT* in a *PeepholeOptimizer* subclass
> where you have a small window of instructions rather than during pseudo
> instruction expansion where you have really one instruction?
> *PeepholeOptimizer* doesn't seem to be gett...
2013 Jun 21
0
[LLVMdev] About writing a modulePass in addPreEmitPass() for NVPTX
Are you sure you are initializing your pass properly? Can you show a
stripped-down version of your pass?
On Fri, Jun 21, 2013 at 7:27 AM, Anthony Yu <swpenim at gmail.com> wrote:
> Hello,
>
> I want to write a modulePass in addPreEmitPass() for NVPTX, but I
> encounter an assertion failure when executing clang.
>
> Here is my error message.
> ====
> Pass 'NVPTX
2012 May 01
2
[LLVMdev] [llvm-commits] [PATCH][RFC] NVPTX Backend
...egen is
> also relatively limited.
>
> This is in no particular order:
>
> * Is there any reason why the 32-bit arch name is nvptx and the 64-bit arch
> name is nvptx64, especially as the default for the NVCC compiler is to pass
> the -m64 flag? Also, all the internal naming (NVPTXTargetMachine for
> example) uses 32 / 64 suffixes.
As far as I know, there is no fundamental reason for this. If it bugs you too much, I'm sure we could change it. :)
>
> * The register naming seems a little arbitrary as well, using FL prefixes for 64-
> bit float and da prefixes for 64-bit...
2012 May 02
0
[LLVMdev] [llvm-commits] [PATCH][RFC] NVPTX Backend
...backends, plus my knowledge of tablegen is
also relatively limited.
This is in no particular order:
* Is there any reason why the 32-bit arch name is nvptx and the 64-bit arch
name is nvptx64, especially as the default for the NVCC compiler is to pass
the -m64 flag? Also, all the internal naming (NVPTXTargetMachine for
example) uses 32 / 64 suffixes.
As far as I know, there is no fundamental reason for this. If it bugs you too much, I'm sure we could change it. :)
...
2013 Jun 22
2
[LLVMdev] About writing a modulePass in addPreEmitPass() for NVPTX
I wrote my pass as a mix of NVPTXAllocaHoisting, NVPTXSplitBBatBar, and
transforms/Hello.
The following is part of the code:
in NVPTXTargetMachine.cpp:
bool NVPTXPassConfig::addPreEmitPass() {
  addPass(createTest());
  return false;
}
in NVPTXTest.h:
namespace llvm {
class NVPTXTest : public ModulePass {
  void getAn...
2013 Jun 21
2
[LLVMdev] About writing a modulePass in addPreEmitPass() for NVPTX
Hello,
I want to write a modulePass in addPreEmitPass() for NVPTX, but I encounter
an assertion failure when executing clang.
Here is my error message.
====
Pass 'NVPTX Assembly Printer' is not initialized.
Verify if there is a pass dependency cycle.
Required Passes:
llc: /home/pyyu/local/llvm/lib/IR/PassManager.cpp:637: void
llvm::PMTopLevelManager::schedulePass(llvm::Pass*): Assertion
2015 Jan 22
2
[LLVMdev] X86TargetLowering::LowerToBT
...xt to match the immediate IR version? In fact, lli isn't calling LowerToBT so it isn't matching. But isn't this really a peephole optimization issue?
>>>>
>>>> LLVM has a generic peephole optimizer, CodeGen/PeepholeOptimizer.cpp which has exactly one subclass in NVPTXTargetMachine.cpp.
>>>>
>>>> But isn't it better to deal with X86 LowerToBT in a PeepholeOptimizer subclass where you have a small window of instructions rather than during pseudo instruction expansion where you have really one instruction? PeepholeOptimizer doesn't seem to be g...
2012 Apr 29
0
[LLVMdev] [llvm-commits] [PATCH][RFC] NVPTX Backend
...is also relatively limited.

This is in no particular order:

* Is there any reason why the 32-bit arch name is nvptx and the 64-bit
arch name is nvptx64, especially as the default for the NVCC compiler
is to pass the -m64 flag? Also, all the internal naming
(NVPTXTargetMachine for example) uses 32 / 64 suffixes.

* The register naming seems a little arbitrary as well, using FL
prefixes for 64-bit float and da prefixes for 64-bit float arguments
for example.

* Something I picked up in the NVVM IR spec - it seems to only be
possibl...
2012 Apr 27
2
[LLVMdev] [llvm-commits] [PATCH][RFC] NVPTX Backend
Thanks for the feedback!
The attached patch addresses the style issues that have been found.
From: Jim Grosbach [mailto:grosbach at apple.com]
Sent: Wednesday, April 25, 2012 2:22 PM
To: Justin Holewinski
Cc: llvm-commits at cs.uiuc.edu; llvmdev at cs.uiuc.edu; Vinod Grover
Subject: Re: [llvm-commits] [PATCH][RFC] NVPTX Backend
Hi Justin,
Cool stuff, to be sure. Excited to see this.
As a