search for: cicc

Displaying 14 results from an estimated 14 matches for "cicc".

2007 Sep 13
3
Someone Using Java/R Interface--- JRI ?
Hi all, I am writing R code and I want to interface with Java, i.e. I want to call R from Java. That's why I have installed JRI on my machine. There is also documentation available in the Javadoc, but as I am very new to Java as well as R, I don't understand much of it. If someone is using this package, i.e. JRI, please let me know whether I am going in the right direction or not.
2007 Jun 24
1
JRI and Axis Web Service
Hi all, it is my first time using the R-help mailing list and I don't have much R knowledge. The reason I am writing this email is to look for help with using JRI in a Java Axis web service. Well, I am not quite sure if this is the right place to ask this kind of question, but I can't find the JRI mailing list. So please give me some hints if this is not the right place to
2017 Jun 09
1
NVPTX Back-end: relocatable device code support for dynamic parallelism
...that this feature is not supported. Does anyone know if this is the case? I also tried to find out what nvcc is doing when setting rdc to on, but had a few problems trying to understand what's going on. I will attach the verbose output of nvcc. I have no clue what the binaries cudafe/cudafe++ and cicc are doing, so it's rather hard to guess what's happening. There are additional options like -D__CUDACC_RDC__, --device-c and --compile-only that are not used when rdc is off. All but --device-c can be used with clang, and I can compile my program; however, I can't get it to run properly. For ea...
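For context, below is a minimal sketch of the kind of program that needs relocatable device code: dynamic parallelism, where a parent kernel launches a child kernel on the device. The kernel names and launch configuration are made up for illustration; the nvcc flags in the comment are the standard ones for this feature.

// Hypothetical kernels, for illustration. Dynamic parallelism needs
// relocatable device code and sm_35 or newer, e.g.:
//   nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt -o dynpar
#include <cstdio>

__global__ void child(int parent_block) {
    printf("child thread %d of parent block %d\n", threadIdx.x, parent_block);
}

__global__ void parent() {
    // A device-side launch; only legal when compiled with -rdc=true.
    child<<<1, 4>>>(blockIdx.x);
}

int main() {
    parent<<<2, 1>>>();
    cudaDeviceSynchronize();  // wait for parent and nested child grids
    return 0;
}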
2013 Jun 05
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
Dear all, FWIW, I've tested libdevice.compute_20.10.bc and libdevice.compute_30.10.bc from /cuda/nvvm/libdevice shipped with the CUDA 5.5 preview. The IR is compatible with the LLVM 3.4 trunk that we use. Results are correct; performance is almost the same as what we had before with cicc-sniffed IR, or maybe <10% better. Will test libdevice.compute_35.10.bc once we get K20 support. Thanks for addressing this, - D. 2013/2/17 Dmitry Mikushin <dmitry at kernelgen.org> > > The issue is really that there is no standard math library for PTX. > > Well, formall...
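For context, libdevice exposes its math routines under __nv_-prefixed names that get resolved when the bitcode is linked into the user's IR module. A minimal sketch, assuming the __nv_sinf entry point from libdevice and a hypothetical kernel; the llvm-link line in the comment shows one way the linking step could look.

// Illustrative kernel calling a libdevice routine by its __nv_ name;
// the symbol is resolved by linking the user IR against the bitcode, e.g.:
//   llvm-link kernel.ll libdevice.compute_20.10.bc -o linked.bc
extern "C" __device__ float __nv_sinf(float x);  // provided by libdevice

__global__ void apply_sin(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = __nv_sinf(data[i]);
}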
2013 Jun 05
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...all, > > FWIW, I've tested libdevice.compute_20.10.bc and > libdevice.compute_30.10.bc from /cuda/nvvm/libdevice shipped with the CUDA 5.5 > preview. The IR is compatible with the LLVM 3.4 trunk that we use. Results are > correct; performance is almost the same as what we had before with > cicc-sniffed IR, or maybe <10% better. Will test libdevice.compute_35.10.bc > once we get K20 support. > > Thanks for addressing this, > - D. > > > 2013/2/17 Dmitry Mikushin <dmitry at kernelgen.org> > >> > The issue is really that there is no standard math...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...>> Sorry for the delay with my reply, >>>> >>>> Answers to your questions could be different, depending on the math >>>> library placement in the code generation pipeline. At KernelGen, we >>>> currently have a user-level CUDA math module, adopted from cicc internals >>>> [1]. It is intended to be linked with the user LLVM IR module, right before >>>> proceeding with the final optimization and backend. For the last few months we have been >>>> using this method to temporarily work around the absence of many math >>>> fun...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
Dear Yuan, sorry for the delay with my reply. Answers to your questions could be different, depending on the math library placement in the code generation pipeline. At KernelGen, we currently have a user-level CUDA math module, adopted from cicc internals [1]. It is intended to be linked with the user LLVM IR module, right before proceeding with the final optimization and backend. For the last few months we have been using this method to temporarily work around the absence of many math functions, to keep up the speed of applications testing in our compiler...
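For context, a sketch of the pipeline placement described here, with hypothetical file names and a libdevice-style __nv_ name standing in for the math module's entry points: the math bitcode is linked into the user IR first, and only then does the combined module go through final optimization and the NVPTX backend.

// Hypothetical pipeline sketch: link the math module into the user IR
// first, then run final optimization and the NVPTX backend:
//   llvm-link user_kernel.ll cuda_math_module.bc -o combined.bc
//   opt -O3 combined.bc -o combined.opt.bc
//   llc -march=nvptx64 -mcpu=sm_30 combined.opt.bc -o kernel.ptx
// In the user's source, the math call is just an external function
// (a libdevice-style name is used here as a stand-in):
extern "C" __device__ double __nv_exp(double x);

__global__ void decay(double* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = __nv_exp(-v[i]);
}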
2017 Jun 14
2
Separate compilation of CUDA code?
Hi, I wonder whether the current version of LLVM supports separate compilation and linking of device code, i.e., is there a flag analogous to nvcc's --relocatable-device-code flag? If not, is there any plan to support this? Thanks! Yuanfeng Peng
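For context, a minimal sketch of nvcc's separate-compilation flow that the question refers to, with hypothetical file and function names: a __device__ function defined in one translation unit and called from a kernel in another, which only links when relocatable device code is enabled.

// util.cu (hypothetical): a device function defined in one translation unit
__device__ float scale(float x) { return 2.0f * x; }

// main.cu (hypothetical): declares scale() and calls it from a kernel
extern __device__ float scale(float x);

__global__ void kernel(float* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = scale(v[i]);
}

int main() {
    float* v;
    cudaMalloc(&v, 256 * sizeof(float));
    kernel<<<1, 256>>>(v, 256);
    cudaDeviceSynchronize();
    cudaFree(v);
    return 0;
}

// Build with separate compilation (-dc is the short form of --device-c):
//   nvcc -arch=sm_35 -rdc=true -dc util.cu main.cu
//   nvcc -arch=sm_35 -rdc=true util.o main.o -o app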
2013 Feb 17
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...y at kernelgen.org> wrote: > Dear Yuan, > > Sorry for the delay with my reply, > > Answers to your questions could be different, depending on the math > library placement in the code generation pipeline. At KernelGen, we > currently have a user-level CUDA math module, adopted from cicc internals > [1]. It is intended to be linked with the user LLVM IR module, right before > proceeding with the final optimization and backend. For the last few months we have been > using this method to temporarily work around the absence of many math > functions, to keep up the speed of applications tes...
2013 Feb 17
2
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...> >> Dear Yuan, >> >> Sorry for the delay with my reply, >> >> Answers to your questions could be different, depending on the math >> library placement in the code generation pipeline. At KernelGen, we >> currently have a user-level CUDA math module, adopted from cicc internals >> [1]. It is intended to be linked with the user LLVM IR module, right before >> proceeding with the final optimization and backend. For the last few months we have been >> using this method to temporarily work around the absence of many math >> functions, to keep up the speed of...
2013 Feb 17
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
...>>> >>> Sorry for the delay with my reply, >>> >>> Answers to your questions could be different, depending on the math >>> library placement in the code generation pipeline. At KernelGen, we >>> currently have a user-level CUDA math module, adopted from cicc internals >>> [1]. It is intended to be linked with the user LLVM IR module, right before >>> proceeding with the final optimization and backend. For the last few months we have been >>> using this method to temporarily work around the absence of many math >>> functions, to keep...
2015 Apr 08
5
[LLVMdev] CUDA front-end (CUDA to LLVM IR)
Hi, I wanted to ask whether there is an ongoing effort (or an already established tool) that makes it possible to convert CUDA kernels (that use CUDA-specific intrinsics, e.g., threadIdx.x, __syncthreads(), ...) to LLVM IR. I am aware that I can do this for OpenCL with the help of libclc, but I cannot find something similar for CUDA. Thanks
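For context, a sketch of how device-side LLVM IR can be obtained from such a kernel, assuming a clang build recent enough to include its CUDA front-end with the NVPTX target; the kernel and file names are illustrative.

// Emit device-side LLVM IR with clang's CUDA front-end (illustrative):
//   clang++ -x cuda --cuda-device-only --cuda-gpu-arch=sm_35 -S -emit-llvm \
//       reduce.cu -o reduce.ll
__global__ void block_sum(const float* in, float* out) {
    __shared__ float buf[256];            // assumes a 256-thread block
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                      // becomes llvm.nvvm.barrier0 in the IR
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0];
}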
2013 Feb 08
0
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
Yes, it helps a lot and we are working on it. A few questions: 1) What will be your use model of this library? Will you run optimization phases after linking with the library? If so, what are they? 2) Do you care if the names of functions differ from those in libm? For example, it would be gpusin() instead of sin(). 3) Do you need a different library for different host
2013 Feb 07
5
[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all
Hi Justin, gentlemen, I'm afraid I have to escalate this issue at this point. Since it was discussed for the first time last summer, it was sufficient for us for a while to have lowering of math calls into intrinsics disabled at DragonEgg level, and link them against CUDA math functions at LLVM IR level. Now I can say: this is not sufficient any longer, and we need NVPTX backend to deal with