2020 Aug 22
Looking for suggestions: Inferring GPU memory accesses
...executing thread?
(Assume CUDA only for now)
My initial idea is to replace all uses of dim-related values, e.g.:
__cuda_builtin_blockDim_t::__fetch_builtin_x()
__cuda_builtin_gridDim_t::__fetch_builtin_x()
and index-related values, e.g.:
__cuda_builtin_blockIdx_t::__fetch_builtin_x()
__cuda_builtin_threadIdx_t::__fetch_builtin_x()
with ConstantInts, then run constant folding on the result and check how
many GEPs end up with constant indices.
Would something like this work, or are there complications I am not thinking
of? I'd appreciate any suggestions.
P.S. I am new to LLVM.
Thanks in advance,
Ees
2020 Aug 23
Looking for suggestions: Inferring GPU memory accesses