Displaying 2 results from an estimated 2 matches for "__cuda_builtin_blockidx_t".
2020 Aug 22
5
Looking for suggestions: Inferring GPU memory accesses
...ion given concrete values
for the grid/block and executing thread?
(Assume CUDA only for now)
My initial idea is to replace all uses of dim-related values, e.g.:
__cuda_builtin_blockDim_t::__fetch_builtin_x()
__cuda_builtin_gridDim_t::__fetch_builtin_x()
and index-related values, e.g.:
__cuda_builtin_blockIdx_t::__fetch_builtin_x()
__cuda_builtin_threadIdx_t::__fetch_builtin_x()
with ConstantInts. Then run constant folding on the result and check how
many GEPs have constant values.
Would something like this work, or are there complications I am not thinking
of? I'd appreciate any suggestions.
P....
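For concreteness, here is a rough sketch of what that substitution could look like as an LLVM new-pass-manager function pass. It assumes the __fetch_builtin_* accessors have already been lowered to the llvm.nvvm.read.ptx.sreg.* intrinsics (which is how clang emits them for device code) and only handles the x dimension; the pass name SpecializeGridConstants and the concrete grid/block/thread values are illustrative placeholders, not part of the original proposal.

// Sketch only: substitute the NVVM grid/thread intrinsics with ConstantInts
// for one concrete launch configuration and thread, then let constant folding
// simplify whatever depends on them. Names and values below are placeholders.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicsNVPTX.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

struct SpecializeGridConstants : PassInfoMixin<SpecializeGridConstants> {
  // Assumed concrete values: threadIdx.x, blockIdx.x, blockDim.x, gridDim.x.
  unsigned TidX = 0, CtaIdX = 0, NTidX = 128, NCtaIdX = 64;

  PreservedAnalyses run(Function &F, FunctionAnalysisManager &) {
    SmallVector<CallInst *, 16> Dead;
    for (Instruction &I : instructions(F)) {
      auto *CI = dyn_cast<CallInst>(&I);
      if (!CI)
        continue;
      unsigned Val;
      switch (CI->getIntrinsicID()) {
      case Intrinsic::nvvm_read_ptx_sreg_tid_x:    Val = TidX;    break; // threadIdx.x
      case Intrinsic::nvvm_read_ptx_sreg_ctaid_x:  Val = CtaIdX;  break; // blockIdx.x
      case Intrinsic::nvvm_read_ptx_sreg_ntid_x:   Val = NTidX;   break; // blockDim.x
      case Intrinsic::nvvm_read_ptx_sreg_nctaid_x: Val = NCtaIdX; break; // gridDim.x
      default:
        continue;
      }
      // Replace the intrinsic result with the chosen constant and drop the call.
      CI->replaceAllUsesWith(ConstantInt::get(CI->getType(), Val));
      Dead.push_back(CI);
    }
    for (CallInst *CI : Dead)
      CI->eraseFromParent();
    return Dead.empty() ? PreservedAnalyses::all() : PreservedAnalyses::none();
  }
};

After a pass like this, running instcombine (or ConstantFoldInstruction over the function) and counting the GetElementPtrInst instructions for which hasAllConstantIndices() holds would give the "how many GEPs have constant values" measurement described above.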
2020 Aug 23
2
Looking for suggestions: Inferring GPU memory accesses
...CUDA only for now)
> >
> > My initial idea is to replace all uses of dim-related values, e.g.:
> > __cuda_builtin_blockDim_t::__fetch_builtin_x()
> > __cuda_builtin_gridDim_t::__fetch_builtin_x()
> >
> > and index-related values, e.g.:
> > __cuda_builtin_blockIdx_t::__fetch_builtin_x()
> > __cuda_builtin_threadIdx_t::__fetch_builtin_x()
> >
> > with ConstantInts. Then run constant folding on the result and check how
> > many GEPs have constant values.
> >
> > Would something like this work, or are there complicatio...