Displaying 3 results from an estimated 3 matches for "v16x16i16".
2018 Jan 17
0
Does it make sense to upstream some MVT's?
...the bloat of expressing all possible combinations.
How does LLVM handle 2D vectors/matrices? I haven’t moved on to v6.0.0 yet, but so far as I can tell v5.0.x only abstracts 1D vectors (N elements of M bits), and having types like ‘v256i16’ is not quite the same as having support for, say, ‘v16x16i16’. Having a high-level abstraction for reasoning about NxN elements of M bits would be really useful/cool, especially for exotic instructions with special register allocation requirements, and for classic nested loops such as convolutions.
MartinO
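A minimal C++ sketch of the distinction MartinO is drawing, assuming nothing about LLVM's internals: a flat 1D type such as v256i16 only says "256 elements of 16 bits", while a hypothetical 2D v16x16i16 would also make the 16x16 shape part of the type. All names below (Flat256xI16, Tile, loadFlat) are illustrative plain C++, not LLVM API.

#include <array>
#include <cstddef>
#include <cstdint>

// Flat 1D view: what a type like v256i16 expresses -- "256 elements of
// 16 bits" with no row/column structure.
using Flat256xI16 = std::array<std::int16_t, 256>;

// 2D view: what a hypothetical v16x16i16 would express -- the same 256
// elements, but with the 16x16 shape carried by the type itself.
template <typename T, std::size_t Rows, std::size_t Cols>
struct Tile {
  std::array<T, Rows * Cols> data;
  T &at(std::size_t row, std::size_t col) { return data[row * Cols + col]; }
};

using Tile16x16I16 = Tile<std::int16_t, 16, 16>;

// With only the flat type, every user re-derives the row-major mapping
// (row * 16 + col) by convention; the shape is not recoverable from the
// type, which is the gap between v256i16 and v16x16i16.
std::int16_t loadFlat(const Flat256xI16 &v, std::size_t row, std::size_t col) {
  return v[row * 16 + col];
}

The 2D type holds no data the flat layout cannot; what it adds is that the shape becomes visible to the type system, which is exactly what per-row register allocation constraints and convolution-style nested loops want to reason about.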
2018 Jan 17
3
Does it make sense to upstream some MVT's?
Hi,
Our backend for Pixel Visual Core uses some MVT's that aren't upstream.
Does it make sense to upstream them? I figure that as architectures get
wider, we'll eventually end up with "all" possible combinations of widths
and types, but on the other hand, carrying code in tree that no current
backend uses isn't great.
These are the MVT's that we have added:
16x16...
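The "all possible combinations" worry is about the enumerate-every-type pattern, where each width or shape a target needs becomes another pre-defined entry. A schematic C++ sketch of that pattern, using illustrative names (SimpleVT, VTInfo) rather than LLVM's actual MVT definitions:

#include <cstdint>

// Schematic of the enumerate-every-type pattern under discussion. The
// names and fields are illustrative, not LLVM's real MVT table.
enum class SimpleVT : std::uint8_t {
  i16,
  v16i16,     // 1D: 16 x i16
  v256i16,    // 1D: 256 x i16
  v16x16i16,  // 2D: 16 x 16 x i16 -- the kind of entry added out of tree
};

struct VTInfo {
  std::uint16_t elementBits;  // bits per element
  std::uint16_t numElements;  // total element count
  std::uint16_t rows, cols;   // 1 x N for ordinary vectors, R x C for tiles
};

// Every width/shape combination a target needs becomes another row here,
// which is the bloat concern: element types, widths, and now 2D shapes
// multiply out as architectures get wider.
constexpr VTInfo kVTInfo[] = {
    /* i16        */ {16, 1, 1, 1},
    /* v16i16     */ {16, 16, 1, 16},
    /* v256i16    */ {16, 256, 1, 256},
    /* v16x16i16  */ {16, 256, 16, 16},
};

Each added 2D shape multiplies against the element types and lane counts that already exist, which is why a table like this grows quickly whether the entries live upstream or out of tree.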
2018 Jan 17
1
Does it make sense to upstream some MVT's?
...via a
restricted set of operations that we have intrinsics for.
-- Sean Silva
> I haven’t moved on to v6.0.0 yet, but so far as I can tell v5.0.x only
> abstracts 1D vectors (N elements of M bits), and having types like ‘v256i16’
> is not quite the same as having support for, say, ‘v16x16i16’.
> Having a high-level abstraction for reasoning about NxN elements of M bits
> would be really useful/cool, especially for exotic instructions with
> special register allocation requirements, and for classic nested loops such
> as convolutions.
>
> MartinO...
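Sean Silva's excerpt describes keeping the wide 2D values behind a restricted set of operations exposed as intrinsics. A plain C++ sketch of that pattern, with hypothetical names and signatures rather than the actual Pixel Visual Core intrinsics: the tile is opaque, and only a small, fixed API (tileLoad, tileStore, tileMadd) ever touches it.

#include <cstddef>
#include <cstdint>

// Sketch of the "restricted set of operations" approach: the 2D value is
// opaque, and only a small, fixed API touches it. In a backend these
// operations would correspond to target intrinsics; the names and
// signatures here are hypothetical.
struct Tile16x16I16 {
  std::int16_t storage[16][16];  // opaque to users; only the ops below see it
};

// Load a tile from a row-major buffer with the given stride (in elements).
Tile16x16I16 tileLoad(const std::int16_t *src, std::size_t stride) {
  Tile16x16I16 t;
  for (std::size_t r = 0; r < 16; ++r)
    for (std::size_t c = 0; c < 16; ++c)
      t.storage[r][c] = src[r * stride + c];
  return t;
}

// Store a tile back to memory.
void tileStore(std::int16_t *dst, std::size_t stride, const Tile16x16I16 &t) {
  for (std::size_t r = 0; r < 16; ++r)
    for (std::size_t c = 0; c < 16; ++c)
      dst[r * stride + c] = t.storage[r][c];
}

// Multiply-accumulate, acc += a * b: the convolution-style building block
// the thread mentions, with accumulation widened to i32.
void tileMadd(std::int32_t acc[16][16], const Tile16x16I16 &a,
              const Tile16x16I16 &b) {
  for (std::size_t r = 0; r < 16; ++r)
    for (std::size_t c = 0; c < 16; ++c)
      for (std::size_t k = 0; k < 16; ++k)
        acc[r][c] += static_cast<std::int32_t>(a.storage[r][k]) * b.storage[k][c];
}

Under the scheme the excerpt hints at, operations like these would be target intrinsics lowered to the 2D instructions, so the rest of the IR can treat the value as opaque and never needs a first-class 2D type to describe it.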