Displaying 6 results from an estimated 6 matches for "sum_mul_add32".
2020 May 19
5
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 8));
+    sum_add32 = _mm_srai_epi32(sum_add32, 16);
+    sum_add32 = _mm_shuffle_epi32(sum_add32, 3);
+
+    // [sum(t2[0]..t2[6]), X, X, X] [int32*4]; faster than multiple _mm_hadds_epi16
+    __m128i sum_mul_add32 = _mm_add_epi16(mul_add16_1, mul_add16_2);
+    sum_mul_add32 = _mm_add_epi16(sum_mul_add32, _mm_slli_si128(sum_mul_add32, 2));
+    sum_mul_add32 = _mm_add_epi16(sum_mul_add32, _mm_slli_si128(sum_mul_add32, 4));
+    sum_mul_add32 = _mm_add_epi16(sum_mul_add32, _mm_slli_si1...
2020 May 18
6
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
2020 May 18
0
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
2020 May 18
2
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
2020 May 20
0
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
2020 May 18
3
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
What do you base this on?
Per https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html :
"For the x86-32 compiler, you must use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default."
That reads to me like we're fine for SSE2. As stated in my comments, SSSE3 support must be...