Displaying 6 results from an estimated 6 matches for "sum_add32".
2020 May 19
5
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...*4]; faster than multiple _mm_hadds_epi16
+ // Shifting left, then shifting right again and shuffling (rather than just
+ // shifting right as with mul32 below) to cheaply end up with the correct sign
+ // extension as we go from int16 to int32.
+ __m128i sum_add32 = _mm_add_epi16(add16_1, add16_2);
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 2));
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 4));
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 8));
+ sum_add3...
2020 May 18
6
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...*4]; faster than multiple _mm_hadds_epi16
+ // Shifting left, then shifting right again and shuffling (rather than just
+ // shifting right as with mul32 below) to cheaply end up with the correct sign
+ // extension as we go from int16 to int32.
+ __m128i sum_add32 = _mm_add_epi16(add16_1, add16_2);
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 2));
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 4));
+ sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 8));
+ sum_add3...
2020 May 18
0
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...hadds_epi16
> + // Shifting left, then shifting right again and shuffling (rather than just
> + // shifting right as with mul32 below) to cheaply end up with the correct sign
> + // extension as we go from int16 to int32.
> + __m128i sum_add32 = _mm_add_epi16(add16_1, add16_2);
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 2));
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 4));
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add...
2020 May 18
2
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...// Shifting left, then shifting right again and shuffling (rather than just
>> + // shifting right as with mul32 below) to cheaply end up with the correct sign
>> + // extension as we go from int16 to int32.
>> + __m128i sum_add32 = _mm_add_epi16(add16_1, add16_2);
>> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 2));
>> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 4));
>> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32,...
2020 May 20
0
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...hadds_epi16
> + // Shifting left, then shifting right again and shuffling (rather than just
> + // shifting right as with mul32 below) to cheaply end up with the correct sign
> + // extension as we go from int16 to int32.
> + __m128i sum_add32 = _mm_add_epi16(add16_1, add16_2);
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 2));
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 4));
> + sum_add32 = _mm_add_epi16(sum_add32, _mm_slli_si128(sum_add32, 8));
> +...
2020 May 18
3
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
What do you base this on?
Per https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html :
"For the x86-32 compiler, you must use -march=cpu-type, -msse or
-msse2 switches to enable SSE extensions and make this option
effective. For the x86-64 compiler, these extensions are enabled by
default."
That reads to me like we're fine for SSE2. As stated in my comments,
SSSE3 support must be