search for: _mm_madd_epi16

Displaying 14 results from an estimated 14 matches for "_mm_madd_epi16".

2015 Mar 13
1
[RFC PATCH v3] Intrinsics/RTCD related fixes. Mostly x86.
...43210 = _mm_loadu_si128((__m128i *)(&x[i + 0])); - inVec2_76543210 = _mm_loadu_si128((__m128i *)(&y[i + 0])); - - inVec1_FEDCBA98 = _mm_loadu_si128((__m128i *)(&x[i + 8])); - inVec2_FEDCBA98 = _mm_loadu_si128((__m128i *)(&y[i + 8])); - - inVec1_76543210 = _mm_madd_epi16(inVec1_76543210, inVec2_76543210); - inVec1_FEDCBA98 = _mm_madd_epi16(inVec1_FEDCBA98, inVec2_FEDCBA98); - - acc1 = _mm_add_epi32(acc1, inVec1_76543210); - acc2 = _mm_add_epi32(acc2, inVec1_FEDCBA98); - } +#if defined(OPUS_X86_MAY_HAVE_SSE) && !defined(FIXED_POINT)...
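For context, the hunk above is the SSE dot-product/xcorr kernel; the pattern it relies on is roughly the following (a minimal sketch, not the actual Opus code; it assumes n is a multiple of 8 and that the 32-bit accumulators do not overflow):

#include <emmintrin.h>

/* _mm_madd_epi16 multiplies adjacent int16 pairs and adds each pair into
 * an int32 lane, so eight 16-bit products collapse into four 32-bit
 * partial sums per instruction. */
static int dot_i16_sse2(const short *x, const short *y, int n)
{
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 8) {
        __m128i vx = _mm_loadu_si128((const __m128i *)&x[i]);
        __m128i vy = _mm_loadu_si128((const __m128i *)&y[i]);
        acc = _mm_add_epi32(acc, _mm_madd_epi16(vx, vy));
    }
    /* horizontal sum of the four int32 lanes */
    acc = _mm_add_epi32(acc, _mm_srli_si128(acc, 8));
    acc = _mm_add_epi32(acc, _mm_srli_si128(acc, 4));
    return _mm_cvtsi128_si32(acc);
}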
2015 Mar 12
1
[RFC PATCHv2] Intrinsics/RTCD related fixes. Mostly x86.
...43210 = _mm_loadu_si128((__m128i *)(&x[i + 0])); - inVec2_76543210 = _mm_loadu_si128((__m128i *)(&y[i + 0])); - - inVec1_FEDCBA98 = _mm_loadu_si128((__m128i *)(&x[i + 8])); - inVec2_FEDCBA98 = _mm_loadu_si128((__m128i *)(&y[i + 8])); - - inVec1_76543210 = _mm_madd_epi16(inVec1_76543210, inVec2_76543210); - inVec1_FEDCBA98 = _mm_madd_epi16(inVec1_FEDCBA98, inVec2_FEDCBA98); - - acc1 = _mm_add_epi32(acc1, inVec1_76543210); - acc2 = _mm_add_epi32(acc2, inVec1_FEDCBA98); - } +#if defined(OPUS_X86_MAY_HAVE_SSE) && !defined(FIXED_POINT)...
2015 Mar 02
13
Patch cleaning up Opus x86 intrinsics configury
The attached patch cleans up Opus's x86 intrinsics configury. It:
* Makes --enable-intrinsics work with clang and other non-GCC compilers
* Enables RTCD for the floating-point-mode SSE code in Celt.
* Disables use of RTCD in cases where the compiler targets an instruction set by default.
* Enables the SSE4.1 Silk optimizations that apply to the common parts of Silk when Opus is built in
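The RTCD points above are about when to pick a SIMD path at run time versus compile time. A hedged sketch of that idea, assuming hypothetical function names (not Opus's actual symbols) and the GCC/clang __builtin_cpu_supports builtin:

/* Choose the SSE kernel at run time unless the compiler already targets
 * SSE, in which case the dispatch collapses to a direct call. The "SSE"
 * body is a plain C stand-in so the sketch compiles on its own. */
static float xcorr_kernel_c(const float *x, const float *y, int n)
{
    float sum = 0;
    for (int i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;
}

static float xcorr_kernel_sse(const float *x, const float *y, int n)
{
    /* real code would use _mm_mul_ps/_mm_add_ps here */
    return xcorr_kernel_c(x, y, n);
}

static float xcorr_kernel(const float *x, const float *y, int n)
{
#if defined(__SSE__)
    return xcorr_kernel_sse(x, y, n);     /* ISA targeted by default: no RTCD needed */
#elif defined(__GNUC__)
    if (__builtin_cpu_supports("sse"))    /* run-time CPU detection */
        return xcorr_kernel_sse(x, y, n);
    return xcorr_kernel_c(x, y, n);
#else
    return xcorr_kernel_c(x, y, n);
#endif
}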
2015 Mar 18
5
[RFC PATCH v1 0/4] Enable aarch64 intrinsics/Ne10
Hi All, Since I continue to base my work on top of Jonathan's patch and my previous Ne10 fft/ifft/mdct_forward/backward patches, I thought it would be better to just post all new patches as a patch series. Please let me know if anyone disagrees with this approach. You can see the wip branch with all the latest patches at https://git.linaro.org/people/viswanath.puttagunta/opus.git Branch:
2015 Mar 31
6
[RFC PATCH v1 0/5] aarch64: celt_pitch_xcorr: Fixed point series
Hi Timothy, As I mentioned earlier [1], I have now fixed the compile issues with fixed point and am resubmitting the patch. I also have a new patch that adds intrinsics optimizations for celt_pitch_xcorr targeting aarch64. You can find my latest work-in-progress branch at [2]. For reference, you can use the Ne10 pre-built libraries at [3]. Note that I am working with Phil at ARM to get my patch at [4]
2015 May 08
8
[RFC PATCH v2]: Ne10 fft fixed and previous 0/8]
Hi All, As per Timothy's suggestion, this disables mdct_forward for fixed point. It only affects "armv7,armv8: Extend fixed fft NE10 optimizations to mdct". The rest of the patches are the same as in [1]. For reference, the latest wip code for opus is at [2]. Still working with the NE10 team at ARM on the corner cases of mdct_forward. Will update with another patch when the issue in NE10 gets fixed. Regards, Vish [1]:
2015 May 15
11
[RFC V3 0/8] Ne10 fft fixed and previous
Hi All, Changes from RFC v2 [1]:
armv7,armv8: Extend fixed fft NE10 optimizations to mdct
- Overflow issue fixed by Phil at ARM. Ne10 wip at [2]. Should be upstream soon.
- So, re-enabled using fixed fft for mdct_forward, which was disabled in RFCv2.
armv7,armv8: Optimize fixed point fft using NE10 library
- Thanks to Jonathan Lennox, fixed some build issues on iOS and some copy-paste errors. Rest
2015 Apr 28
10
[RFC PATCH v1 0/8] Ne10 fft fixed and previous
Hello Timothy / Jean-Marc / opus-dev, This patch series is a follow-up to the work I posted in [1]. In addition to what was posted in [1], this patch series mainly integrates the fixed-point FFT implementations in the NE10 library into Opus. You can view my opus wip code at [2]. Note that while I found some issues both with the NE10 library (fixed fft) and with the Linaro toolchain (armv8 intrinsics), the work
2020 May 18
6
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...this with generating sum_add32 above and save one _mm_add_epi16, + // but benchmarking shows that as being slower + __m128i add16 = sse_hadds_epi16(add16_1, add16_2); + + // [t1[0], t1[1], ...] -> [t1[0]*28 + t1[1]*24, ...] [int32*4] + __m128i mul32 = _mm_madd_epi16(add16, mul_t1); + + // [sum(mul32), X, X, X] [int32*4]; faster than multiple _mm_hadd_epi32 + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4)); + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8)); + + // s2 += 28*t1[0] + 24*t1[1] + 20*t1[2] + 1...
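The rsync hunk above uses the same instruction differently: madd against a vector of constant weights. Below is a standalone sketch of that trick, not the patch itself; the weights follow the decreasing pattern in the excerpt's comment and are illustrative, and the patch's sse_hadds_epi16 helper is not reproduced here.

#include <emmintrin.h>

/* Weighted sum of eight int16 terms: pairing the data with a constant
 * weight vector lets one _mm_madd_epi16 produce four int32 partial sums
 * of w[k]*t1[k]; a shift-and-add reduction then collapses them, which
 * the excerpt notes is faster than chaining _mm_hadd_epi32. */
static int weighted_sum_i16_sse2(const short t1[8])
{
    /* _mm_set_epi16 takes elements high-to-low, so lane 0 gets weight 28 */
    const __m128i w = _mm_set_epi16(0, 4, 8, 12, 16, 20, 24, 28);
    __m128i v     = _mm_loadu_si128((const __m128i *)t1);
    __m128i mul32 = _mm_madd_epi16(v, w);  /* [28*t1[0]+24*t1[1], ...] */
    mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8));
    mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4));
    return _mm_cvtsi128_si32(mul32);       /* total in lane 0 */
}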
2020 May 18
0
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...2 > above and save one _mm_add_epi16, > + // but benchmarking shows that as being slower > + __m128i add16 = sse_hadds_epi16(add16_1, add16_2); > + > + // [t1[0], t1[1], ...] -> [t1[0]*28 + t1[1]*24, ...] [int32*4] > + __m128i mul32 = _mm_madd_epi16(add16, mul_t1); > + > + // [sum(mul32), X, X, X] [int32*4]; faster than multiple > _mm_hadd_epi32 > + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4)); > + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8)); > + > + // s2 +=...
2020 May 19
5
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...this with generating sum_add32 above and save one _mm_add_epi16, + // but benchmarking shows that as being slower + __m128i add16 = sse_hadds_epi16(add16_1, add16_2); + + // [t1[0], t1[1], ...] -> [t1[0]*28 + t1[1]*24, ...] [int32*4] + __m128i mul32 = _mm_madd_epi16(add16, mul_t1); + + // [sum(mul32), X, X, X] [int32*4]; faster than multiple _mm_hadd_epi32 + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4)); + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8)); + + // s2 += 28*t1[0] + 24*t1[1] + 20*t1[2] + 1...
2020 May 18
2
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...e one _mm_add_epi16, >> + // but benchmarking shows that as being slower >> + __m128i add16 = sse_hadds_epi16(add16_1, add16_2); >> + >> + // [t1[0], t1[1], ...] -> [t1[0]*28 + t1[1]*24, ...] [int32*4] >> + __m128i mul32 = _mm_madd_epi16(add16, mul_t1); >> + >> + // [sum(mul32), X, X, X] [int32*4]; faster than multiple >> _mm_hadd_epi32 >> + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4)); >> + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8)); >> + ...
2020 May 20
0
[PATCHv2] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
...2 > above and save one _mm_add_epi16, > + // but benchmarking shows that as being slower > + __m128i add16 = sse_hadds_epi16(add16_1, add16_2); > + > + // [t1[0], t1[1], ...] -> [t1[0]*28 + t1[1]*24, ...] [int32*4] > + __m128i mul32 = _mm_madd_epi16(add16, mul_t1); > + > + // [sum(mul32), X, X, X] [int32*4]; faster than multiple > _mm_hadd_epi32 > + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 4)); > + mul32 = _mm_add_epi32(mul32, _mm_srli_si128(mul32, 8)); > + > + // s2 +=...
2020 May 18
3
[PATCH] SSE2/SSSE3 optimized version of get_checksum1() for x86-64
What do you base this on? Per https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html: "For the x86-32 compiler, you must use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default." That reads to me like we're fine for SSE2. As stated in my comments, SSSE3 support must be
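Concretely, the distinction being discussed could be expressed with the standard GCC/clang predefined macros; this is a hedged sketch, and the kernel names are hypothetical, not rsync's.

/* __SSE2__ is predefined by default for x86-64 targets, while __SSSE3__
 * only appears when the build explicitly enables SSSE3 (e.g. -mssse3),
 * so SSSE3 paths need their own guard or run-time detection. */
#if defined(__SSSE3__)
#include <tmmintrin.h>          /* SSSE3: _mm_hadd_epi32 etc. */
#define CHECKSUM_KERNEL checksum_ssse3
#elif defined(__SSE2__)
#include <emmintrin.h>          /* SSE2: _mm_madd_epi16 etc. */
#define CHECKSUM_KERNEL checksum_sse2
#else
#define CHECKSUM_KERNEL checksum_scalar
#endif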