search for: _mm_shuffle_ps

Displaying 18 results from an estimated 18 matches for "_mm_shuffle_ps".

2013 Jun 07
2
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
..._m128 xsum1 = _mm_loadu_ps(sum); __m128 xsum2 = _mm_setzero_ps(); for (j = 0; j < len-3; j += 4) { const __m128 x0 = _mm_loadu_ps(x+j); const __m128 y0 = _mm_loadu_ps(y+j); const __m128 y3 = _mm_loadu_ps(y+j+3); xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),y0)); xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x55),_mm_shuffle_ps(y0,y3,0x49))); xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xaa),_mm_shuffle_ps(y0,y3,0x9e))); xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xff),y3...
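The trick in this kernel is that _mm_shuffle_ps(a,b,imm) takes its two low result lanes from a (selected by imm bits 1:0 and 3:2) and its two high lanes from b (bits 5:4 and 7:6), so the immediates 0x00/0x55/0xaa/0xff broadcast single lanes of x0 while 0x49 and 0x9e stitch shifted windows of y out of the two overlapping loads. A minimal standalone check (not the Opus code itself) of what those immediates select:

/* Verifies the lane selection used by the xcorr kernel above. */
#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    float y[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    __m128 y0 = _mm_loadu_ps(y);      /* y[0..3] */
    __m128 y3 = _mm_loadu_ps(y + 3);  /* y[3..6] */
    float out[4];

    /* 0x49 = 01 00 10 01b: a[1], a[2], b[0], b[1] -> y[1..4] */
    _mm_storeu_ps(out, _mm_shuffle_ps(y0, y3, 0x49));
    printf("0x49: %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    /* 0x9e = 10 01 11 10b: a[2], a[3], b[1], b[2] -> y[2..5] */
    _mm_storeu_ps(out, _mm_shuffle_ps(y0, y3, 0x9e));
    printf("0x9e: %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    /* With a == b, masks 0x00 and 0x55 simply broadcast lane 0 or lane 1. */
    return 0;
}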
2013 Jun 07
0
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...m128 xsum2 = _mm_setzero_ps(); > > for (j = 0; j < len-3; j += 4) { > const __m128 x0 = _mm_loadu_ps(x+j); > const __m128 y0 = _mm_loadu_ps(y+j); > const __m128 y3 = _mm_loadu_ps(y+j+3); > > xsum1 = > _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),y0)); > xsum2 = > _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x55),_mm_shuffle_ps(y0,y3,0x49))); > xsum1 = > _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xaa),_mm_shuffle_ps(y0,y3,0x9e))); > xsum2 = > _mm_add_ps(xsum2,_mm_mul_ps...
2004 Aug 06
3
[PATCH] Make SSE Run Time option.
..._mm_setr_ps(_num[9], _num[10], 0, 0); den[2] = _mm_setr_ps(_den[9], _den[10], 0, 0); for (i=0;i<N;i++) { __m128 xx; __m128 yy; /* Compute next filter result */ xx = _mm_load_ps1(x+i); yy = _mm_add_ss(xx, mem[0]); _mm_store_ss(y+i, yy); yy = _mm_shuffle_ps(yy, yy, 0); /* Update memory */ mem[0] = _mm_move_ss(mem[0], mem[1]); mem[0] = _mm_shuffle_ps(mem[0], mem[0], 0x39); mem[0] = _mm_add_ps(mem[0], _mm_mul_ps(xx, num[0])); mem[0] = _mm_sub_ps(mem[0], _mm_mul_ps(yy, den[0])); mem[1] = _mm_move_ss(mem[1], me...
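The filter loop above uses _mm_shuffle_ps twice: with mask 0 to broadcast the freshly computed output yy to all four lanes, and with mask 0x39 to rotate the filter memory down one lane after mem[0] has been consumed. A small standalone snippet (not part of the patch) that just prints what those two immediates do:

#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    float m[4] = {10, 11, 12, 13};
    float out[4];
    __m128 mem = _mm_loadu_ps(m);

    /* Mask 0 replicates lane 0: {10,10,10,10}. */
    _mm_storeu_ps(out, _mm_shuffle_ps(mem, mem, 0));
    printf("broadcast: %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    /* Mask 0x39 = 00 11 10 01b rotates one lane down: {11,12,13,10}. */
    _mm_storeu_ps(out, _mm_shuffle_ps(mem, mem, 0x39));
    printf("rotate:    %g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}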
2004 Aug 06
5
[PATCH] Make SSE Run Time option.
> Personally, I don't think much of PNI. The complex arithmetic stuff they
> added sets you up for a lot of permute overhead that is inefficient --
> especially on a processor that is already weak on permute. In my opinion,

Actually, the new instructions make it possible to do complex multiplies without the need to permute and separate the add and subtract. The really useful
2015 Mar 12
2
[RFC PATCHv2] Intrinsics/RTCD related fixes. Mostly x86.
Nit: in dual_inner_prod_sse, why not do both horizontal sums at the same time? As in: xsum1 = _mm_add_ps(_mm_movelh_ps(xsum1, xsum2), _mm_movehl_ps(xsum2, xsum1)); xsum1 = _mm_add_ps(xsum1, _mm_shuffle_ps(xsum1, xsum1, 0xf5)); _mm_store_ss(xy1, xsum1); _mm_store_ss(xy2, _mm_movehl_ps(xsum1, xsum1)); --John
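For readers following the lane arithmetic, here is a small standalone check with made-up inputs (xy1/xy2 are locals here rather than the output pointers of dual_inner_prod_sse) that the suggested sequence leaves the first accumulator's total in lane 0 and the second's in lane 2:

#include <stdio.h>
#include <xmmintrin.h>

int main(void)
{
    float a[4] = {1, 2, 3, 4};       /* sum = 10 */
    float b[4] = {10, 20, 30, 40};   /* sum = 100 */
    __m128 xsum1 = _mm_loadu_ps(a);
    __m128 xsum2 = _mm_loadu_ps(b);
    float xy1, xy2;

    /* {a0,a1,b0,b1} + {a2,a3,b2,b3}: pairwise partial sums of both vectors */
    xsum1 = _mm_add_ps(_mm_movelh_ps(xsum1, xsum2),
                       _mm_movehl_ps(xsum2, xsum1));
    /* 0xf5 duplicates the odd lanes; adding finishes both reductions. */
    xsum1 = _mm_add_ps(xsum1, _mm_shuffle_ps(xsum1, xsum1, 0xf5));
    _mm_store_ss(&xy1, xsum1);
    _mm_store_ss(&xy2, _mm_movehl_ps(xsum1, xsum1));
    printf("xy1 = %g, xy2 = %g\n", xy1, xy2);  /* expect 10 and 100 */
    return 0;
}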
2015 Mar 13
1
[RFC PATCH v3] Intrinsics/RTCD related fixes. Mostly x86.
...int j; + __m128 xsum1, xsum2; + xsum1 = _mm_loadu_ps(sum); + xsum2 = _mm_setzero_ps(); + + for (j = 0; j < len-3; j += 4) + { + __m128 x0 = _mm_loadu_ps(x+j); + __m128 yj = _mm_loadu_ps(y+j); + __m128 y3 = _mm_loadu_ps(y+j+3); + + xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),yj)); + xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x55), + _mm_shuffle_ps(yj,y3,0x49))); + xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xaa), + _mm_shuffle_ps(yj,y3,0x...
2015 Mar 12
1
[RFC PATCHv2] Intrinsics/RTCD related fixes. Mostly x86.
...int j; + __m128 xsum1, xsum2; + xsum1 = _mm_loadu_ps(sum); + xsum2 = _mm_setzero_ps(); + + for (j = 0; j < len-3; j += 4) + { + __m128 x0 = _mm_loadu_ps(x+j); + __m128 yj = _mm_loadu_ps(y+j); + __m128 y3 = _mm_loadu_ps(y+j+3); + + xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),yj)); + xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x55), + _mm_shuffle_ps(yj,y3,0x49))); + xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xaa), + _mm_shuffle_ps(yj,y3,0x...
2009 Oct 26
1
[PATCH] Fix miscompile of SSE resampler
...nsigned int len) { int i; - float ret; __m128 sum = _mm_setzero_ps(); for (i=0;i<len;i+=8) { @@ -49,14 +48,12 @@ static inline float inner_product_single(const float *a, const float *b, unsigne } sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum)); sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55)); - _mm_store_ss(&ret, sum); - return ret; + _mm_store_ss(ret, sum); } #define OVERRIDE_INTERPOLATE_PRODUCT_SINGLE -static inline float interpolate_product_single(const float *a, const float *b, unsigned int len, const spx_uint32_t oversample, float *frac) { +static in...
2015 Mar 02
13
Patch cleaning up Opus x86 intrinsics configury
The attached patch cleans up Opus's x86 intrinsics configury. It: * Makes --enable-intrinsics work with clang and other non-GCC compilers * Enables RTCD for the floating-point-mode SSE code in Celt. * Disables use of RTCD in cases where the compiler targets an instruction set by default. * Enables the SSE4.1 Silk optimizations that apply to the common parts of Silk when Opus is built in
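RTCD here means run-time CPU detection: the SSE paths are only called if the running CPU supports them, and the check is skipped when the compiler already targets that instruction set unconditionally. The sketch below is a hypothetical illustration of that idea, not Opus's actual dispatch tables; inner_prod_c/inner_prod_sse are made-up names, and a real SSE body would live in a separately compiled file using the intrinsics shown elsewhere in these threads:

#include <stdio.h>

static float inner_prod_c(const float *a, const float *b, int n)
{
    int i;
    float s = 0;
    for (i = 0; i < n; i++) s += a[i] * b[i];
    return s;
}

/* Stand-in for the SSE version (plain C here so the sketch links). */
static float inner_prod_sse(const float *a, const float *b, int n)
{
    return inner_prod_c(a, b, n);
}

typedef float (*inner_prod_fn)(const float *, const float *, int);

static inner_prod_fn select_inner_prod(void)
{
#if defined(__SSE__) || defined(_M_X64)
    return inner_prod_sse;           /* ISA guaranteed by the compiler flags */
#elif defined(__GNUC__) && defined(__i386__)
    /* GCC/clang on 32-bit x86: detect SSE at run time. */
    return __builtin_cpu_supports("sse") ? inner_prod_sse : inner_prod_c;
#else
    return inner_prod_c;             /* no detection available: stay portable */
#endif
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {1, 1, 1, 1};
    printf("%g\n", select_inner_prod()(a, b, 4));  /* 10 */
    return 0;
}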
2004 Aug 06
2
Notes on 1.1.4 Windows. Testing of SSE Intrinsics Code and others
...xmmword ptr [yy],xmm1 261: _mm_store_ss(y+i, yy); 004134AB movaps xmm0,xmmword ptr [yy] 004134B2 mov eax,dword ptr [ebp-64h] 004134B5 mov ecx,dword ptr [ebx+10h] 004134B8 lea edx,[ecx+eax*4] 004134BB movss dword ptr [edx],xmm0 262: yy = _mm_shuffle_ps(yy, yy, 0); 004134BF movaps xmm0,xmmword ptr [yy] 004134C6 movaps xmm1,xmmword ptr [yy] 004134CD shufps xmm1,xmm0,0 004134D1 movaps xmmword ptr [yy],xmm1 263: 264: /* Update memory */ 265: mem[0] = _mm_move_ss(mem[0], mem[1]); 004134D8 movaps xmm0,xm...
2015 Mar 18
5
[RFC PATCH v1 0/4] Enable aarch64 intrinsics/Ne10
Hi All, Since I continue to base my work on top of Jonathan's patch, and my previous Ne10 fft/ifft/mdct_forward/backward patches, I thought it would be better to just post all new patches as a patch series. Please let me know if anyone disagrees with this approach. You can see a wip branch of all the latest patches at https://git.linaro.org/people/viswanath.puttagunta/opus.git Branch:
2015 Mar 31
6
[RFC PATCH v1 0/5] aarch64: celt_pitch_xcorr: Fixed point series
Hi Timothy, As I mentioned earlier [1], I have now fixed the compile issues with fixed point and am resubmitting the patch. I also have a new patch that does intrinsics optimizations for celt_pitch_xcorr targeting aarch64. You can find my latest work-in-progress branch at [2]. For reference, you can use the Ne10 pre-built libraries at [3]. Note that I am working with Phil at ARM to get my patch at [4]
2015 May 08
8
[RFC PATCH v2]: Ne10 fft fixed and previous 0/8]
Hi All, As per Timothy's suggestion, disabling mdct_forward for fixed point. Only affects armv7,armv8: Extend fixed fft NE10 optimizations to mdct. Rest of the patches are the same as in [1]. For reference, the latest wip code for opus is at [2]. Still working with the NE10 team at ARM on corner cases of mdct_forward. Will update with another patch when the issue in NE10 gets fixed. Regards, Vish [1]:
2015 May 15
11
[RFC V3 0/8] Ne10 fft fixed and previous
Hi All, Changes from RFC v2 [1]: armv7,armv8: Extend fixed fft NE10 optimizations to mdct - Overflow issue fixed by Phil at ARM. Ne10 wip at [2]. Should be upstream soon. - So, re-enabled using the fixed fft for mdct_forward, which was disabled in RFCv2. armv7,armv8: Optimize fixed point fft using NE10 library - Thanks to Jonathan Lennox, fixed some build issues on iOS and some copy-paste errors. Rest
2015 Apr 28
10
[RFC PATCH v1 0/8] Ne10 fft fixed and previous
Hello Timothy / Jean-Marc / opus-dev, This patch series is a follow-up on the work I posted at [1]. In addition to what was posted at [1], this patch series mainly integrates the fixed-point FFT implementations in the NE10 library into Opus. You can view my opus wip code at [2]. Note that while I found some issues both with the NE10 library (fixed fft) and with the Linaro toolchain (armv8 intrinsics), the work
2012 Jul 06
2
[LLVMdev] Excessive register spilling in large automatically generated functions, such as is found in FFTW
...///////////////////////////////////////// #include <xmmintrin.h> #define __INLINE static inline __attribute__((always_inline)) #define LOAD _mm_load_ps #define STORE _mm_store_ps #define ADD _mm_add_ps #define SUB _mm_sub_ps #define MULT _mm_mul_ps #define STREAM _mm_stream_ps #define SHUF _mm_shuffle_ps #define VLIT4(a,b,c,d) _mm_set_ps(a,b,c,d) #define SWAP(d) SHUF(d,d,_MM_SHUFFLE(2,3,0,1)) #define UNPACK2LO(a,b) SHUF(a,b,_MM_SHUFFLE(1,0,1,0)) #define UNPACK2HI(a,b) SHUF(a,b,_MM_SHUFFLE(3,2,3,2)) #define HALFBLEND(a,b) SHUF(a,b,_MM_SHUFFLE(3,2,1,0)) __INLINE void TX2(__m128 *a, __m128 *b) {...
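Those macros build every data-movement primitive out of a single _mm_shuffle_ps, with _MM_SHUFFLE(z,y,x,w) packing the four 2-bit lane selectors into one immediate, highest lane first. As a quick sanity check (not from the original post), the snippet below prints what SWAP and UNPACK2LO produce: SWAP exchanges each adjacent pair of lanes, which is the real/imaginary swap needed for interleaved complex data, and UNPACK2LO behaves like _mm_movelh_ps:

#include <stdio.h>
#include <xmmintrin.h>

#define SHUF _mm_shuffle_ps
#define SWAP(d)        SHUF(d, d, _MM_SHUFFLE(2, 3, 0, 1))
#define UNPACK2LO(a,b) SHUF(a, b, _MM_SHUFFLE(1, 0, 1, 0))

int main(void)
{
    float v[4] = {0, 1, 2, 3}, w[4] = {4, 5, 6, 7}, out[4];
    __m128 a = _mm_loadu_ps(v), b = _mm_loadu_ps(w);

    /* Swaps each adjacent pair: {1,0,3,2}. */
    _mm_storeu_ps(out, SWAP(a));
    printf("SWAP:      %g %g %g %g\n", out[0], out[1], out[2], out[3]);

    /* Low halves of a and b side by side: {0,1,4,5}. */
    _mm_storeu_ps(out, UNPACK2LO(a, b));
    printf("UNPACK2LO: %g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}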
2008 May 03
2
Resampler (no api)
..._mm_setzero_ps(); + for (i=0;i<len;i+=8) + { + sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i), _mm_loadu_ps(b+i))); + sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i+4), _mm_loadu_ps(b+i+4))); + } + sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum)); + sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55)); + _mm_store_ss(&ret, sum); + return ret; +} + +#define OVERRIDE_INTERPOLATE_PRODUCT_SINGLE +static inline float interpolate_product_single(const float *a, const float *b, unsigned int len, const spx_uint32_t oversample, float *frac) { + int i; + float ret; + __m128 sum...
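For reference, here is the quoted kernel as a minimal self-contained program; like the original loop it assumes len is a multiple of 8, and the two-step reduction at the end is the usual movehl + shuffle-0x55 horizontal sum:

#include <stdio.h>
#include <xmmintrin.h>

static float inner_product_sse(const float *a, const float *b, int len)
{
    int i;
    float ret;
    __m128 sum = _mm_setzero_ps();
    for (i = 0; i < len; i += 8)
    {
        sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i),   _mm_loadu_ps(b+i)));
        sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i+4), _mm_loadu_ps(b+i+4)));
    }
    /* Fold the high pair onto the low pair, then add lane 1 (mask 0x55
     * broadcasts it) onto lane 0. */
    sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum));
    sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55));
    _mm_store_ss(&ret, sum);
    return ret;
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    printf("%g\n", inner_product_sse(a, b, 8));  /* expect 36 */
    return 0;
}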
2008 May 03
0
Resampler, memory only variant
..._mm_setzero_ps(); + for (i=0;i<len;i+=8) + { + sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i), _mm_loadu_ps(b+i))); + sum = _mm_add_ps(sum, _mm_mul_ps(_mm_loadu_ps(a+i+4), _mm_loadu_ps(b+i+4))); + } + sum = _mm_add_ps(sum, _mm_movehl_ps(sum, sum)); + sum = _mm_add_ss(sum, _mm_shuffle_ps(sum, sum, 0x55)); + _mm_store_ss(&ret, sum); + return ret; +} + +#define OVERRIDE_INTERPOLATE_PRODUCT_SINGLE +static inline float interpolate_product_single(const float *a, const float *b, unsigned int len, const spx_uint32_t oversample, float *frac) { + int i; + float ret; + __m128 sum...