search for: _mm_mul_pd

Displaying 7 results from an estimated 7 matches for "_mm_mul_pd".

2004 Aug 06
2
[PATCH] Make SSE Run Time option.
...__m128d Ar, __m128d Ai, __m128d Br, __m128d Bi )
{
    // http://mathworld.wolfram.com/ComplexMultiplication.html
    // Cr = Ar * Br - Ai * Bi
    // Ci = Ai * Br + Ar * Bi
    __m128d real = _mm_mul_pd( Ar, Br );
    __m128d imag = _mm_mul_pd( Ai, Br );
    Ai = _mm_mul_pd( Ai, Bi );
    Ar = _mm_mul_pd( Ar, Bi );
    real = _mm_sub_pd( real, Ai );
    imag = _mm_add_pd( imag, Ar );
    *Cr = real;
    *Ci = im...
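The snippet above is cut off; the full routine appears in the quoted reply in the 2004 Aug 06 thread below. For reference, a self-contained restatement of it with a small driver loop (the driver, its array names, and the unaligned loads are illustrative additions, not part of the original patch):

#include <stddef.h>
#include <emmintrin.h>  /* SSE2 */

/* Split-format complex multiply, two doubles per lane, as quoted below:
 * Cr = Ar*Br - Ai*Bi, Ci = Ai*Br + Ar*Bi (no permutes needed). */
static inline void ComplexMultiply(__m128d *Cr, __m128d *Ci,
                                   __m128d Ar, __m128d Ai,
                                   __m128d Br, __m128d Bi)
{
    __m128d real = _mm_mul_pd(Ar, Br);
    __m128d imag = _mm_mul_pd(Ai, Br);
    Ai = _mm_mul_pd(Ai, Bi);
    Ar = _mm_mul_pd(Ar, Bi);
    real = _mm_sub_pd(real, Ai);
    imag = _mm_add_pd(imag, Ar);
    *Cr = real;
    *Ci = imag;
}

/* Illustrative driver: multiply n complex values kept in separate
 * real/imaginary arrays, two at a time (n assumed even here). */
static void complex_multiply_arrays(double *cr, double *ci,
                                    const double *ar, const double *ai,
                                    const double *br, const double *bi,
                                    size_t n)
{
    for (size_t k = 0; k + 2 <= n; k += 2) {
        __m128d Cr, Ci;
        ComplexMultiply(&Cr, &Ci,
                        _mm_loadu_pd(ar + k), _mm_loadu_pd(ai + k),
                        _mm_loadu_pd(br + k), _mm_loadu_pd(bi + k));
        _mm_storeu_pd(cr + k, Cr);
        _mm_storeu_pd(ci + k, Ci);
    }
}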
2008 Nov 26
1
SSE2 code won't compile in VC
...}
-   sum = _mm_add_sd(sum, (__m128d) _mm_movehl_ps((__m128) sum, (__m128) sum));
+   sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
    _mm_store_sd(&ret, sum);
    return ret;
 }
@@ -120,7 +120,7 @@ static inline double interpolate_product_double(const float *a, const float *b,
    sum1 = _mm_mul_pd(f1, sum1);
    sum2 = _mm_mul_pd(f2, sum2);
    sum = _mm_add_pd(sum1, sum2);
-   sum = _mm_add_sd(sum, (__m128d) _mm_movehl_ps((__m128) sum, (__m128) sum));
+   sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
    _mm_store_sd(&ret, sum);
    return ret;
 }
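The change replaces C-style casts between __m128 and __m128d, which MSVC does not accept, with _mm_unpackhi_pd, which both compilers take. A minimal sketch of the resulting horizontal add (the helper name hsum_pd is hypothetical):

#include <emmintrin.h>  /* SSE2 */

/* Sum the low and high lanes of a __m128d without casting between
 * vector types; _mm_unpackhi_pd duplicates the high lane into the low
 * position so _mm_add_sd can add it to the original low lane. */
static inline double hsum_pd(__m128d sum)
{
    double ret;
    sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
    _mm_store_sd(&ret, sum);
    return ret;
}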
2004 Aug 06
0
[PATCH] Make SSE Run Time option.
...> inline void ComplexMultiply( __m128d *Cr, __m128d *Ci,
>                                __m128d Ar, __m128d Ai,
>                                __m128d Br, __m128d Bi )
> {
>     // http://mathworld.wolfram.com/ComplexMultiplication.html
>     // Cr = Ar * Br - Ai * Bi
>     // Ci = Ai * Br + Ar * Bi
>
>     __m128d real = _mm_mul_pd( Ar, Br );
>     __m128d imag = _mm_mul_pd( Ai, Br );
>
>     Ai = _mm_mul_pd( Ai, Bi );
>     Ar = _mm_mul_pd( Ar, Bi );
>
>     real = _mm_sub_pd( real, Ai );
>     imag = _mm_add_pd( imag, Ar );
>
>     *Cr = real;
>     *Ci = imag;
> }
>
> No permute is required. T...
2004 Aug 06
5
[PATCH] Make SSE Run Time option.
> Personally, I don't think much of PNI. The complex arithmetic stuff they
> added sets you up for a lot of permute overhead that is inefficient --
> especially on a processor that is already weak on permute. In my opinion,

Actually, the new instructions make it possible to do complex multiplies
without the need to permute and separate the add and subtract. The really useful
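PNI here is the Prescott New Instructions set, i.e. SSE3, whose addsubpd instruction folds the real-lane subtract and imaginary-lane add into a single operation. A minimal sketch of an interleaved [re, im] complex multiply built on it (the function name and packing convention are illustrative, assuming <pmmintrin.h>):

#include <pmmintrin.h>  /* SSE3 */

/* a = [ar, ai], b = [br, bi]; returns [ar*br - ai*bi, ai*br + ar*bi]. */
static inline __m128d complex_mul_sse3(__m128d a, __m128d b)
{
    __m128d br = _mm_movedup_pd(b);              /* [br, br] (SSE3)        */
    __m128d bi = _mm_unpackhi_pd(b, b);          /* [bi, bi]               */
    __m128d a_swapped = _mm_shuffle_pd(a, a, 1); /* [ai, ar]               */
    __m128d t = _mm_mul_pd(a, br);               /* [ar*br, ai*br]         */
    __m128d u = _mm_mul_pd(a_swapped, bi);       /* [ai*bi, ar*bi]         */
    return _mm_addsub_pd(t, u);                  /* subtract low, add high */
}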
2009 Oct 26
1
[PATCH] Fix miscompile of SSE resampler
...st float *b, unsigned int len, const spx_uint32_t oversample, float *frac)
 {
    int i;
-   double ret;
    __m128d sum;
    __m128d sum1 = _mm_setzero_pd();
    __m128d sum2 = _mm_setzero_pd();
@@ -121,8 +114,7 @@ static inline double interpolate_product_double(const float *a, const float *b,
    sum2 = _mm_mul_pd(f2, sum2);
    sum = _mm_add_pd(sum1, sum2);
    sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
-   _mm_store_sd(&ret, sum);
-   return ret;
+   _mm_store_sd(ret, sum);
 }
 #endif
--
1.6.4.msysgit.0.19.gd78f4
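The visible hunks drop the local ret and the return statement and store through a pointer named ret instead. A minimal sketch of that out-parameter shape (the helper below and its signature are assumptions for illustration, not the patch's actual function):

#include <emmintrin.h>  /* SSE2 */

/* Horizontally sum the two accumulators and write the double result
 * through a pointer rather than returning it. Name and argument list
 * are illustrative only. */
static inline void store_product_sum(double *ret, __m128d sum1, __m128d sum2)
{
    __m128d sum = _mm_add_pd(sum1, sum2);
    sum = _mm_add_sd(sum, _mm_unpackhi_pd(sum, sum));
    _mm_store_sd(ret, sum);
}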
2008 May 03
2
Resampler (no api)
...dd_pd(sum1, _mm_cvtps_pd(t));
+      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
+
+      t = _mm_mul_ps(_mm_load1_ps(a+i+1), _mm_loadu_ps(b+(i+1)*oversample));
+      sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));
+      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
+   }
+   sum1 = _mm_mul_pd(f1, sum1);
+   sum2 = _mm_mul_pd(f2, sum2);
+   sum = _mm_add_pd(sum1, sum2);
+   sum = _mm_add_sd(sum, (__m128d) _mm_movehl_ps((__m128) sum, (__m128) sum));
+   _mm_store_sd(&ret, sum);
+   return ret;
+}
+
+#endif
Index: libspeex/resample.c
=========================================================...
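The added SSE2 path accumulates the single-precision products in double precision. A minimal sketch of the widening step used in the loop above (the helper and its name are illustrative):

#include <emmintrin.h>  /* SSE2 */

/* Split a __m128 of four float products into two double-precision
 * accumulators: the low pair widens directly, the high pair is moved
 * down with _mm_movehl_ps before conversion. */
static inline void accumulate_wide(__m128 t, __m128d *sum1, __m128d *sum2)
{
    *sum1 = _mm_add_pd(*sum1, _mm_cvtps_pd(t));
    *sum2 = _mm_add_pd(*sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
}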
2008 May 03
0
Resampler, memory only variant
...dd_pd(sum1, _mm_cvtps_pd(t));
+      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
+
+      t = _mm_mul_ps(_mm_load1_ps(a+i+1), _mm_loadu_ps(b+(i+1)*oversample));
+      sum1 = _mm_add_pd(sum1, _mm_cvtps_pd(t));
+      sum2 = _mm_add_pd(sum2, _mm_cvtps_pd(_mm_movehl_ps(t, t)));
+   }
+   sum1 = _mm_mul_pd(f1, sum1);
+   sum2 = _mm_mul_pd(f2, sum2);
+   sum = _mm_add_pd(sum1, sum2);
+   sum = _mm_add_sd(sum, (__m128d) _mm_movehl_ps((__m128) sum, (__m128) sum));
+   _mm_store_sd(&ret, sum);
+   return ret;
+}
+
+#endif
Index: libspeex/resample.c
=========================================================...