search for: vmlaq_lane_f32

Displaying 14 results from an estimated 14 matches for "vmlaq_lane_f32".

2014 Dec 19
2
[PATCH v1] cover: armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
Hi,

Optimizes celt_pitch_xcorr for ARM NEON floating point. Changes from RFCv3:
- celt_neon_intr.c
  - Removed warnings due to not having constant pointers
  - Put in a simpler loop to take care of corner cases; unrolling using intrinsics was not really mapping well to what was done in celt_pitch_xcorr_arm.s
- Makefile.am
  - Removed explicit -O3 optimization
- test_unit_mathops.c, ...
2014 Dec 19
2
[PATCH v1] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...
> + */
> + while (len > 8) {
> +    yi += 4;
> +    YY[1] = vld1q_f32(yi);
> +    yi += 4;
> +    YY[2] = vld1q_f32(yi);
> +
> +    XX[0] = vld1q_f32(xi);
> +    xi += 4;
> +    XX[1] = vld1q_f32(xi);
> +    xi += 4;
> +
> +    SUMM = vmlaq_lane_f32(SUMM, YY[0], vget_low_f32(XX[0]), 0);
> +    YEXT[0] = vextq_f32(YY[0], YY[1], 1);
> +    SUMM = vmlaq_lane_f32(SUMM, YEXT[0], vget_low_f32(XX[0]), 1);
> +    YEXT[1] = vextq_f32(YY[0], YY[1], 2);
> +    SUMM = vmlaq_lane_f32(SUMM, YEXT[1], vget_high_f32(XX[0]), 0);
> +...
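The excerpt above shows the kernel's central trick: four correlation lags are kept in one SUMM accumulator, vextq_f32 builds the y windows shifted by 1..3 positions out of two adjacent loads, and vmlaq_lane_f32 multiplies each window by a single x value selected by lane. A minimal self-contained sketch of that technique, with hypothetical names and a simplified contract (not the patch itself):

    /* Sketch of the quoted technique, assuming: sum[k] += x[i] * y[i + k]
     * for k = 0..3, len a multiple of 4, and at least 4 readable floats
     * past y[len - 1] (the patch's "len > 8" guard plus tail handling
     * exist to guarantee safe reads like these). */
    #include <arm_neon.h>

    static void xcorr4_neon_sketch(const float *x, const float *y,
                                   float *sum, int len)
    {
       int i;
       float32x4_t SUMM = vld1q_f32(sum);
       for (i = 0; i < len; i += 4) {
          float32x4_t XX  = vld1q_f32(x + i);      /* x[i..i+3]   */
          float32x4_t YY0 = vld1q_f32(y + i);      /* y[i..i+3]   */
          float32x4_t YY1 = vld1q_f32(y + i + 4);  /* y[i+4..i+7] */
          /* y windows shifted by 1..3, built without extra loads */
          float32x4_t YE1 = vextq_f32(YY0, YY1, 1);
          float32x4_t YE2 = vextq_f32(YY0, YY1, 2);
          float32x4_t YE3 = vextq_f32(YY0, YY1, 3);
          SUMM = vmlaq_lane_f32(SUMM, YY0, vget_low_f32(XX), 0);  /* += x[i]  *y[i+k]   */
          SUMM = vmlaq_lane_f32(SUMM, YE1, vget_low_f32(XX), 1);  /* += x[i+1]*y[i+1+k] */
          SUMM = vmlaq_lane_f32(SUMM, YE2, vget_high_f32(XX), 0); /* += x[i+2]*y[i+2+k] */
          SUMM = vmlaq_lane_f32(SUMM, YE3, vget_high_f32(XX), 1); /* += x[i+3]*y[i+3+k] */
       }
       vst1q_f32(sum, SUMM);
    }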
2014 Dec 07
0
[RFC PATCH v2] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...e accessed
+ * hence make sure len > 8 and not len >= 8
+ */
+ while (len > 8) {
+    yi += 4;
+    YY[1] = vld1q_f32(yi);
+    yi += 4;
+    YY[2] = vld1q_f32(yi);
+
+    XX[0] = vld1q_f32(xi);
+    xi += 4;
+    XX[1] = vld1q_f32(xi);
+    xi += 4;
+
+    SUMM = vmlaq_lane_f32(SUMM, YY[0], vget_low_f32(XX[0]), 0);
+    YEXT[0] = vextq_f32(YY[0], YY[1], 1);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[0], vget_low_f32(XX[0]), 1);
+    YEXT[1] = vextq_f32(YY[0], YY[1], 2);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[1], vget_high_f32(XX[0]), 0);
+    YEXT[2] = vextq_f32(YY[0]...
2014 Dec 07
2
[RFC PATCH v2] cover: armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
Hi,

Optimizes celt_pitch_xcorr for floating point. Changes from RFCv1:
- Rebased on top of commit aad281878 ("Fix celt_pitch_xcorr_c signature"), which got rid of ugly code around the CELT_PITCH_XCORR_IMPL passing of the "arch" parameter
- Unified with --enable-intrinsics as used by x86
- Modified the algorithm to be more in line with the algorithm in celt_pitch_xcorr_arm.s

Viswanath Puttagunta
2014 Dec 10
0
[RFC PATCH v3] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...e accessed
+ * hence make sure len > 8 and not len >= 8
+ */
+ while (len > 8) {
+    yi += 4;
+    YY[1] = vld1q_f32(yi);
+    yi += 4;
+    YY[2] = vld1q_f32(yi);
+
+    XX[0] = vld1q_f32(xi);
+    xi += 4;
+    XX[1] = vld1q_f32(xi);
+    xi += 4;
+
+    SUMM = vmlaq_lane_f32(SUMM, YY[0], vget_low_f32(XX[0]), 0);
+    YEXT[0] = vextq_f32(YY[0], YY[1], 1);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[0], vget_low_f32(XX[0]), 1);
+    YEXT[1] = vextq_f32(YY[0], YY[1], 2);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[1], vget_high_f32(XX[0]), 0);
+    YEXT[2] = vextq_f32(YY[0]...
2014 Dec 10
2
[RFC PATCH v3] cover: armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
Hi,

Optimizes celt_pitch_xcorr for floating point. Changes from RFCv2:
- celt_neon_intr.c: applied everything Timothy recommended, except that the unrolled loop is left unrolled
- configure.ac: use AC_LINK_IFELSE instead of AC_COMPILE_IFELSE
- Moved compile flags into Makefile.am
- Fixed typo: OPUS_ARM_NEON_INR --> OPUS_ARM_NEON_INTR

Viswanath Puttagunta (1):
  armv7: ...
2014 Dec 07
3
[RFC PATCH v2] cover: armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
From: Viswanath Puttagunta <viswanath.puttagunta at linaro.org>

Hi,

Optimizes celt_pitch_xcorr for floating point. Changes from RFCv1:
- Rebased on top of commit aad281878 ("Fix celt_pitch_xcorr_c signature"), which got rid of ugly code around the CELT_PITCH_XCORR_IMPL passing of the "arch" parameter
- Unified with --enable-intrinsics as used by x86
- Modified algorithm to be more...
2014 Dec 19
0
[PATCH v1] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...e accessed
+ * hence make sure len > 8 and not len >= 8
+ */
+ while (len > 8) {
+    yi += 4;
+    YY[1] = vld1q_f32(yi);
+    yi += 4;
+    YY[2] = vld1q_f32(yi);
+
+    XX[0] = vld1q_f32(xi);
+    xi += 4;
+    XX[1] = vld1q_f32(xi);
+    xi += 4;
+
+    SUMM = vmlaq_lane_f32(SUMM, YY[0], vget_low_f32(XX[0]), 0);
+    YEXT[0] = vextq_f32(YY[0], YY[1], 1);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[0], vget_low_f32(XX[0]), 1);
+    YEXT[1] = vextq_f32(YY[0], YY[1], 2);
+    SUMM = vmlaq_lane_f32(SUMM, YEXT[1], vget_high_f32(XX[0]), 0);
+    YEXT[2] = vextq_f32(YY[0]...
2014 Dec 09
1
[RFC PATCH v2] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...> + /* Just unroll the rest of the loop */

If you're not going to special case the last 2+1+1 samples, is there a measurable performance difference compared to simply looping?

> + yi++;
> + switch(len) {
> + case 4:
> +    XX_2 = vld1_dup_f32(xi++);
> +    SUMM = vmlaq_lane_f32(SUMM, YY[0], XX_2, 0);
> +    YY[0] = vld1q_f32(yi++);
> + case 3:
> +    XX_2 = vld1_dup_f32(xi++);
> +    SUMM = vmlaq_lane_f32(SUMM, YY[0], XX_2, 0);
> +    YY[0] = vld1q_f32(yi++);
> + case 2:
> +    XX_2 = vld1_dup_f32(xi++);
> +    SUMM = vmlaq_lane_f32...
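The "simply looping" alternative the reviewer asks about would replace the fall-through switch with one generic tail loop. A hypothetical sketch, following the conventions of the quoted code (XX_2 broadcasts a single x sample; each step reads 4 y samples, so the caller must leave enough slack, as the patch's "len > 8" guard does):

    #include <arm_neon.h>

    /* Hypothetical generic tail loop: one broadcast x sample per iteration,
     * all four lags updated at once. Reads y[j..j+3] each step, so the
     * caller must guarantee rem + 3 readable y samples. */
    static float32x4_t xcorr_tail_sketch(const float *xi, const float *yi,
                                         float32x4_t SUMM, int rem)
    {
       while (rem-- > 0) {
          float32x4_t YY   = vld1q_f32(yi++);       /* sliding 4-wide y window */
          float32x2_t XX_2 = vld1_dup_f32(xi++);    /* broadcast one x sample  */
          SUMM = vmlaq_lane_f32(SUMM, YY, XX_2, 0); /* update all 4 lags       */
       }
       return SUMM;
    }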
2014 Nov 28
2
[RFC PATCHv1] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...YY[4], 1);
> + YY[2] = vextq_f32(YY[0], YY[4], 2);
> + YY[3] = vextq_f32(YY[0], YY[4], 3);
> +
> + XX[0] = vld1q_dup_f32(xi++);
> + XX[1] = vld1q_dup_f32(xi++);
> + XX[2] = vld1q_dup_f32(xi++);
> + XX[3] = vld1q_dup_f32(xi++);

Don't do this. Do a single load and use vmlaq_lane_f32() to multiply by each value. That should cut at least 5 cycles out of this loop.

> +
> + SUMM[0] = vmlaq_f32(SUMM[0], XX[0], YY[0]);
> + SUMM[1] = vmlaq_f32(SUMM[1], XX[1], YY[1]);
> + SUMM[2] = vmlaq_f32(SUMM[2], XX[2], YY[2]);
> + SUMM[3] = vmlaq_f32(SUMM[3], XX[3], YY[3]);
...
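The suggested rewrite amounts to one vld1q_f32 in place of the four vld1q_dup_f32 broadcasts, with vmlaq_lane_f32 selecting each x value by lane. A sketch of that transformation, reusing the XX/YY/SUMM names from the quoted patch (illustrative, not the committed code):

    #include <arm_neon.h>

    /* One load of x[0..3], then lane selection, instead of four
     * vld1q_dup_f32 broadcasts. */
    static void mla_by_lane_sketch(float32x4_t SUMM[4], const float32x4_t YY[4],
                                   const float *xi)
    {
       float32x4_t XX = vld1q_f32(xi); /* single load replaces 4 dup-loads */
       SUMM[0] = vmlaq_lane_f32(SUMM[0], YY[0], vget_low_f32(XX), 0);  /* x[0] */
       SUMM[1] = vmlaq_lane_f32(SUMM[1], YY[1], vget_low_f32(XX), 1);  /* x[1] */
       SUMM[2] = vmlaq_lane_f32(SUMM[2], YY[2], vget_high_f32(XX), 0); /* x[2] */
       SUMM[3] = vmlaq_lane_f32(SUMM[3], YY[3], vget_high_f32(XX), 1); /* x[3] */
    }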
2014 Dec 01
0
[RFC PATCHv1] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...extq_f32(YY[0], YY[4], 3);
>> +
>> + XX[0] = vld1q_dup_f32(xi++);
>> + XX[1] = vld1q_dup_f32(xi++);
>> + XX[2] = vld1q_dup_f32(xi++);
>> + XX[3] = vld1q_dup_f32(xi++);
>
> Don't do this. Do a single load and use vmlaq_lane_f32() to multiply by
> each value. That should cut at least 5 cycles out of this loop.
>
>> +
>> + SUMM[0] = vmlaq_f32(SUMM[0], XX[0], YY[0]);
>> + SUMM[1] = vmlaq_f32(SUMM[1], XX[1], YY[1]);
>> + SUMM[2] = vmlaq_f32(SUMM[2], XX[2], YY[2...
2017 Mar 23
0
[PATCH] Use NEON intrinsics detection that fails with gcc 4.8.
...gure.ac
index fca746f..f945b9a 100644
--- a/configure.ac
+++ b/configure.ac
@@ -471,7 +471,7 @@ AS_IF([test x"$enable_intrinsics" = x"yes"],[
   ]], [[
     static float32x4_t A0, A1, SUMM;
-    SUMM = vmlaq_f32(SUMM, A0, A1);
+    SUMM = vmlaq_lane_f32(SUMM, A0, vget_low_f32(A1), 0);
     return (int)vgetq_lane_f32(SUMM, 0);
   ]]
 )
--
2.9.3
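Per the patch subject, the point of the change is that the check should fail with gcc 4.8, presumably because that compiler mishandles vmlaq_lane_f32; a failing link test makes configure disable the NEON intrinsics path there. The AC_LINK_IFELSE body boils down to roughly the following standalone program (a hypothetical reconstruction, not the literal generated conftest):

    #include <arm_neon.h>

    /* If this fails to build and link, configure falls back to the
     * non-intrinsics path. */
    int main(void)
    {
       static float32x4_t A0, A1, SUMM;
       SUMM = vmlaq_lane_f32(SUMM, A0, vget_low_f32(A1), 0);
       return (int)vgetq_lane_f32(SUMM, 0);
    }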
2014 Nov 21
4
[RFC PATCHv1] cover: celt_pitch_xcorr: Introduce ARM neon intrinsics
Hello,

I received feedback from engineers working on NE10 [1] that it would be better to use NE10 for FFT optimizations for Opus use cases. However, those FFT patches are currently in review and haven't been integrated into NE10 yet. While the FFT functions in NE10 are getting baked, I wanted to optimize celt_pitch_xcorr (floating point only) and use it to introduce ARM NEON...
2014 Nov 21
0
[RFC PATCHv1] armv7: celt_pitch_xcorr: Introduce ARM neon intrinsics
...], XX[1], YY[1]);
+    SUMM[2] = vmlaq_f32(SUMM[2], XX[2], YY[2]);
+    SUMM[3] = vmlaq_f32(SUMM[3], XX[3], YY[3]);
+    YY[0] = YY[4];
+ }
+
+ /* Handle remaining values max iterations = 3 */
+ for (j = 0; j < cr; j++) {
+    YY[0] = vld1q_f32(yi++);
+    XX_2 = vld1_lane_f32(xi++, XX_2, 0);
+    SUMM[0] = vmlaq_lane_f32(SUMM[0], YY[0], XX_2, 0);
+ }
+
+ SUMM[0] = vaddq_f32(SUMM[0], SUMM[1]);
+ SUMM[2] = vaddq_f32(SUMM[2], SUMM[3]);
+ SUMM[0] = vaddq_f32(SUMM[0], SUMM[2]);
+
+ vst1q_f32(sum, SUMM[0]);
+}
+
+void celt_pitch_xcorr_float_neon(const opus_val16 *_x, const opus_val16 *_y,
+                                 opus_val32 *xcorr, int len,...
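The tail of this excerpt is a plain reduction tree: the four partial accumulators are merged pairwise with vaddq_f32, leaving one correlation value per lane, and vst1q_f32 writes all four results at once. Restated as a tiny helper with hypothetical names:

    #include <arm_neon.h>

    /* Merge the four partial accumulators, then store one result per lag. */
    static void reduce_and_store_sketch(float32x4_t SUMM[4], float *sum)
    {
       SUMM[0] = vaddq_f32(SUMM[0], SUMM[1]); /* partials 0 + 1 */
       SUMM[2] = vaddq_f32(SUMM[2], SUMM[3]); /* partials 2 + 3 */
       SUMM[0] = vaddq_f32(SUMM[0], SUMM[2]); /* 4 lags, one per lane */
       vst1q_f32(sum, SUMM[0]);               /* sum[0..3] = 4 xcorr values */
    }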