Displaying 18 results from an estimated 18 matches for "vmlal_s16".
2013 Jun 07
1
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...int32x4_t xsum1 = vld1q_s32(sum);
int32x4_t xsum2 = vdupq_n_s32(0);
for (j = 0; j < len-3; j += 4) {
   int16x4_t x0 = vld1_s16(x+j);
   int16x4_t y0 = vld1_s16(y+j);
   int16x4_t y3 = vld1_s16(y+j+3);
   int16x4_t y4 = vext_s16(y3,y3,1);
   xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,0),y0);
   xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,1),vext_s16(y0,y4,1));
   xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,2),vext_s16(y0,y4,2));
   xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,3),y3);
}
if (j < len) {
   xsum1 = vmlal_s16(xsum1,v...
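For readers skimming the excerpt: this is the four-lags-at-a-time xcorr_kernel variant, broadcasting each x lane with vdup_lane_s16 and sliding the y window with vext_s16. A minimal scalar sketch of what it computes (simplified int16_t/int32_t types, not the Opus source itself):

#include <stdint.h>

/* Scalar reference: sum[k] += sum over j of x[j]*y[j+k] for k = 0..3.
   Note that y must be readable up to y[len+2]. */
static void xcorr_kernel_ref(const int16_t *x, const int16_t *y,
                             int32_t sum[4], int len)
{
   int j, k;
   for (j = 0; j < len; j++)
      for (k = 0; k < 4; k++)
         sum[k] += (int32_t)x[j] * y[j + k];
}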
2013 Jun 07
2
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
Hi JM,
I have no doubt that Mr. Zanelli's NEON code is faster, since hand-tuned
assembly is bound to be faster than using intrinsics. However, I notice
that his code can also read past the y buffer.
Cheers,
--John
On 6/6/2013 9:22 PM, Jean-Marc Valin wrote:
> Hi John,
>
> Thanks for the two fixes. They're in git now. Your SSE version seems to
> also be slightly faster than
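To make the overread concern concrete, here is an illustrative helper (not from the thread) that computes the largest y index the four-wide loop quoted above can touch; it shows why y needs len+3 valid samples:

/* y3 = vld1_s16(y+j+3) reads lanes y[j+3..j+6], and the loop runs while
   j < len-3, so the last iteration can reach y[(len-4)+6] = y[len+2].
   A y buffer shorter than len+3 samples is therefore overread. */
static int max_y_index_touched(int len)
{
   int j, maxi = -1;          /* -1 if the vector loop never runs */
   for (j = 0; j < len - 3; j += 4)
      maxi = j + 6;           /* top lane of the y3 load */
   return maxi;
}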
2015 May 15
0
[RFC V3 5/8] aarch64: celt_pitch_xcorr: Fixed point intrinsics
...+ SUMM = vdupq_n_s32(0);
+
+ /* Work on 16 values per iteration */
+ while (len >= 16) {
+ XX[0] = vld1q_s16(xi);
+ xi += 8;
+ XX[1] = vld1q_s16(xi);
+ xi += 8;
+
+ YY[0] = vld1q_s16(yi);
+ yi += 8;
+ YY[1] = vld1q_s16(yi);
+ yi += 8;
+
+ SUMM = vmlal_s16(SUMM, vget_low_s16(YY[0]), vget_low_s16(XX[0]));
+ SUMM = vmlal_s16(SUMM, vget_high_s16(YY[0]), vget_high_s16(XX[0]));
+ SUMM = vmlal_s16(SUMM, vget_low_s16(YY[1]), vget_low_s16(XX[1]));
+ SUMM = vmlal_s16(SUMM, vget_high_s16(YY[1]), vget_high_s16(XX[1]));
+
+ len -= 16;
+ }
+...
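The excerpt cuts off before the reduction step. A hedged sketch (not the patch's actual code) of how the four 32-bit partial sums in SUMM would typically be folded into a single inner-product value, using vaddvq_s32 on AArch64 and a pairwise add elsewhere:

#include <arm_neon.h>
#include <stdint.h>

static int32_t reduce_summ(int32x4_t SUMM)
{
#if defined(__aarch64__)
   return vaddvq_s32(SUMM);        /* across-lane add, AArch64 only */
#else
   int32x2_t s = vadd_s32(vget_low_s32(SUMM), vget_high_s32(SUMM));
   s = vpadd_s32(s, s);            /* pairwise add of the two halves */
   return vget_lane_s32(s, 0);
#endif
}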
2015 May 08
0
[[RFC PATCH v2]: Ne10 fft fixed and previous 5/8] aarch64: celt_pitch_xcorr: Fixed point intrinsics
...+ SUMM = vdupq_n_s32(0);
+
+ /* Work on 16 values per iteration */
+ while (len >= 16) {
+ XX[0] = vld1q_s16(xi);
+ xi += 8;
+ XX[1] = vld1q_s16(xi);
+ xi += 8;
+
+ YY[0] = vld1q_s16(yi);
+ yi += 8;
+ YY[1] = vld1q_s16(yi);
+ yi += 8;
+
+ SUMM = vmlal_s16(SUMM, vget_low_s16(YY[0]), vget_low_s16(XX[0]));
+ SUMM = vmlal_s16(SUMM, vget_high_s16(YY[0]), vget_high_s16(XX[0]));
+ SUMM = vmlal_s16(SUMM, vget_low_s16(YY[1]), vget_low_s16(XX[1]));
+ SUMM = vmlal_s16(SUMM, vget_high_s16(YY[1]), vget_high_s16(XX[1]));
+
+ len -= 16;
+ }
+...
2013 Jun 07
2
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...opus in
ARM):
#include <arm_neon.h>

static inline void xcorr_kernel(const opus_val16 *x, const opus_val16 *y,
                                opus_val32 sum[4], int len)
{
   int j;
   int32x4_t xsum1 = vld1q_s32(sum);
   int32x4_t xsum2 = vdupq_n_s32(0);
   for (j = 0; j < len-1; j += 2) {
      xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x++),vld1_s16(y++));
      xsum2 = vmlal_s16(xsum2,vdup_n_s16(*x++),vld1_s16(y++));
   }
   if (j < len) {
      xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x),vld1_s16(y));
   }
   vst1q_s32(sum,vaddq_s32(xsum1,xsum2));
}
Cheers,
John Ridges
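Since this post quotes a complete kernel, here is a hedged sketch of how a pitch-lag search might drive it, four lags per call; the driver name and the opus_val16/opus_val32 typedefs are illustrative assumptions for a fixed-point build, not the actual celt_pitch_xcorr:

#include <stdint.h>

typedef int16_t opus_val16;   /* fixed-point build assumption */
typedef int32_t opus_val32;

static void pitch_xcorr_sketch(const opus_val16 *x, const opus_val16 *y,
                               opus_val32 *xcorr, int len, int max_pitch)
{
   int i;
   /* Four correlation lags per call to the kernel quoted above. */
   for (i = 0; i + 3 < max_pitch; i += 4) {
      opus_val32 sum[4] = {0, 0, 0, 0};
      xcorr_kernel(x, y + i, sum, len);
      xcorr[i]   = sum[0];
      xcorr[i+1] = sum[1];
      xcorr[i+2] = sum[2];
      xcorr[i+3] = sum[3];
   }
   /* Scalar tail for any leftover lags. */
   for (; i < max_pitch; i++) {
      opus_val32 s = 0;
      int j;
      for (j = 0; j < len; j++)
         s += (opus_val32)x[j] * y[j + i];
      xcorr[i] = s;
   }
}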
2013 Jun 10
0
opus Digest, Vol 53, Issue 2
...int32x4_t xsum1 = vld1q_s32(sum);
int32x4_t xsum2 = vdupq_n_s32(0);
for (j = 0; j < len-3; j += 4) {
   int16x4_t x0 = vld1_s16(x+j);
   int16x4_t y0 = vld1_s16(y+j);
   int16x4_t y3 = vld1_s16(y+j+3);
   int16x4_t y4 = vext_s16(y3,y3,1);
   xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,0),y0);
   xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,1),vext_s16(y0,y4,1));
   xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,2),vext_s16(y0,y4,2));
   xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,3),y3);
}
if (j < len) {
   xsum1 = vmlal_s16(xsum1,v...
2013 Jun 07
0
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...> static inline void xcorr_kernel(const opus_val16 *x, const opus_val16
> *y, opus_val32 sum[4], int len)
> {
> int j;
> int32x4_t xsum1 = vld1q_s32(sum);
> int32x4_t xsum2 = vdupq_n_s32(0);
>
> for (j = 0; j < len-1; j += 2) {
> xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x++),vld1_s16(y++));
> xsum2 = vmlal_s16(xsum2,vdup_n_s16(*x++),vld1_s16(y++));
> }
> if (j < len) {
> xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x),vld1_s16(y));
> }
> vst1q_s32(sum,vaddq_s32(xsum1,xsum2));
> }
>
>
...
2016 Sep 13
4
[PATCH 12/15] Replace call of celt_inner_prod_c() (step 1)
Should call celt_inner_prod().
---
 celt/bands.c                   | 7 ++++---
 celt/bands.h                   | 2 +-
 celt/celt_encoder.c            | 6 +++---
 celt/pitch.c                   | 2 +-
 src/opus_multistream_encoder.c | 2 +-
 5 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/celt/bands.c b/celt/bands.c
index bbe8a4c..1ab24aa 100644
--- a/celt/bands.c
+++ b/celt/bands.c
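For context on what this change looks like at a call site, a hypothetical before/after (not a hunk from this diff), assuming the post-patch wrapper signature celt_inner_prod(x, y, N, arch) declared in celt/pitch.h:

#include "pitch.h"   /* celt_inner_prod() / celt_inner_prod_c() */

/* Before: hard-wired to the plain C implementation. */
static opus_val32 energy_before(const opus_val16 *y, int N)
{
   return celt_inner_prod_c(y, y, N);
}

/* After: arch-dispatched wrapper, so RTCD can pick a SIMD version at run time. */
static opus_val32 energy_after(const opus_val16 *y, int N, int arch)
{
   return celt_inner_prod(y, y, N, arch);
}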
2015 Nov 21
0
[Aarch64 v2 08/18] Add Neon fixed-point implementation of xcorr_kernel.
...y0, y4, 3);
+ int16x4_t y7 = vext_s16(y4, y8, 3);
+ int32x4_t a6 = vmlal_lane_s16(a5, y3, x0, 3);
+ int32x4_t a7 = vmlal_lane_s16(a6, y7, x4, 3);
+
+ y0 = y8;
+ a = a7;
+ x += 8;
+ y += 8;
+ }
+
+ for (; j < len; j++)
+ {
+ int16x4_t x0 = vld1_dup_s16(x); //load next x
+ int32x4_t a0 = vmlal_s16(a, y0, x0);
+
+ int16x4_t y4 = vld1_dup_s16(y); //load next y
+ y0 = vext_s16(y0, y4, 1);
+ a = a0;
+ x++;
+ y++;
+ }
+
+ vst1q_s32(sum, a);
+}
+
+#else
/*
* Function: xcorr_kernel_neon_float
* ---------------------------------
diff --git a/celt/arm/pitch_arm.h b/celt/arm/pitch_arm.h
ind...
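The scalar tail in this excerpt keeps a four-sample window of y in y0, broadcasts each remaining x into all lanes, and rotates the window by one with vext_s16. A hedged re-sketch of the same idea as a standalone helper (simplified types; the guard keeps the window refill from reading past y[len+2]):

#include <arm_neon.h>
#include <stdint.h>

/* Illustrative tail only: assumes y has len+3 valid samples and that the
   unrolled main loop has already processed samples up to index j-1. */
static void xcorr_tail_sketch(const int16_t *x, const int16_t *y,
                              int32_t sum[4], int j, int len)
{
   if (j < len) {
      int32x4_t a  = vld1q_s32(sum);
      int16x4_t y0 = vld1_s16(y + j);               /* window y[j..j+3]      */
      for (; j < len; j++) {
         int16x4_t x0 = vld1_dup_s16(x + j);        /* broadcast x[j]        */
         a = vmlal_s16(a, y0, x0);                  /* sum[k] += x[j]*y[j+k] */
         if (j + 1 < len)                           /* slide the window only */
            y0 = vext_s16(y0, vld1_dup_s16(y + j + 4), 1); /* if needed      */
      }
      vst1q_s32(sum, a);
   }
}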
2015 Mar 31
6
[RFC PATCH v1 0/5] aarch64: celt_pitch_xcorr: Fixed point series
Hi Timothy,
As I mentioned earlier [1], I have now fixed the compile issues
with fixed point and am resubmitting the patch.
I also have a new patch that adds intrinsics optimizations
for celt_pitch_xcorr targeting aarch64.
You can find my latest work-in-progress branch at [2]
For reference, you can use the Ne10 pre-built libraries
at [3]
Note that I am working with Phil at ARM to get my patch at [4]
2015 May 08
8
[RFC PATCH v2]: Ne10 fft fixed and previous 0/8]
Hi All,
As per Timothy's suggestion, disabling mdct_forward
for fixed point. This only affects
armv7,armv8: Extend fixed fft NE10 optimizations to mdct
The rest of the patches are the same as in [1]
For reference, the latest wip code for opus is at [2]
Still working with the NE10 team at ARM to resolve corner cases of
mdct_forward. Will update with another patch
when the issue in NE10 gets fixed.
Regards,
Vish
[1]:
2015 May 15
11
[RFC V3 0/8] Ne10 fft fixed and previous
Hi All,
Changes from RFC v2 [1]
armv7,armv8: Extend fixed fft NE10 optimizations to mdct
- Overflow issue fixed by Phil at ARM. Ne10 wip at [2]. Should be upstream soon.
- So, re-enabled use of the fixed fft for mdct_forward, which was disabled in RFC v2
armv7,armv8: Optimize fixed point fft using NE10 library
- Thanks to Jonathan Lennox, fixed some build issues on iOS and some copy-paste errors
Rest
2015 Apr 28
10
[RFC PATCH v1 0/8] Ne10 fft fixed and previous
Hello Timothy / Jean-Marc / opus-dev,
This patch series is a follow-up to the work I posted at [1].
In addition to what was posted on [1], this patch series mainly
integrates the fixed-point FFT implementations in the NE10 library into opus.
You can view my opus wip code at [2].
Note that while I found some issues both with the NE10 library (fixed fft)
and with the Linaro toolchain (armv8 intrinsics), the work
2016 Aug 23
0
[PATCH 8/8] Optimize silk_NSQ_del_dec() for ARM NEON
...tmp2_s16x4 );
+ rd1_Q10_s32x4 = vmull_s16( tmp1_s16x4, vdup_n_s16( Lambda_Q10 ) );
+ rd2_Q10_s32x4 = vmull_s16( tmp2_s16x4, vdup_n_s16( Lambda_Q10 ) );
+ }
+
+ rr_Q10_s16x4 = vsub_s16( r_Q10_s16x4, q1_Q10_s16x4 );
+ rd1_Q10_s32x4 = vmlal_s16( rd1_Q10_s32x4, rr_Q10_s16x4, rr_Q10_s16x4 );
+ rd1_Q10_s32x4 = vshrq_n_s32( rd1_Q10_s32x4, 10 );
+
+ rr_Q10_s16x4 = vsub_s16( r_Q10_s16x4, q2_Q10_s16x4 );
+ rd2_Q10_s32x4 = vmlal_s16( rd2_Q10_s32x4, rr_Q10_s16x4, rr_Q10_s16x4 );
+ rd2_Q10_s32x4 = vshrq_...
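Reconstructed only from the intrinsics visible in this excerpt (tmp1/tmp2 come from elided code), each lane appears to compute a Q10 rate/distortion term of the form (tmp * Lambda_Q10 + (r - q)^2) >> 10. A simplified scalar sketch of one lane:

#include <stdint.h>

static int32_t rd_Q10_lane(int16_t tmp, int16_t Lambda_Q10,
                           int16_t r_Q10, int16_t q_Q10)
{
   int32_t rd = (int32_t)tmp * Lambda_Q10;   /* vmull_s16                             */
   int32_t rr = r_Q10 - q_Q10;               /* vsub_s16 (wraps to 16 bits in NEON)   */
   rd += rr * rr;                            /* vmlal_s16                             */
   return rd >> 10;                          /* vshrq_n_s32(..., 10)                  */
}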
2016 Aug 23
2
[PATCH 7/8] Update NSQ_LPC_BUF_LENGTH macro.
NSQ_LPC_BUF_LENGTH is independent of DECISION_DELAY.
---
silk/define.h | 4 ----
1 file changed, 4 deletions(-)
diff --git a/silk/define.h b/silk/define.h
index 781cfdc..1286048 100644
--- a/silk/define.h
+++ b/silk/define.h
@@ -173,11 +173,7 @@ extern "C"
#define MAX_MATRIX_SIZE MAX_LPC_ORDER /* Max of LPC Order and LTP order */
-#if( MAX_LPC_ORDER >
2015 Dec 23
6
[AArch64 neon intrinsics v4 0/5] Rework Neon intrinsic code for Aarch64 patchset
Following Tim's comments, here are my reworked Neon intrinsic function patches
for my Aarch64 patchset, i.e. replacing patches 5-8 of the v2 series. Patches 1-4 and 9-18 of the
old series still apply unmodified.
The one new (as opposed to changed) patch is the first one in this series, to add named constants
for the ARM architecture variants.
There are also some minor code
2015 Nov 21
12
[Aarch64 v2 00/18] Patches to enable Aarch64 (version 2)
As promised, here's a re-send of all my Aarch64 patches, following
comments by John Ridges.
Note that they actually affect more than just Aarch64 -- other than
the ones specifically guarded by AARCH64_NEON defines, the Neon
intrinsics all also apply on armv7; and the OPUS_FAST_INT64 patches
apply on any 64-bit machine.
The patches should largely be independent and independently useful,
other
2015 Nov 07
12
[Aarch64 00/11] Patches to enable Aarch64 (arm64) optimizations, rebased to current master.
Here are my aarch64 patches rebased to the current tip of Opus master.
They're largely the same as my previous patch set, with the addition
of the final one (the Neon fixed-point implementation of
xcorr_kernel). This replaces Viswanath's Neon fixed-point
celt_pitch_xcorr, since xcorr_kernel is used in celt_fir and celt_iir
as well.
These have been tested for correctness under qemu