Displaying 20 results from an estimated 20 matches for "vld1q_s32".
2015 Dec 20
2
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
Jonathan Lennox wrote:
> +opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32)
> +{
> + int32x4_t coef0 = vld1q_s32(coef32);
> + int32x4_t coef1 = vld1q_s32(coef32 + 4);
> + int32x4_t coef2 = vld1q_s32(coef32 + 8);
> + int32x4_t coef3 = vld1q_s32(coef32 + 12);
> +
> + int32x4_t a0 = vld1q_s32(buf32 - 15);
> + int32x4_t a1 = vld1q_s32(buf32 - 11);
> + int32x4_t a2 = vld1q_s3...
2016 Aug 23
0
[PATCH 8/8] Optimize silk_NSQ_del_dec() for ARM NEON
...RDmin_Q10 = psDelDec->RD_Q10[ i ];
+ Winner_ind = i;
+ }
+ }
+ psDelDec->RD_Q10[ Winner_ind ] -= ( silk_int32_MAX >> 4 );
+ RD_Q10_s32x4 = vld1q_s32( psDelDec->RD_Q10 );
+ RD_Q10_s32x4 = vaddq_s32( RD_Q10_s32x4, vdupq_n_s32( silk_int32_MAX >> 4 ) );
+ vst1q_s32( psDelDec->RD_Q10, RD_Q10_s32x4 );
+
+ /* Copy final part of signals from winner state to output and long...
2016 Aug 23
2
[PATCH 7/8] Update NSQ_LPC_BUF_LENGTH macro.
NSQ_LPC_BUF_LENGTH is independent of DECISION_DELAY.
---
silk/define.h | 4 ----
1 file changed, 4 deletions(-)
diff --git a/silk/define.h b/silk/define.h
index 781cfdc..1286048 100644
--- a/silk/define.h
+++ b/silk/define.h
@@ -173,11 +173,7 @@ extern "C"
#define MAX_MATRIX_SIZE MAX_LPC_ORDER /* Max of LPC Order and LTP order */
-#if( MAX_LPC_ORDER >
2015 Dec 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
> On Dec 19, 2015, at 10:07 PM, Timothy B. Terriberry <tterribe at xiph.org> wrote:
>
> Jonathan Lennox wrote:
>> +opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32)
>> +{
>> + int32x4_t coef0 = vld1q_s32(coef32);
>> + int32x4_t coef1 = vld1q_s32(coef32 + 4);
>> + int32x4_t coef2 = vld1q_s32(coef32 + 8);
>> + int32x4_t coef3 = vld1q_s32(coef32 + 12);
>> +
>> + int32x4_t a0 = vld1q_s32(buf32 - 15);
>> + int32x4_t a1 = vld1q_s32(buf32 - 11);
>>...
2016 Jul 14
6
Several patches of ARM NEON optimization
I rebased my previous 3 patches to the current master with minor changes.
Patches 1 to 3 replace all my previous submitted patches.
Patches 4 and 5 are new.
Thanks,
Linfeng Zhang
2016 Jul 01
1
silk_warped_autocorrelation_FIX() NEON optimization
Hi all,
I'm sending the patch "Optimize silk_warped_autocorrelation_FIX() for ARM NEON" in a separate email.
It is based on Tim’s aarch64v8 branch https://git.xiph.org/?p=users/tterribe/opus.git;a=shortlog;h=refs/heads/aarch64v8
Thanks for your comments.
Linfeng
2015 Nov 21
12
[Aarch64 v2 00/18] Patches to enable Aarch64 (version 2)
As promised, here's a re-send of all my Aarch64 patches, following
comments by John Ridges.
Note that they actually affect more than just Aarch64 -- other than
the ones specifically guarded by AARCH64_NEON defines, the Neon
intrinsics all apply on armv7 as well; and the OPUS_FAST_INT64 patches
apply on any 64-bit machine.
The patches should largely be independent and independently useful,
other
2015 Aug 05
0
[PATCH 6/8] Add Neon intrinsics for Silk noise shape quantization.
..."main.h"
+#include "stack_alloc.h"
+#include "NSQ.h"
+#include "celt/cpu_support.h"
+#include "celt/arm/armcpu.h"
+
+opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32)
+{
+ int32x4_t coef0 = vld1q_s32(coef32);
+ int32x4_t coef1 = vld1q_s32(coef32 + 4);
+ int32x4_t coef2 = vld1q_s32(coef32 + 8);
+ int32x4_t coef3 = vld1q_s32(coef32 + 12);
+
+ int32x4_t a0 = vld1q_s32(buf32 - 15);
+ int32x4_t a1 = vld1q_s32(buf32 - 11);
+ int32x4_t a2 = vld1q_s32(buf32 - 7);
+ int32x4_t a3 = v...
2015 Nov 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
..."main.h"
+#include "stack_alloc.h"
+#include "NSQ.h"
+#include "celt/cpu_support.h"
+#include "celt/arm/armcpu.h"
+
+opus_int32 silk_noise_shape_quantizer_short_prediction_neon(const opus_int32 *buf32, const opus_int32 *coef32)
+{
+ int32x4_t coef0 = vld1q_s32(coef32);
+ int32x4_t coef1 = vld1q_s32(coef32 + 4);
+ int32x4_t coef2 = vld1q_s32(coef32 + 8);
+ int32x4_t coef3 = vld1q_s32(coef32 + 12);
+
+ int32x4_t a0 = vld1q_s32(buf32 - 15);
+ int32x4_t a1 = vld1q_s32(buf32 - 11);
+ int32x4_t a2 = vld1q_s32(buf32 - 7);
+ int32x4_t a3 = v...
2015 Dec 23
6
[AArch64 neon intrinsics v4 0/5] Rework Neon intrinsic code for Aarch64 patchset
Following Tim's comments, here are my reworked Neon intrinsic patches
for my Aarch64 patchset, i.e. replacing patches 5-8 of the v2 series. Patches 1-4 and 9-18 of the
old series still apply unmodified.
The one new (as opposed to changed) patch is the first one in this series, to add named constants
for the ARM architecture variants.
There are also some minor code
2015 Nov 07
12
[Aarch64 00/11] Patches to enable Aarch64 (arm64) optimizations, rebased to current master.
Here are my aarch64 patches rebased to the current tip of Opus master.
They're largely the same as my previous patch set, with the addition
of the final one (the Neon fixed-point implementation of
xcorr_kernel). This replaces Viswanath's Neon fixed-point
celt_pitch_xcorr, since xcorr_kernel is used in celt_fir and celt_iir
as well.
These have been tested for correctness under qemu
2015 Aug 05
8
[PATCH 0/8] Patches for arm64 (aarch64) support
This sequence of patches provides arm64 support for Opus. Tested on
iOS, Android, and Ubuntu 14.04.
The patch sequence was written on top of Viswanath Puttagunta's Ne10
patches, but all but the second ("Reorganize pitch_arm.h") should, I
think, apply independently of it. It does depend on my previous
intrinsics configury reorganization, however.
Comments welcome.
With this and
2013 Jun 07
2
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...corr_kernel for fixed-point ARM NEON (sorry I
don't have a floating-point version, but I only use fixed-point opus in
ARM):
#include <arm_neon.h>
static inline void xcorr_kernel(const opus_val16 *x, const opus_val16
*y, opus_val32 sum[4], int len)
{
int j;
int32x4_t xsum1 = vld1q_s32(sum);
int32x4_t xsum2 = vdupq_n_s32(0);
for (j = 0; j < len-1; j += 2) {
xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x++),vld1_s16(y++));
xsum2 = vmlal_s16(xsum2,vdup_n_s16(*x++),vld1_s16(y++));
}
if (j < len) {
xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x),vl...
2015 Aug 05
0
[PATCH 7/8] Add Neon intrinsics for Silk noise shape feedback loop.
...ane_s32(d, 0);
return out;
}
+
+
+opus_int32 silk_NSQ_noise_shape_feedback_loop_neon(const opus_int32 *data0, opus_int32 *data1, const opus_int16 *coef, opus_int order)
+{
+ opus_int32 out;
+ if (order == 8)
+ {
+ int32x4_t a00 = vdupq_n_s32(data0[0]);
+ int32x4_t a01 = vld1q_s32(data1); // data1[0] ... [3]
+
+ int32x4_t a0 = vextq_s32 (a00, a01, 3); // data0[0] data1[0] ...[2]
+ int32x4_t a1 = vld1q_s32(data1 + 3); // data1[3] ... [6]
+
+ int16x8_t coef16 = vld1q_s16(coef);
+ int32x4_t coef0 = vmovl_s16(vget_low_s16(coef16));
+ int32x4_...
2015 Nov 21
0
[Aarch64 v2 06/18] Add Neon intrinsics for Silk noise shape feedback loop.
...ane_s32(d, 0);
return out;
}
+
+
+opus_int32 silk_NSQ_noise_shape_feedback_loop_neon(const opus_int32 *data0, opus_int32 *data1, const opus_int16 *coef, opus_int order)
+{
+ opus_int32 out;
+ if (order == 8)
+ {
+ int32x4_t a00 = vdupq_n_s32(data0[0]);
+ int32x4_t a01 = vld1q_s32(data1); // data1[0] ... [3]
+
+ int32x4_t a0 = vextq_s32 (a00, a01, 3); // data0[0] data1[0] ...[2]
+ int32x4_t a1 = vld1q_s32(data1 + 3); // data1[3] ... [6]
+
+ int16x8_t coef16 = vld1q_s16(coef);
+ int32x4_t coef0 = vmovl_s16(vget_low_s16(coef16));
+ int32x4_...
2013 Jun 07
0
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...; don't have a floating-point version, but I only use fixed-point opus in
> ARM):
>
> #include <arm_neon.h>
>
> static inline void xcorr_kernel(const opus_val16 *x, const opus_val16
> *y, opus_val32 sum[4], int len)
> {
> int j;
> int32x4_t xsum1 = vld1q_s32(sum);
> int32x4_t xsum2 = vdupq_n_s32(0);
>
> for (j = 0; j < len-1; j += 2) {
> xsum1 = vmlal_s16(xsum1,vdup_n_s16(*x++),vld1_s16(y++));
> xsum2 = vmlal_s16(xsum2,vdup_n_s16(*x++),vld1_s16(y++));
> }
> if (j < len) {
> x...
2015 Nov 21
0
[Aarch64 v2 08/18] Add Neon fixed-point implementation of xcorr_kernel.
...c
+++ b/celt/arm/celt_neon_intr.c
@@ -37,7 +37,66 @@
#include <arm_neon.h>
#include "../pitch.h"
-#if !defined(FIXED_POINT)
+#if defined(FIXED_POINT)
+void xcorr_kernel_neon_fixed(const opus_val16 * x, const opus_val16 * y, opus_val32 sum[4], int len)
+{
+ int j;
+ int32x4_t a = vld1q_s32(sum);
+ //Load y[0...3]
+ //This requires len>0 to always be valid (which we assert in the C code).
+ int16x4_t y0 = vld1_s16(y);
+ y += 4;
+
+ for (j = 0; j + 8 <= len; j += 8)
+ {
+ // Load x[0...7]
+ int16x8_t xx = vld1q_s16(x);
+ int16x4_t x0 = vget_low_s16(xx);
+ int16x4_t x4 = vget_...
2013 Jun 07
1
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...offer up another intrinsic version of the
NEON xcorr_kernel that is almost identical to the SSE version, and more
in line with Mr. Zanelli's code:
static inline void xcorr_kernel_neon(const opus_val16 *x, const
opus_val16 *y, opus_val32 sum[4], int len)
{
int j;
int32x4_t xsum1 = vld1q_s32(sum);
int32x4_t xsum2 = vdupq_n_s32(0);
for (j = 0; j < len-3; j += 4) {
int16x4_t x0 = vld1_s16(x+j);
int16x4_t y0 = vld1_s16(y+j);
int16x4_t y3 = vld1_s16(y+j+3);
int16x4_t y4 = vext_s16(y3,y3,1);
xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0...
2013 Jun 07
2
Bug fix in celt_lpc.c and some xcorr_kernel optimizations
Hi JM,
I have no doubt that Mr. Zanelli's NEON code is faster, since hand-tuned
assembly is bound to be faster than using intrinsics. However, I notice
that his code can also read past the y buffer.
Cheers,
--John
On 6/6/2013 9:22 PM, Jean-Marc Valin wrote:
> Hi John,
>
> Thanks for the two fixes. They're in git now. Your SSE version seems to
> also be slightly faster than
2013 Jun 10
0
opus Digest, Vol 53, Issue 2
...me offer up another intrinsic version of the
NEON xcorr_kernel that is almost identical to the SSE version, and more
in line with Mr. Zanelli's code:
static inline void xcorr_kernel_neon(const opus_val16 *x, const
opus_val16 *y, opus_val32 sum[4], int len)
{
int j;
int32x4_t xsum1 = vld1q_s32(sum);
int32x4_t xsum2 = vdupq_n_s32(0);
for (j = 0; j < len-3; j += 4) {
int16x4_t x0 = vld1_s16(x+j);
int16x4_t y0 = vld1_s16(y+j);
int16x4_t y3 = vld1_s16(y+j+3);
int16x4_t y4 = vext_s16(y3,y3,1);
xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0...