search for: int64x2_t

Displaying 20 results from an estimated 26 matches for "int64x2_t".

2015 Dec 20
2
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
..._t coef2 = vld1q_s32(coef32 + 8); > + int32x4_t coef3 = vld1q_s32(coef32 + 12); > + > + int32x4_t a0 = vld1q_s32(buf32 - 15); > + int32x4_t a1 = vld1q_s32(buf32 - 11); > + int32x4_t a2 = vld1q_s32(buf32 - 7); > + int32x4_t a3 = vld1q_s32(buf32 - 3); > + > + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); > + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); > + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); > + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(coef1)); >...
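
For readers skimming the excerpt above, the pattern being added is a widening multiply-accumulate: pairs of 32-bit samples and coefficients are multiplied into 64-bit lanes with vmull_s32/vmlal_s32. A minimal, self-contained sketch of that pattern, reduced to 8 taps and with hypothetical names (dot8_q32, buf, coef); not the actual Opus patch:

    #include <arm_neon.h>
    #include <stdint.h>

    static int64_t dot8_q32(const int32_t *buf, const int32_t *coef)
    {
        int32x4_t a0 = vld1q_s32(buf);        /* 4 samples */
        int32x4_t a1 = vld1q_s32(buf + 4);    /* next 4 samples */
        int32x4_t c0 = vld1q_s32(coef);
        int32x4_t c1 = vld1q_s32(coef + 4);

        /* 32x32 -> 64-bit products, accumulated in a pair of int64 lanes. */
        int64x2_t b = vmull_s32(vget_low_s32(a0), vget_low_s32(c0));
        b = vmlal_s32(b, vget_high_s32(a0), vget_high_s32(c0));
        b = vmlal_s32(b, vget_low_s32(a1), vget_low_s32(c1));
        b = vmlal_s32(b, vget_high_s32(a1), vget_high_s32(c1));

        /* Horizontal add of the two 64-bit lanes. */
        return vgetq_lane_s64(b, 0) + vgetq_lane_s64(b, 1);
    }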
2015 Dec 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
...; >> + int32x4_t coef3 = vld1q_s32(coef32 + 12); >> + >> + int32x4_t a0 = vld1q_s32(buf32 - 15); >> + int32x4_t a1 = vld1q_s32(buf32 - 11); >> + int32x4_t a2 = vld1q_s32(buf32 - 7); >> + int32x4_t a3 = vld1q_s32(buf32 - 3); >> + >> + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); >> + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); >> + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); >> + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(...
2018 May 08
2
Pointer size bugs when compiling for android arm64?
...ncompatible-pointer-types]     corr_QC_s64x2[ 0 ] = vld1q_s64( corr_QC + offset + 0 );                                     ^~~~~~~~~~~~~~~~~~~~ /Users/andrewl/android/toolchain-r16b-arm64-v8a/lib64/clang/5.0.300080/include/arm_neon.h:7628:46: note: expanded from macro 'vld1q_s64'   __ret = (int64x2_t) __builtin_neon_vld1q_v(__p0, 35); \                                              ^~~~ silk/fixed/arm/warped_autocorrelation_FIX_neon_intr.c:44:37: warning: incompatible pointer types assigning to 'const long *' from 'long long *' [-Wincompatible-pointer-types]     corr_QC_s64x2[ 1...
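
The warnings in this thread come from 64-bit integer typedef mismatches on an LP64 arm64 toolchain: int64_t is 'long' there, so a buffer declared with a type that expands to 'long long' is an incompatible pointer when passed to vld1q_s64(), even though both are 64 bits wide. A hedged sketch of the mismatch and one possible workaround, with made-up names (load_corr, corr_qc):

    #include <arm_neon.h>
    #include <stdint.h>

    void load_corr(const long long *corr_qc /* e.g. a buffer typedef'd as long long */)
    {
        /* Warns under -Wincompatible-pointer-types on LP64, where
         * vld1q_s64() expects const int64_t * (aka const long *): */
        /* int64x2_t v = vld1q_s64(corr_qc); */

        /* One fix: declare the buffer as int64_t everywhere, or cast: */
        int64x2_t v = vld1q_s64((const int64_t *)corr_qc);
        (void)v;
    }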
2015 Nov 21
12
[Aarch64 v2 00/18] Patches to enable Aarch64 (version 2)
As promised, here's a re-send of all my Aarch64 patches, following comments by John Ridges. Note that they actually affect more than just Aarch64 -- other than the ones specifically guarded by AARCH64_NEON defines, the Neon intrinsics all also apply on armv7; and the OPUS_FAST_INT64 patches apply on any 64-bit machine. The patches should largely be independent and independently useful, other
2015 Aug 05
0
[PATCH 6/8] Add Neon intrinsics for Silk noise shape quantization.
...f1 = vld1q_s32(coef32 + 4); + int32x4_t coef2 = vld1q_s32(coef32 + 8); + int32x4_t coef3 = vld1q_s32(coef32 + 12); + + int32x4_t a0 = vld1q_s32(buf32 - 15); + int32x4_t a1 = vld1q_s32(buf32 - 11); + int32x4_t a2 = vld1q_s32(buf32 - 7); + int32x4_t a3 = vld1q_s32(buf32 - 3); + + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(coef1)); + int64x2_t b4...
2015 Nov 21
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
...f1 = vld1q_s32(coef32 + 4); + int32x4_t coef2 = vld1q_s32(coef32 + 8); + int32x4_t coef3 = vld1q_s32(coef32 + 12); + + int32x4_t a0 = vld1q_s32(buf32 - 15); + int32x4_t a1 = vld1q_s32(buf32 - 11); + int32x4_t a2 = vld1q_s32(buf32 - 7); + int32x4_t a3 = vld1q_s32(buf32 - 3); + + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(coef1)); + int64x2_t b4...
2015 Nov 20
2
[Aarch64 00/11] Patches to enable Aarch64
> On Nov 19, 2015, at 5:47 PM, John Ridges <jridges at masque.com> wrote: > > Any speedup from the intrinsics may just be swamped by the rest of the encode/decode process. But I think you really want SIG2WORD16 to be (vqmovns_s32(PSHR32((x), SIG_SHIFT))) Yes, you're right. I forgot to run the vectors under qemu with my previous version (oh, the embarrassment!) Fixed forthcoming
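
The suggestion quoted above is to build SIG2WORD16 from the AArch64 scalar saturating-narrow intrinsic. A rough sketch, assuming PSHR32 is a rounding right shift and using SIG_SHIFT_GUESS as a stand-in for Opus's SIG_SHIFT:

    #include <arm_neon.h>
    #include <stdint.h>

    /* Assumed value, for illustration only. */
    #define SIG_SHIFT_GUESS 12

    #if defined(__aarch64__)
    static inline int16_t sig2word16_neon(int32_t x)
    {
        /* Rounding right shift, then saturate to the int16 range. */
        int32_t r = (x + (1 << (SIG_SHIFT_GUESS - 1))) >> SIG_SHIFT_GUESS;
        return vqmovns_s32(r);
    }
    #endif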
2015 Aug 05
0
[PATCH 7/8] Add Neon intrinsics for Silk noise shape feedback loop.
...vextq_s32 (a00, a01, 3); // data0[0] data1[0] ...[2] + int32x4_t a1 = vld1q_s32(data1 + 3); // data1[3] ... [6] + + int16x8_t coef16 = vld1q_s16(coef); + int32x4_t coef0 = vmovl_s16(vget_low_s16(coef16)); + int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); + + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(coef1)); + +...
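
The excerpt's first step loads eight 16-bit coefficients once and widens them to two int32x4_t vectors before the 64-bit multiply-accumulate. Restated as a small self-contained helper (widen_coefs is an illustrative name, not from the patch):

    #include <arm_neon.h>
    #include <stdint.h>

    static void widen_coefs(const int16_t *coef, int32x4_t *coef0, int32x4_t *coef1)
    {
        int16x8_t coef16 = vld1q_s16(coef);          /* coef[0..7] */
        *coef0 = vmovl_s16(vget_low_s16(coef16));    /* coef[0..3] as int32 */
        *coef1 = vmovl_s16(vget_high_s16(coef16));   /* coef[4..7] as int32 */
    }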
2015 Nov 21
0
[Aarch64 v2 06/18] Add Neon intrinsics for Silk noise shape feedback loop.
...vextq_s32 (a00, a01, 3); // data0[0] data1[0] ...[2] + int32x4_t a1 = vld1q_s32(data1 + 3); // data1[3] ... [6] + + int16x8_t coef16 = vld1q_s16(coef); + int32x4_t coef0 = vmovl_s16(vget_low_s16(coef16)); + int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); + + int64x2_t b0 = vmull_s32(vget_low_s32(a0), vget_low_s32(coef0)); + int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); + int64x2_t b2 = vmlal_s32(b1, vget_low_s32(a1), vget_low_s32(coef1)); + int64x2_t b3 = vmlal_s32(b2, vget_high_s32(a1), vget_high_s32(coef1)); + +...
2015 Nov 23
1
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
...ly hate to bring this up this late in the game, but I just noticed that your NEON code doesn't use any of the "high" intrinsics for ARM64, e.g. instead of: int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); you could use: int32x4_t coef1 = vmovl_high_s16(coef16); and instead of: int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); you could use: int64x2_t b1 = vmlal_high_s32(b0, a0, coef0); and instead of: int64x1_t c = vadd_s64(vget_low_s64(b3), vget_high_s64(b3)); int64x1_t cS = vshr_n_s64(c, 16); int32x2_t d = vreinterpret_s32_s64(cS); out = vget_lane_s32(d,...
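
The two substitutions spelled out in this review translate to something like the sketch below: on ARM64 the "high" intrinsics fold the vget_high step into the widening operation, while armv7 keeps the original form. The mac_high wrapper and the #if guard are illustrative, not the reviewer's code:

    #include <arm_neon.h>

    #if defined(__aarch64__)
    static int64x2_t mac_high(int64x2_t b0, int32x4_t a0, int32x4_t coef0,
                              int16x8_t coef16, int32x4_t *coef1)
    {
        *coef1 = vmovl_high_s16(coef16);       /* was vmovl_s16(vget_high_s16(coef16)) */
        return vmlal_high_s32(b0, a0, coef0);  /* was vmlal_s32(b0, vget_high_s32(a0),
                                                  vget_high_s32(coef0)) */
    }
    #else
    static int64x2_t mac_high(int64x2_t b0, int32x4_t a0, int32x4_t coef0,
                              int16x8_t coef16, int32x4_t *coef1)
    {
        *coef1 = vmovl_s16(vget_high_s16(coef16));
        return vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0));
    }
    #endif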
2015 Aug 05
8
[PATCH 0/8] Patches for arm64 (aarch64) support
This sequence of patches provides arm64 support for Opus. Tested on iOS, Android, and Ubuntu 14.04. The patch sequence was written on top of Viswanath Puttagunta's Ne10 patches, but all but the second ("Reorganize pitch_arm.h") should, I think, apply independently of it. It does depend on my previous intrinsics configury reorganization, however. Comments welcome. With this and
2016 Jul 14
6
Several patches of ARM NEON optimization
I rebased my previous 3 patches to the current master with minor changes. Patches 1 to 3 replace all my previous submitted patches. Patches 4 and 5 are new. Thanks, Linfeng Zhang
2015 Nov 07
12
[Aarch64 00/11] Patches to enable Aarch64 (arm64) optimizations, rebased to current master.
Here are my aarch64 patches rebased to the current tip of Opus master. They're largely the same as my previous patch set, with the addition of the final one (the Neon fixed-point implementation of xcorr_kernel). This replaces Viswanath's Neon fixed-point celt_pitch_xcorr, since xcorr_kernel is used in celt_fir and celt_iir as well. These have been tested for correctness under qemu
2015 Nov 23
0
[Aarch64 v2 05/18] Add Neon intrinsics for Silk noise shape quantization.
...hate to bring this up this late in the game, but I just noticed that your NEON code doesn't use any of the "high" intrinsics for ARM64, e.g. instead of: int32x4_t coef1 = vmovl_s16(vget_high_s16(coef16)); you could use: int32x4_t coef1 = vmovl_high_s16(coef16); and instead of: int64x2_t b1 = vmlal_s32(b0, vget_high_s32(a0), vget_high_s32(coef0)); you could use: int64x2_t b1 = vmlal_high_s32(b0, a0, coef0); and instead of: int64x1_t c = vadd_s64(vget_low_s64(b3), vget_high_s64(b3)); int64x1_t cS = vshr_n_s64(c, 16); int32x2_t d = vreinterpret_s32_s64(cS); out = vget_lane_s32(d,...
2017 Apr 25
2
2 patches related to silk_biquad_alt() optimization
...A1_L_Q28 ), 14 ); S[ 1 ] = silk_SMLAWB( S[ 1 ], out32_Q14, A1_U_Q28 ); S[ 1 ] = silk_SMLAWB( S[ 1 ], B_Q28[ 2 ], inval ); A_Q28 is split to 2 14-bit (or 16-bit, whatever) integers, to make the multiplication operation within 32-bits. NEON can do 32-bit x 32-bit = 64-bit using 'int64x2_t vmull_s32(int32x2_t a, int32x2_t b)', and it could possibly be faster and less rounding/shifting errors than above C code. But it may increase difficulties for other CPUs not supporting 32-bit multiplication. Thanks, Linfeng -------------- next part -------------- An HTML attachment was scrubb...
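
The NEON capability referred to here is the full 32x32 -> 64-bit multiply, which avoids splitting the Q28 coefficient into halves. A minimal sketch using vmull_s32; the variable names are illustrative and not taken from silk_biquad_alt():

    #include <arm_neon.h>
    #include <stdint.h>

    static int64x2_t smull_pair(int32_t state0, int32_t state1, int32_t a_q28)
    {
        int32_t st[2] = { state0, state1 };
        int32x2_t s = vld1_s32(st);       /* {state0, state1} */
        int32x2_t a = vdup_n_s32(a_q28);  /* broadcast the Q28 coefficient */
        return vmull_s32(s, a);           /* two full 64-bit products per instruction */
    }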
2015 Dec 23
6
[AArch64 neon intrinsics v4 0/5] Rework Neon intrinsic code for Aarch64 patchset
Following Tim's comments, here are my reworked patches for the Neon intrinsic function patches of my Aarch64 patchset, i.e. replacing patches 5-8 of the v2 series. Patches 1-4 and 9-18 of the old series still apply unmodified. The one new (as opposed to changed) patch is the first one in this series, to add named constants for the ARM architecture variants. There are also some minor code
2017 Apr 26
2
2 patches related to silk_biquad_alt() optimization
On Tue, Apr 25, 2017 at 10:31 PM, Jean-Marc Valin <jmvalin at jmvalin.ca> wrote: > > > A_Q28 is split to 2 14-bit (or 16-bit, whatever) integers, to make the > > multiplication operation within 32-bits. NEON can do 32-bit x 32-bit = > > 64-bit using 'int64x2_t vmull_s32(int32x2_t a, int32x2_t b)', and it > > could possibly be faster and less rounding/shifting errors than above C > > code. But it may increase difficulties for other CPUs not supporting > > 32-bit multiplication. > > OK, so I'm not totally opposed to that, bu...
2011 Nov 23
4
[LLVMdev] arm neon intrinsics cross compile error on windows system
...a, 18); } ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ d:/llvm_projects/llvm-3.0rc4/bin/../lib/clang/3.0/include\arm_neon.h:357:44: error: invalid conversion between vector type 'int8x8_t' and integer type 'int32x2_t' (aka 'long') of different size return (int64x2_t)__builtin_neon_vmovl_v((int8x8_t)__a, 19); } ^~~~~~~~~~~~~ d:/llvm_projects/llvm-3.0rc4/bin/../lib/clang/3.0/include\arm_neon.h:361:10: error: invalid conversion between vector type '__attribute__((__vector_size__(16 * sizeof(signed char)))) signed cha...
2017 May 15
2
2 patches related to silk_biquad_alt() optimization
...<jmvalin at jmvalin.ca> wrote: > > > > A_Q28 is split to 2 14-bit (or 16-bit, whatever) integers, to make the > > multiplication operation within 32-bits. NEON can do 32-bit x 32-bit = > > 64-bit using 'int64x2_t vmull_s32(int32x2_t a, int32x2_t b)', and it > > could possibly be faster and less rounding/shifting errors than above C > > code. But it may increase difficulties for other CPUs not supporting > > 32-bit multiplication. > > OK, so I'...
2017 Apr 26
0
2 patches related to silk_biquad_alt() optimization
...o gain 0.8%. It would just be good to check that the 0.8% indeed comes from Neon as opposed to just unrolling the channels. > A_Q28 is split to 2 14-bit (or 16-bit, whatever) integers, to make the > multiplication operation within 32-bits. NEON can do 32-bit x 32-bit = > 64-bit using 'int64x2_t vmull_s32(int32x2_t a, int32x2_t b)', and it > could possibly be faster and less rounding/shifting errors than above C > code. But it may increase difficulties for other CPUs not supporting > 32-bit multiplication. OK, so I'm not totally opposed to that, but it increases the testi...