search for: mult16_32_q16

Displaying 13 results from an estimated 13 matches for "mult16_32_q16".

2015 Nov 13
2
[Aarch64 00/11] Patches to enable Aarch64
...I don't want to beat a dead horse, but I was very surprised by your benchmarks so I took a little closer look. I think what's happening is that it's a little unfair to compare the ARM64 inline assembly to the C code, because looking at the C macros in "fixed_generic.h" for MULT16_32_Q16 and MULT16_32_Q15 you find they are implemented with two multiplies, two shifts, an AND and an ADD. It's not hard for me to believe that your inline assembly is faster than that mess. But on a 64-bit machine, there's no reason to go through all that when a simple 64-bit multiply will do...
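For concreteness, the two strategies that post compares can be sketched in standalone C. The typedefs and function names below are simplified stand-ins for the Opus macros (SHR, ADD32, MULT16_16, MULT16_16SU), not the library's actual definitions:

/* Standalone sketch of the two MULT16_32_Q16 strategies; simplified
   stand-ins, not the actual Opus macros. */
#include <assert.h>
#include <stdint.h>

typedef int16_t opus_val16;
typedef int32_t opus_val32;

/* Generic 32-bit path: split b into halves -- two multiplies, two shifts,
   an AND and an ADD. */
static opus_val32 q16_generic(opus_val16 a, opus_val32 b)
{
    opus_val32 hi = (opus_val32)a * (b >> 16);                  /* MULT16_16 */
    opus_val32 lo = (opus_val32)a * (uint16_t)(b & 0x0000ffff); /* MULT16_16SU */
    return hi + (lo >> 16);
}

/* 64-bit path: a single widening multiply and one shift. */
static opus_val32 q16_int64(opus_val16 a, opus_val32 b)
{
    return (opus_val32)(((int64_t)a * b) >> 16);
}

int main(void)
{
    /* Spot-check that the two agree (assuming arithmetic right shift of
       negative values, which is what Opus's SHR relies on). */
    const opus_val16 as[] = { -32768, -1, 0, 1, 12345, 32767 };
    const opus_val32 bs[] = { INT32_MIN, -65536, -1, 0, 1, 65535, INT32_MAX };
    for (size_t i = 0; i < sizeof as / sizeof *as; i++)
        for (size_t j = 0; j < sizeof bs / sizeof *bs; j++)
            assert(q16_generic(as[i], bs[j]) == q16_int64(as[i], bs[j]));
    return 0;
}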
2015 Nov 13
2
[Aarch64 00/11] Patches to enable Aarch64
...nt to beat a dead horse, but I was very surprised by your benchmarks so I took a little closer look. >> >> I think what's happening is that it's a little unfair to compare the ARM64 inline assembly to the C code, because looking at the C macros in "fixed_generic.h" for MULT16_32_Q16 and MULT16_32_Q15 you find they are implemented with two multiplies, two shifts, an AND and an ADD. It's not hard for me to believe that your inline assembly is faster than that mess. But on a 64-bit machine, there's no reason to go through all that when a simple 64-bit multiply will do. Th...
2015 Nov 12
2
[Aarch64 00/11] Patches to enable Aarch64
One other minor thing: I notice that in the inline assembly the result (rd) is constrained as an earlyclobber operand. What was the reason for that?
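For readers unfamiliar with the constraint being asked about: in GCC extended asm, "=&r" marks the output as earlyclobber, meaning it may be written before the inputs are last read, so the compiler must keep it in a register distinct from every input. A hypothetical sketch (not code from the patch) of what the constraint buys:

#include <stdint.h>

/* Hypothetical illustration of the earlyclobber ("=&r") output constraint;
   not code from the patch. */
static inline int64_t widening_mul(int32_t a, int32_t b)
{
    int64_t rd;
    /* "=&r" forces rd into a register distinct from a and b.  That matters
       only if the asm writes the output before it is done reading the
       inputs; a single instruction like smull reads both sources before
       writing its destination, so plain "=r" would also be safe here --
       presumably what prompted the question. */
    __asm__("smull %x0, %w1, %w2"
            : "=&r"(rd)
            : "r"(a), "r"(b));
    return rd;
}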
2015 Nov 13
0
[Aarch64 00/11] Patches to enable Aarch64
...n't want to beat a dead horse, but I was very surprised by your benchmarks so I took a little closer look. > > I think what's happening is that it's a little unfair to compare the ARM64 inline assembly to the C code, because looking at the C macros in "fixed_generic.h" for MULT16_32_Q16 and MULT16_32_Q15 you find they are implemented with two multiplies, two shifts, an AND and an ADD. It's not hard for me to believe that your inline assembly is faster than that mess. But on a 64-bit machine, there's no reason to go through all that when a simple 64-bit multiply will do. Th...
2015 Aug 05
0
[PATCH 4/8] Arm64 assembly for Celt fixed-point math.
...RT (INCLUDING
+   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef FIXED_ARM64_H
+#define FIXED_ARM64_H
+
+/** 16x32 multiplication, followed by a 16-bit shift right. Results fits in 32 bits */
+#undef MULT16_32_Q16
+static OPUS_INLINE opus_val32 MULT16_32_Q16_arm64(opus_val16 a, opus_val32 b)
+{
+  opus_int64 rd;
+  __asm__(
+      "smull %x0, %w1, %w2\n\t"
+      : "=&r"(rd)
+      : "%r"(b), "r"(a<<16)
+  );
+  return (rd >> 32);
+}
+#define MULT16_32_...
2015 Nov 07
0
[Aarch64 06/11] Add aarch64 assembly for Celt fixed-point math.
...RT (INCLUDING
+   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef FIXED_ARM64_H
+#define FIXED_ARM64_H
+
+/** 16x32 multiplication, followed by a 16-bit shift right. Results fits in 32 bits */
+#undef MULT16_32_Q16
+static OPUS_INLINE opus_val32 MULT16_32_Q16_arm64(opus_val16 a, opus_val32 b)
+{
+  opus_int64 rd;
+  __asm__(
+      "smull %x0, %w1, %w2\n\t"
+      : "=&r"(rd)
+      : "%r"(b), "r"(a<<16)
+  );
+  return (rd >> 32);
+}
+#define MULT16_32_...
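A note on what the excerpted function is doing: smull produces the full 64-bit product of its two 32-bit operands, so with b and a<<16 as inputs the result register holds (a << 16) * b, which equals (a * b) << 16; the final rd >> 32 is therefore exactly (a * b) >> 16, the Q16 result, in one multiply and one shift.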
2015 Nov 16
0
[Aarch64 00/11] Patches to enable Aarch64
...nd I don't want to beat a dead horse, but I was very surprised by your benchmarks so I took a little closer look. I think what's happening is that it's a little unfair to compare the ARM64 inline assembly to the C code, because looking at the C macros in "fixed_generic.h" for MULT16_32_Q16 and MULT16_32_Q15 you find they are implemented with two multiplies, two shifts, an AND and an ADD. It's not hard for me to believe that your inline assembly is faster than that mess. But on a 64-bit machine, there's no reason to go through all that when a simple 64-bit multiply will do. Th...
2015 Nov 16
0
[Fast Int64 2/4] Add OPUS_FAST_INT64 flavors of celt/fixed_generic.h macros.
...7d37..1cfd6d6 100644
--- a/celt/fixed_generic.h
+++ b/celt/fixed_generic.h
@@ -37,16 +37,32 @@
 #define MULT16_16SU(a,b) ((opus_val32)(opus_val16)(a)*(opus_val32)(opus_uint16)(b))

 /** 16x32 multiplication, followed by a 16-bit shift right. Results fits in 32 bits */
+#if OPUS_FAST_INT64
+#define MULT16_32_Q16(a,b) ((opus_val32)SHR((opus_int64)((opus_val16)(a))*(b),16))
+#else
 #define MULT16_32_Q16(a,b) ADD32(MULT16_16((a),SHR((b),16)), SHR(MULT16_16SU((a),((b)&0x0000ffff)),16))
+#endif

 /** 16x32 multiplication, followed by a 16-bit shift right (round-to-nearest). Results fits in 32 bits */
+#if...
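This is the change the benchmark discussion above converges on: when OPUS_FAST_INT64 is set, the generic C macro becomes a single widening multiply plus shift. The definition of OPUS_FAST_INT64 itself is supplied by patch 1/4 of this series, which moves it into celt/arch.h.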
2015 Nov 16
3
[Fast Int64 1/4] Move OPUS_FAST_INT64 definition to celt/arch.h.
---
 celt/arch.h   | 5 +++++
 silk/macros.h | 4 +---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/celt/arch.h b/celt/arch.h
index 9f74ddd..670527b 100644
--- a/celt/arch.h
+++ b/celt/arch.h
@@ -78,6 +78,11 @@ static OPUS_INLINE void _celt_fatal(const char *str, const char *file, int line)
 #define UADD32(a,b) ((a)+(b))
 #define USUB32(a,b) ((a)-(b))

+/* Set this if opus_int64
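The hunk is cut off mid-comment above; as a rough orientation only, the moved definition plausibly continues along these lines (reconstructed from memory of the Opus tree, not quoted from this excerpt):

+/* Set this if opus_int64 is a native type of the CPU. */
+#if defined(__x86_64__) || defined(__LP64__) || defined(_WIN64)
+#define OPUS_FAST_INT64 1
+#else
+#define OPUS_FAST_INT64 0
+#endif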
2013 May 17
1
[Patch]01-Add ARM5E macros
...RT (INCLUDING
+   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+#ifndef FIXED_ARM5E_H
+#define FIXED_ARM5E_H
+
+/** 16x32 multiplication, followed by a 16-bit shift right. Results fits in 32 bits */
+#undef MULT16_32_Q16
+static inline opus_val32 MULT16_32_Q16(opus_val16 a, opus_val32 b)
+{
+  int res;
+  __asm__(
+      "smulwb %0, %1, %2;\n"
+      : "=&r"(res)
+      : "%r"(b),"r"(a)
+  );
+  return res;
+}
+
+
+/** 16x32 multiplication, followed by a 15-bit shift...
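Unlike the arm64 version elsewhere in these results, this one needs no pre-shift of a: ARMv5E's smulwb multiplies the full 32-bit b by the bottom 16-bit halfword of a and keeps the top 32 bits of the 48-bit product, which is exactly (a * b) >> 16.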
2015 Nov 21
8
[Aarch64 v2 10/18] Clean up some intrinsics-related wording in configure.
---
 configure.ac | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure.ac b/configure.ac
index f52d2c2..e1a6e9b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -190,7 +190,7 @@ AC_ARG_ENABLE([rtcd],
   [enable_rtcd=yes])

 AC_ARG_ENABLE([intrinsics],
-  [AS_HELP_STRING([--disable-intrinsics], [Disable intrinsics optimizations for ARM(float) X86(fixed)])],,
+
2015 Aug 05
8
[PATCH 0/8] Patches for arm64 (aarch64) support
This sequence of patches provides arm64 support for Opus. Tested on iOS, Android, and Ubuntu 14.04. The patch sequence was written on top of Viswanath Puttagunta's Ne10 patches, but all but the second ("Reorganize pitch_arm.h") should, I think, apply independently of it. It does depend on my previous intrinsics configury reorganization, however. Comments welcome. With this and
2015 Nov 07
12
[Aarch64 00/11] Patches to enable Aarch64 (arm64) optimizations, rebased to current master.
Here are my aarch64 patches rebased to the current tip of Opus master. They're largely the same as my previous patch set, with the addition of the final one (the Neon fixed-point implementation of xcorr_kernel). This replaces Viswanath's Neon fixed-point celt_pitch_xcorr, since xcorr_kernel is used in celt_fir and celt_iir as well. These have been tested for correctness under qemu