Displaying 15 results from an estimated 15 matches for "xsum2".

2013 Jun 07 · 2 · Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...uninitialized memory. Here's a version I wrote a few days ago that
you're welcome to use; it doesn't suffer from that problem:
static inline void xcorr_kernel(const opus_val16 *x, const opus_val16 *y,
                                opus_val32 sum[4], int len)
{
   int j;
   __m128 xsum1 = _mm_loadu_ps(sum);
   __m128 xsum2 = _mm_setzero_ps();
   for (j = 0; j < len-3; j += 4) {
      const __m128 x0 = _mm_loadu_ps(x+j);
      const __m128 y0 = _mm_loadu_ps(y+j);
      const __m128 y3 = _mm_loadu_ps(y+j+3);
      xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),y0));
      xsum2 =...
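The excerpt cuts off mid-loop. For readers following the thread, here is a
sketch of how the remaining loop body and tail could look, inferred from the
broadcast-and-shuffle pattern above; the shuffle constants and the tail
handling are a reconstruction, not a quote from the post:

      /* lanes 1-3: broadcast x[j+1..j+3] against y shifted by 1, 2, 3,
         built from y0 and y3 with shuffles (reconstruction) */
      xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x55),
                                          _mm_shuffle_ps(y0,y3,0x49)));
      xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xaa),
                                          _mm_shuffle_ps(y0,y3,0x9e)));
      xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0xff),y3));
   }
   /* scalar tail for len not a multiple of 4; each load reads y[j..j+3],
      which stays within the len+3 y values the kernel may access */
   if (j < len) {
      xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_load1_ps(x+j),_mm_loadu_ps(y+j)));
      if (++j < len) {
         xsum2 = _mm_add_ps(xsum2,_mm_mul_ps(_mm_load1_ps(x+j),_mm_loadu_ps(y+j)));
         if (++j < len)
            xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_load1_ps(x+j),_mm_loadu_ps(y+j)));
      }
   }
   _mm_storeu_ps(sum,_mm_add_ps(xsum1,xsum2));
}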

2013 Jun 07 · 0 · Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...wrote a few days ago that you're welcome to use; it doesn't suffer
> from that problem:
>
> static inline void xcorr_kernel(const opus_val16 *x, const opus_val16 *y,
>                                 opus_val32 sum[4], int len)
> {
>    int j;
>    __m128 xsum1 = _mm_loadu_ps(sum);
>    __m128 xsum2 = _mm_setzero_ps();
>
>    for (j = 0; j < len-3; j += 4) {
>       const __m128 x0 = _mm_loadu_ps(x+j);
>       const __m128 y0 = _mm_loadu_ps(y+j);
>       const __m128 y3 = _mm_loadu_ps(y+j+3);
>
>       xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffl...

2013 Jun 07 · 1 · Bug fix in celt_lpc.c and some xcorr_kernel optimizations
...version of the NEON xcorr_kernel that is almost identical to the SSE
version, and more in line with Mr. Zanelli's code:
static inline void xcorr_kernel_neon(const opus_val16 *x, const opus_val16 *y,
                                     opus_val32 sum[4], int len)
{
   int j;
   int32x4_t xsum1 = vld1q_s32(sum);
   int32x4_t xsum2 = vdupq_n_s32(0);
   for (j = 0; j < len-3; j += 4) {
      int16x4_t x0 = vld1_s16(x+j);
      int16x4_t y0 = vld1_s16(y+j);
      int16x4_t y3 = vld1_s16(y+j+3);
      int16x4_t y4 = vext_s16(y3,y3,1);
      xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,0),y0);
      xsum2 = v...
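This excerpt is also truncated. A plausible completion, inferred from the y4
rotation above (y4 = {y[j+4],y[j+5],y[j+6],y[j+3]}, so vext against y0 yields
the shift-by-1 and shift-by-2 windows); the lane assignments and scalar tail
are a reconstruction, not the original post:

      xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,1),vext_s16(y0,y4,1));
      xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,2),vext_s16(y0,y4,2));
      xsum2 = vmlal_s16(xsum2,vdup_lane_s16(x0,3),y3);
   }
   xsum1 = vaddq_s32(xsum1,xsum2);
   /* scalar tail; vld1_s16(y+j) reads y[j..j+3], within the len+3
      y values the kernel may access */
   for (; j < len; j++)
      xsum1 = vmlal_s16(xsum1,vdup_n_s16(x[j]),vld1_s16(y+j));
   vst1q_s32(sum,xsum1);
}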

2013 Jun 07 · 2 · Bug fix in celt_lpc.c and some xcorr_kernel optimizations
Hi JM,
I have no doubt that Mr. Zanelli's NEON code is faster, since hand-tuned
assembly is bound to be faster than using intrinsics. However, I notice
that his code can also read past the y buffer.
Cheers,
--John
On 6/6/2013 9:22 PM, Jean-Marc Valin wrote:
> Hi John,
>
> Thanks for the two fixes. They're in git now. Your SSE version seems to
> also be slightly faster than

2015 Mar 12 · 2 · [RFC PATCHv2] Intrinsics/RTCD related fixes. Mostly x86.
Nit: in dual_inner_prod_sse, why not do both horizontal sums at the same
time? As in:
   xsum1 = _mm_add_ps(_mm_movelh_ps(xsum1, xsum2),
                      _mm_movehl_ps(xsum2, xsum1));
   xsum1 = _mm_add_ps(xsum1, _mm_shuffle_ps(xsum1, xsum1, 0xf5));
   _mm_store_ss(xy1, xsum1);
   _mm_store_ss(xy2, _mm_movehl_ps(xsum1, xsum1));
--John
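For anyone puzzling over the shuffle constants, the same reduction can be
written as a stand-alone helper with the lane arithmetic spelled out; the
function name and signature here are illustrative, not from the patch:

#include <xmmintrin.h>

/* Horizontally sum two 4-lane accumulators a = {a0,a1,a2,a3} and
   b = {b0,b1,b2,b3} in one shuffle chain (illustrative helper). */
static void hsum2_sse(__m128 a, __m128 b, float *sa, float *sb)
{
   /* {a0,a1,b0,b1} + {a2,a3,b2,b3} = {a0+a2,a1+a3,b0+b2,b1+b3} */
   __m128 t = _mm_add_ps(_mm_movelh_ps(a, b), _mm_movehl_ps(b, a));
   /* 0xf5 picks lanes {1,1,3,3}: adds each even lane to its odd neighbor */
   t = _mm_add_ps(t, _mm_shuffle_ps(t, t, 0xf5));
   _mm_store_ss(sa, t);                   /* lane 0 = a0+a1+a2+a3 */
   _mm_store_ss(sb, _mm_movehl_ps(t, t)); /* lane 2 = b0+b1+b2+b3 */
}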

2013 Jun 10 · 0 · opus Digest, Vol 53, Issue 2
...version of the
NEON xcorr_kernel that is almost identical to the SSE version, and more
in line with Mr. Zanelli's code:
static inline void xcorr_kernel_neon(const opus_val16 *x, const opus_val16 *y,
                                     opus_val32 sum[4], int len)
{
   int j;
   int32x4_t xsum1 = vld1q_s32(sum);
   int32x4_t xsum2 = vdupq_n_s32(0);
   for (j = 0; j < len-3; j += 4) {
      int16x4_t x0 = vld1_s16(x+j);
      int16x4_t y0 = vld1_s16(y+j);
      int16x4_t y3 = vld1_s16(y+j+3);
      int16x4_t y4 = vext_s16(y3,y3,1);
      xsum1 = vmlal_s16(xsum1,vdup_lane_s16(x0,0),y0);
      xsum2 = v...

2015 Mar 13 · 1 · [RFC PATCH v3] Intrinsics/RTCD related fixes. Mostly x86.
..._si32(acc1);
-
-    for (;i<N;i++)
-    {
-        sum = silk_SMLABB(sum, x[i], y[i]);
-    }
+#include <xmmintrin.h>
+#include "arch.h"
-    return sum;
+void xcorr_kernel_sse(const opus_val16 *x, const opus_val16 *y, opus_val32 sum[4], int len)
+{
+    int j;
+    __m128 xsum1, xsum2;
+    xsum1 = _mm_loadu_ps(sum);
+    xsum2 = _mm_setzero_ps();
+
+    for (j = 0; j < len-3; j += 4)
+    {
+        __m128 x0 = _mm_loadu_ps(x+j);
+        __m128 yj = _mm_loadu_ps(y+j);
+        __m128 y3 = _mm_loadu_ps(y+j+3);
+
+        xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),yj)...
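Since these patches are as much about RTCD (run-time CPU detection) as about
the kernels themselves, here is a generic sketch of the dispatch pattern
involved; every name below is a hypothetical stand-in, not an actual Opus
symbol:

#include <stdio.h>

/* Generic RTCD dispatch sketch (all names hypothetical). */
typedef void (*xcorr_kernel_fn)(const float *x, const float *y,
                                float sum[4], int len);

/* Plain C fallback: sum[k] accumulates x[j]*y[j+k] for lags k = 0..3,
   so y must provide len+3 readable values. */
static void xcorr_kernel_c(const float *x, const float *y,
                           float sum[4], int len)
{
   int j, k;
   for (k = 0; k < 4; k++)
      for (j = 0; j < len; j++)
         sum[k] += x[j]*y[j+k];
}

/* A real build would add SSE/NEON entries, compiled conditionally
   and selected at run time by a CPUID-style probe. */
static const xcorr_kernel_fn XCORR_KERNEL_IMPL[] = { xcorr_kernel_c };

static int select_arch(void) { return 0; } /* stand-in for the probe */

int main(void)
{
   float x[4] = {1,2,3,4}, y[7] = {1,1,1,1,1,1,1}, sum[4] = {0,0,0,0};
   XCORR_KERNEL_IMPL[select_arch()](x, y, sum, 4);
   printf("%g %g %g %g\n", sum[0], sum[1], sum[2], sum[3]); /* 10 10 10 10 */
   return 0;
}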

2015 Mar 12 · 1 · [RFC PATCHv2] Intrinsics/RTCD related fixes. Mostly x86.
..._si32(acc1);
-
-    for (;i<N;i++)
-    {
-        sum = silk_SMLABB(sum, x[i], y[i]);
-    }
+#include <xmmintrin.h>
+#include "arch.h"
-    return sum;
+void xcorr_kernel_sse(const opus_val16 *x, const opus_val16 *y, opus_val32 sum[4], int len)
+{
+    int j;
+    __m128 xsum1, xsum2;
+    xsum1 = _mm_loadu_ps(sum);
+    xsum2 = _mm_setzero_ps();
+
+    for (j = 0; j < len-3; j += 4)
+    {
+        __m128 x0 = _mm_loadu_ps(x+j);
+        __m128 yj = _mm_loadu_ps(y+j);
+        __m128 y3 = _mm_loadu_ps(y+j+3);
+
+        xsum1 = _mm_add_ps(xsum1,_mm_mul_ps(_mm_shuffle_ps(x0,x0,0x00),yj)...

2015 Mar 02 · 13 · Patch cleaning up Opus x86 intrinsics configury
The attached patch cleans up Opus's x86 intrinsics configury.
It:
* Makes --enable-intrinsics work with clang and other non-GCC compilers
* Enables RTCD for the floating-point-mode SSE code in Celt.
* Disables use of RTCD in cases where the compiler targets an instruction set by default.
* Enables the SSE4.1 Silk optimizations that apply to the common parts of Silk when Opus is built in

2015 Mar 18 · 5 · [RFC PATCH v1 0/4] Enable aarch64 intrinsics/Ne10
Hi All,
Since I continue to base my work on top of Jonathan's patch,
and my previous Ne10 fft/ifft/mdct_forward/backward patches,
I thought it would be better to just post all new patches
as a patch series. Please let me know if anyone disagrees
with this approach.
You can see a wip branch with all the latest patches at
https://git.linaro.org/people/viswanath.puttagunta/opus.git
Branch:

2015 Mar 31 · 6 · [RFC PATCH v1 0/5] aarch64: celt_pitch_xcorr: Fixed point series
Hi Timothy,
As I mentioned earlier [1], I have now fixed the compile issues
with fixed point and am resubmitting the patch.
I also have a new patch that adds intrinsics optimizations
for celt_pitch_xcorr targeting aarch64.
You can find my latest work-in-progress branch at [2]
For reference, you can use the Ne10 pre-built libraries
at [3]
Note that I am working with Phil at ARM to get my patch at [4]

2003 Aug 13 · 2 · rowsum() may return a vector instead of a matrix (PR#3737)
If all rows are in the same "group", rowsum() returns a vector instead of a
(1xN) matrix, contrary to documentation:
R> print(z <- rowsum(matrix(1:12, 3,4), rep("x",3)))
[1] 6 15 24 33
R> dim(z)
NULL
It worked correctly in version 1.4.0 but was broken by version 1.6.1. I'm
currently using 1.7.1 under Solaris 2.8.

2015 May 08 · 8 · [RFC PATCH v2 0/8] Ne10 fft fixed and previous
Hi All,
As per Timothy's suggestion, disabling mdct_forward
for fixed point. This only affects
"armv7,armv8: Extend fixed fft NE10 optimizations to mdct".
The rest of the patches are the same as in [1].
For reference, the latest wip code for opus is at [2].
Still working with the NE10 team at ARM on corner cases of
mdct_forward. Will update with another patch
when the issue in NE10 gets fixed.
Regards,
Vish
[1]:

2015 May 15 · 11 · [RFC V3 0/8] Ne10 fft fixed and previous
Hi All,
Changes from RFC v2 [1]:
armv7,armv8: Extend fixed fft NE10 optimizations to mdct
- Overflow issue fixed by Phil at ARM. Ne10 wip is at [2]; should be upstream soon.
- So, re-enabled using the fixed fft for mdct_forward, which was disabled in RFC v2.
armv7,armv8: Optimize fixed point fft using NE10 library
- Thanks to Jonathan Lennox, fixed some build issues on iOS and some copy-paste errors.
Rest

2015 Apr 28 · 10 · [RFC PATCH v1 0/8] Ne10 fft fixed and previous
Hello Timothy / Jean-Marc / opus-dev,
This patch series is a follow-up to the work I posted at [1].
In addition to what was posted at [1], this series mainly
integrates the fixed point FFT implementations in the NE10 library into opus.
You can view my opus wip code at [2].
Note that while I found some issues both with the NE10 library (fixed fft)
and with the Linaro toolchain (armv8 intrinsics), the work