Displaying 8 results from an estimated 8 matches for "dotproduct".
2012 Mar 07
2
dot products
Hello,
I need to take a dot product of each row of a dataframe and a vector.
The number of columns will be dynamic. The way I've been doing it so
far is contorted. Is there a better way?
dotproduct <- function(dataf, v2) {
  apply(t(t(as.matrix(dataf)) * v2), 1, sum) # contorted!
}
df = data.frame(a=c(1,2,3),b=c(4,5,6))
vec = c(4,5)
dotproduct(df, vec)
thanks,
allie
2013 Apr 03
2
[LLVMdev] Packed instructions generated by LoopVectorize?
...not be required I tried doing the same to the input arrays of my dot product example but it still doesn't generate packed float or double instructions.
Is the loop vectorizer supposed to generate packed float and double instructions? Is this a bug, or am I doing something wrong?
Tyler
float dotproduct(float *A, float *B, int n) {
    float sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += A[i] * B[i];
    }
    return sum;
}
clang dotproduct.cpp -O3 -fvectorize -march=atom -S -o -
<loop body>
.LBB1_1:
movss (%rdi), %xmm1
addq $4, %rdi
mulss (%rsi), %xmm1...
2013 Apr 03
0
[LLVMdev] Packed instructions generated by LoopVectorize?
2013 Apr 04
1
[LLVMdev] Packed instructions generated by LoopVectorize?
2006 May 26
8
Comparing two documents in the index
I want to compare two documents in the index (i.e. retrieve the cosine
similarity/score between the two documents' term vectors). Is this possible
using the standard Ferret functionality?
Thanks in advance,
Jeroen Bulters
--
Posted via http://www.ruby-forum.com/.
2004 Aug 06
0
[PATCH] Make SSE Run Time option.
...ple vector floats are usually faster
in the scalar units. The add across and transfer to scalar is just too
expensive. It's generally only worthwhile if the data starts and ends in
the vector units, and it is inlined so that latencies can be covered with
other work. e.g:
inline vector float DotProduct( vector float a, vector float b )
{
a = vec_madd( a, b, (vector float) vec_splat_u32(0) ) ; /* lane-wise a*b */
a = vec_add( a, vec_sld( a, a, 8 ) ); /* fold upper half into lower */
a = vec_add( a, vec_sld( a, a, 4 ) ); /* fold again: sum in every lane */
return a;
}
Ian
2003 Sep 12
2
Vorbis overview
I intend to finish my Vorbis document on Monday so I will leave it over the
weekend to see if there are any more modifications suggested. Although the
document will be internal I will put a copy of the final version at the
same address - I know firsthand how annoying it is to find loads of dead
links when trying to find background info on Vorbis (I know google has
probably already got a link
2004 Aug 06
6
[PATCH] Make SSE Run Time option.
So we ran the code on a Windows XP based Athlon XP system and the xmm
registers work just fine, so it appears that Windows 2000 and below does not
support them.
We agree on not supporting the non-FP version; however, the run-time flags
need to be settable with a non-FP SSE mode so that exceptions are avoided.
I thus propose a set of defines like this instead of the ones in our
initial patch: