2009/9/16 Olivier Meurant <meurant.olivier at gmail.com>:
> Average               13836499.46   12447973.17   1388526.29   10.03
> Standard deviation       53189.13     515638.56    522400.98    3.77

That was pretty much what I was expecting from the article: numbers. It doesn't matter who is best, and no compiler can be best in all areas, but the profiling must be done right. Here the standard deviations are well below the difference between the two averages, which means the measurements are significant and reflect something real.

There is little value in the results in the article; people had to redo the work properly anyway...

Shouldn't this be part of the standard release process? I mean, add profiling as an automated task before every big release, comparing against previous versions of LLVM and against other important compilers. Not to wave the results about, but to know the weaknesses and work on them in the next release. It might take a while to build such an infrastructure, but I guess it's a good thing to do.

cheers,
--renato

Reclaim your digital rights, eliminate DRM, learn more at
http://www.defectivebydesign.org/what_is_drm
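To make that significance check concrete, here is a minimal sketch (Python, purely illustrative) that compares the gap between the two averages quoted above with the combined run-to-run spread. The figures are the ones from the quoted table; everything else is an assumption of this sketch, not something from the thread.

    from math import hypot

    # Summary rows quoted above (LLVM average/std dev vs. GCC average/std dev).
    llvm_avg, gcc_avg = 13836499.46, 12447973.17
    llvm_sd, gcc_sd = 53189.13, 515638.56

    gap = llvm_avg - gcc_avg          # 1388526.29, the "Difference" column above
    spread = hypot(llvm_sd, gcc_sd)   # combined standard deviation of the two series

    # Rough rule of thumb: a gap that is several times larger than the spread
    # is unlikely to be run-to-run noise.
    print("gap=%.2f  spread=%.2f  ratio=%.2f" % (gap, spread, gap / spread))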
I have run the john the ripper test. I used the official archive (the same version as Phoronix) from
http://www.openwall.com/john/g/john-1.7.3.1.tar.bz2

To build with llvm-gcc, replace the line "CC = gcc" with "CC = llvm-gcc" in the Makefile.
I used the following command to build: make clean linux-x86-sse2 (which seems to be the best target on x86-32).
The resulting compiler invocation is "gcc -c -Wall -O2 -fomit-frame-pointer -funroll-loops src.c" with gcc and "llvm-gcc -c -Wall -O2 -fomit-frame-pointer -funroll-loops src.c" with llvm-gcc.

I ran "./john --test" 10 times in the run directory. Each run benchmarks 6 different algorithms:

Benchmarking: Traditional DES [128/128 BS SSE2]... DONE

              LLVM      GCC   Difference   Difference %
  Run 1       2371     2358           13           0.55
  Run 2       2499     2497            2           0.08
  Run 3       2489     2487            2           0.08
  Run 4       2305     2504         -199          -8.63
  Run 5       2499     2445           54           2.16
  Run 6       2404     2503          -99          -4.12
  Run 7       2482     2502          -20          -0.81
  Run 8       2479     2475            4           0.16
  Run 9       2463     2489          -26          -1.06
  Run 10      2484     2483            1           0.04
  Average   2447.5   2474.3        -26.8          -1.15
  Std dev    65.69     44.5        71.81           3.07

==> Similar results

Benchmarking: BSDI DES (x725) [128/128 BS SSE2]... DONE

               LLVM       GCC   Difference   Difference %
  Run 1       72584     81280        -8696         -11.98
  Run 2       76620     79795        -3175          -4.14
  Run 3       79820     75264         4556           5.71
  Run 4       76339     81484        -5145          -6.74
  Run 5       81484     76441         5043           6.19
  Run 6       80742     81433         -691          -0.86
  Run 7       81510     79104         2406           2.95
  Run 8       81049     79872         1177           1.45
  Run 9       80409     81100         -691          -0.86
  Run 10      80204     80921         -717          -0.89
  Average   79076.1   79669.4       -593.3          -0.92
  Std dev   2937.56   2181.15      4262.03           5.59

==> Similar results

Benchmarking: FreeBSD MD5 [32/32]... DONE

              LLVM      GCC   Difference   Difference %
  Run 1       7552     8009         -457          -6.05
  Run 2       7739     7724           15           0.19
  Run 3       7997     7696          301           3.76
  Run 4       8038     8041           -3          -0.04
  Run 5       7474     7938         -464          -6.21
  Run 6       7871     8078         -207          -2.63
  Run 7       7884     7980          -96          -1.22
  Run 8       7870     8025         -155          -1.97
  Run 9       7989     8046          -57          -0.71
  Run 10      7986     7989           -3          -0.04
  Average     7840   7952.6       -112.6          -1.49
  Std dev   193.87   133.78       227.92           2.98

==> Similar results

Benchmarking: OpenBSD Blowfish (x32) [32/32]... DONE

             LLVM     GCC   Difference   Difference %
  Run 1       494     495           -1          -0.2
  Run 2       457     485          -28          -6.13
  Run 3       492     474           18           3.66
  Run 4       494     492            2           0.4
  Run 5       486     469           17           3.5
  Run 6       491     495           -4          -0.81
  Run 7       495     493            2           0.4
  Run 8       490     490            0           0
  Run 9       493     494           -1          -0.2
  Run 10      493     492            1           0.2
  Average   488.5   487.9          0.6           0.08
  Std dev   11.37    9.19        12.56           2.67

==> Similar results

Benchmarking: Kerberos AFS DES [48/64 4K MMX]... DONE

                LLVM        GCC   Difference   Difference %
  Run 1       399001     403712        -4711          -1.18
  Run 2       396697     377292        19405           4.89
  Run 3       395520     401971        -6451          -1.63
  Run 4       392396     404172       -11776          -3
  Run 5       392294     376217        16077           4.1
  Run 6       395571     404172        -8601          -2.17
  Run 7       400128     402995        -2867          -0.72
  Run 8       397516     395110         2406           0.61
  Run 9       396748     403507        -6759          -1.7
  Run 10      396263     403712        -7449          -1.88
  Average   396213.4     397286      -1072.6          -0.27
  Std dev    2497.59   11150.79     10620.52           2.69

==> Similar results

Benchmarking: LM DES [128/128 BS SSE2]... DONE

              LLVM      GCC   Difference   Difference %
  Run 1       6879    11433        -4554         -66.2
  Run 2       8984    12252        -3268         -36.38
  Run 3       9142    12182        -3040         -33.25
  Run 4       8802    12205        -3403         -38.66
  Run 5       8756    11971        -3215         -36.72
  Run 6       9227    12224        -2997         -32.48
  Run 7       8667    12191        -3524         -40.66
  Run 8       9163    11942        -2779         -30.33
  Run 9       9117    12254        -3137         -34.41
  Run 10      9076    12166        -3090         -34.05
  Average   8781.3    12082      -3300.7         -38.31
  Std dev   695.06   252.95        487.97         10.26

==> This one is interesting, as gcc is faster by nearly 40%. I have no idea why, but anyone interested could take a look at LM_fmt.c, which seems to define the test and the source for this algorithm.

Olivier.
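For readers who want to recompute the summary rows, here is a minimal sketch, using the LM DES runs above, of how the Average, Std dev and Difference % figures appear to have been produced (sample standard deviation over the 10 runs, percentages taken relative to the llvm-gcc figure). The script itself and its variable names are purely illustrative.

    from statistics import mean, stdev

    # Per-run results for "LM DES [128/128 BS SSE2]" from the table above.
    llvm = [6879, 8984, 9142, 8802, 8756, 9227, 8667, 9163, 9117, 9076]
    gcc  = [11433, 12252, 12182, 12205, 11971, 12224, 12191, 11942, 12254, 12166]

    diff = [l - g for l, g in zip(llvm, gcc)]
    pct  = [100.0 * d / l for d, l in zip(diff, llvm)]

    # The sample (n - 1) standard deviation appears to reproduce the reported
    # 695.06 / 252.95 figures; the averages reproduce 8781.3 / 12082.
    print("Average: %.1f %.1f %.1f %.2f" % (mean(llvm), mean(gcc), mean(diff), mean(pct)))
    print("Std dev: %.2f %.2f %.2f %.2f" % (stdev(llvm), stdev(gcc), stdev(diff), stdev(pct)))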
Stefano Delli Ponti
2009-Sep-16 13:46 UTC
[LLVMdev] FYI: Phoronix GCC vs. LLVM-GCC benchmarks
Olivier Meurant:
> I have run the john the ripper test.
> I used the official archive (the same version as Phoronix) from
> http://www.openwall.com/john/g/john-1.7.3.1.tar.bz2
>
> To build with llvm-gcc, replace the line "CC = gcc" with "CC = llvm-gcc".
> I used the following command to build: make clean linux-x86-sse2
> (which seems to be the best target on x86-32).
> The resulting compiler invocation is "gcc -c -Wall -O2 -fomit-frame-pointer
> -funroll-loops src.c" and "llvm-gcc -c -Wall -O2 -fomit-frame-pointer
> -funroll-loops src.c".

I don't know what you think about this, but wouldn't it be more meaningful to run these tests with -O3? I mean, we ought to compare at the highest optimization level available in both compilers; a comparison at an intermediate level is hard to interpret.

Stefano
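If someone wants to repeat the measurement at -O3 as Stefano suggests, a small driver along the following lines could automate the two builds. This is only a sketch: the directory layout, the assumption that the literal "-O2" from the flags quoted above appears in the Makefile, and the log file names are all illustrative assumptions, not taken from the thread.

    import subprocess
    from pathlib import Path

    # Hypothetical paths; adjust to wherever the tarball was unpacked.
    src = Path("john-1.7.3.1/src")
    run = Path("john-1.7.3.1/run")

    def build_and_test(opt_level):
        """Patch the optimization level in the Makefile, rebuild, run the benchmark."""
        makefile = src / "Makefile"
        original = makefile.read_text()
        try:
            # Assumes the "-O2" from the flags quoted above is spelled out in the Makefile.
            makefile.write_text(original.replace("-O2", opt_level))
            subprocess.run(["make", "clean", "linux-x86-sse2"], cwd=src, check=True)
            out = subprocess.run(["./john", "--test"], cwd=run, check=True,
                                 capture_output=True, text=True)
            # Keep the raw benchmark output for later comparison.
            (run / ("john-test-%s.log" % opt_level.lstrip("-"))).write_text(out.stdout)
        finally:
            makefile.write_text(original)   # restore the original Makefile

    for opt in ("-O2", "-O3"):
        build_and_test(opt)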