search for: slower

Displaying 20 results from an estimated 5504 matches for "slower".

2016 Nov 16
3
LLD: time to enable --threads by default
...8x faster
chromium fast          master 1.823614026   patch 1.686059427   1.08158348205x faster
the gold plugin        master 0.340167513   patch 0.318601465   1.06768973269x faster
clang                  master 0.579914119   patch 0.520784947   1.11353855817x faster
llvm-as                master 0.03323043    patch 0.041571719   1.251013574x slower
the gold plugin fsds   master 0.36675887    patch 0.350970944   1.04498356992x faster
clang fsds             master 0.656180056   patch 0.591607603   1.10914743602x faster
llvm-as fsds           master 0.030324313   patch 0.040045353   1.32056917497x slower
scylla                 master 3.23378908    patch 2.019191831   1.60152642773x...
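The "x faster" / "x slower" figures in listings like the one above are simply the ratio of the longer run time to the shorter one. A minimal sketch of that arithmetic, reusing two rows from the table (chromium fast and llvm-as); this is not code from the thread itself:

    #include <cstdio>

    // Ratio as reported in the benchmark excerpt above: the larger time divided
    // by the smaller one, labelled "faster" when the patched run is the smaller.
    static void report(const char *name, double master, double patch) {
      bool patch_is_faster = patch < master;
      double ratio = patch_is_faster ? master / patch : patch / master;
      std::printf("%-20s %.11fx %s\n", name, ratio,
                  patch_is_faster ? "faster" : "slower");
    }

    int main() {
      report("chromium fast", 1.823614026, 1.686059427); // -> ~1.0816x faster
      report("llvm-as", 0.03323043, 0.041571719);        // -> ~1.2510x slower
      return 0;
    }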
2016 Nov 17
3
LLD: time to enable --threads by default
...elcome there! I think Amaury pointed it originally and he had an alternative implementation IIRC. — Mehdi

> On Nov 16, 2016, at 3:58 PM, Rui Ueyama via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> By the way, while running the benchmark, I found that our SHA1 function seems much slower than the one in gold. gold slowed down by only 1.3 seconds to compute a SHA1 of the output, but we spent 6.0 seconds to do the same thing (I believe). Something doesn't seem right.
>
> Here is a table to link the same binary with -no-threads and -build-id={none,md5,sha1}. The numbers are in...
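The comparison being made in that excerpt (roughly 1.3 s in gold versus 6.0 s in lld to hash the output for -build-id) boils down to timing one hash pass over the whole output image. A rough, self-contained sketch of such a measurement; the FNV-1a routine below is only a stand-in for a real SHA-1 implementation, and the 1 GiB buffer size is an arbitrary assumption:

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Stand-in hash so the sketch is self-contained; a real comparison would
    // call the linker's actual SHA-1 routine here instead.
    static std::uint64_t fnv1a(const std::uint8_t *p, std::size_t n) {
      std::uint64_t h = 14695981039346656037ULL;
      for (std::size_t i = 0; i < n; ++i) {
        h ^= p[i];
        h *= 1099511628211ULL;
      }
      return h;
    }

    int main() {
      std::vector<std::uint8_t> output(std::size_t(1) << 30, 0xab); // pretend linker output, 1 GiB
      auto t0 = std::chrono::steady_clock::now();
      std::uint64_t digest = fnv1a(output.data(), output.size());
      auto t1 = std::chrono::steady_clock::now();
      std::printf("digest=%016llx  %.2f s\n", (unsigned long long)digest,
                  std::chrono::duration<double>(t1 - t0).count());
      return 0;
    }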
2016 Nov 17
2
LLD: time to enable --threads by default
...ive implementation IIRC.
>
> —
> Mehdi
>
>> On Nov 16, 2016, at 3:58 PM, Rui Ueyama via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> By the way, while running the benchmark, I found that our SHA1 function seems much slower than the one in gold. gold slowed down by only 1.3 seconds to compute a SHA1 of the output, but we spent 6.0 seconds to do the same thing (I believe). Something doesn't seem right.
>>
>> Here is a table to link the same binary with -no-threads and -build-id={none,md5,sha1}. The numbers...
2010 Aug 08
0
[LLVMdev] MmapAllocator
...tp://tinyurl.com/36q766k
>
>
> And to gauge the performance impact, I also ran the speed tests. It
> seems using mmap()/munmap() has very little performance impact in
> either direction, so that's good:
>
> ### 2to3 ###
> 35.590589 -> 35.824554: 1.0066x slower
>
> ### bzr_startup ###
> Min: 0.157976 -> 0.155976: 1.0128x faster
> Avg: 0.167575 -> 0.168924: 1.0081x slower
> Not significant
> Stddev: 0.00334 -> 0.00716: 2.1463x larger
> Timeline: http://tinyurl.com/39thymp
>
> ### call_method ###
> Min: 0.878663 ->...
2010 Aug 08
4
[LLVMdev] MmapAllocator
...> 6844.000: 1.0023x smaller
Usage over time: http://tinyurl.com/36q766k

And to gauge the performance impact, I also ran the speed tests. It seems using mmap()/munmap() has very little performance impact in either direction, so that's good:

### 2to3 ###
35.590589 -> 35.824554: 1.0066x slower

### bzr_startup ###
Min: 0.157976 -> 0.155976: 1.0128x faster
Avg: 0.167575 -> 0.168924: 1.0081x slower
Not significant
Stddev: 0.00334 -> 0.00716: 2.1463x larger
Timeline: http://tinyurl.com/39thymp

### call_method ###
Min: 0.878663 -> 0.884666: 1.0068x slower
Avg: 0.887148 -> 0.8...
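For context, the change being measured in that thread swaps heap allocation for direct mmap()/munmap() calls. A minimal, POSIX-only sketch of that kind of allocation path (not the patch from the thread; the 1 MiB block size is arbitrary):

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Back an allocation with an anonymous private mapping instead of malloc().
    static void *map_alloc(std::size_t size) {
      void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return p == MAP_FAILED ? nullptr : p;
    }

    static void map_free(void *p, std::size_t size) {
      if (p) munmap(p, size);
    }

    int main() {
      const std::size_t size = 1 << 20;   // 1 MiB, arbitrary for the demo
      void *block = map_alloc(size);
      if (!block) { std::perror("mmap"); return 1; }
      std::memset(block, 0, size);        // touch the pages so they are actually committed
      std::printf("mapped %zu bytes at %p\n", size, block);
      map_free(block, size);
      return 0;
    }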
2008 Nov 11
3
Use the NEW ulaw/alaw codecs (slower, but cleaner)
In Asterisk 1.6, there is an option to use the 'new g.711 algorithm'. "Use the NEW ulaw/alaw codec's (slower, but cleaner)" By slower does this mean more 'expensive', or does it instead mean that there will be more algorithmic latency? Both? Can anyone speak to the relative increases? With regard to accuracy, can anyone speak to what kind of situation might demonstrate the benefit of the...
2015 Jul 11
3
[LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
Hi, I updated my clone of the LLVM github mirror today and I am finding that JIT compilation is now 2-3 times slower. The last time I refreshed was maybe 2 weeks ago. Is there a known change that would explain this? This is on Windows 8.1 64-bit. I am using MCJIT. Thanks and Regards, Dibyendu
2006 Aug 23
3
Rsync push is slower compared to pull
Hi, it has been observed that rsync push mode is much slower than pull in identical scenarios. Building/receiving the file list takes almost the same time, but data transfer is much slower, with transfer ratios ranging from 1:3 to 1:5. On pull, data transfer speed is consistently around 3.5 MB/sec and has reached 10 MB/sec. However, on pus...
2009 Nov 19
0
[LLVMdev] Google's Go
On Thursday 19 November 2009 19:48:18 Owen Anderson wrote: > On Nov 19, 2009, at 10:25 AM, Jon Harrop wrote: > >> In this case, the assertion that LLVM is slow is correct: it's > >> definitely slower than a non-optimizing compiler. > > > > I'm *very* surprised by this and will test it myself... I've tested it and LLVM is indeed 2x slower to compile, although it generates code that is 2x faster to run... > Compared to a compiler in the same category as PCC, whose pinnac...
2009 Nov 19
7
[LLVMdev] Google's Go
On Nov 19, 2009, at 10:25 AM, Jon Harrop wrote: > > >> In this case, the assertion that LLVM is slow is correct: it's >> definitely slower than a non-optimizing compiler. > > I'm *very* surprised by this and will test it myself... Compared to a compiler in the same category as PCC, whose pinnacle of optimization is doing register allocation? I'm not surprised at all. --Owen
2006 May 19
1
index.update is 10 times slower than index.add_doc. Normal?
Hi, I am seeing that

  doc = index['mykey']
  index.update 'mykey', doc

is about 10 times slower than

  doc = Document.new
  doc['id'] = 'mykey'
  index << doc

It looks like #update is _much_ slower than #<<. Is it as expected? Sergei.
2012 Sep 04
4
[LLVMdev] Clang/llvm performance tests on FreeBSD 10.0-CURRENT
...1878 1878 1878 0
nvcsw    6    59792    67936    60369   61859.833  3186.3204
nivcsw   6  2867702  4546665  4361653  3753550.8   769382.51

Summary:
--------
For building this specific large C++ program, gcc 4.2.1 is ~86% slower than clang 3.1 in real time, ~82% slower in user time, and ~176% slower in system time. The maximum resident set size during building is ~217% larger, and it causes ~279% more page reclaims. Though gcc 4.7.1 is faster than its older version, it is still ~68% slower than clang 3.1 in real time, ~6...
2015 Jul 11
2
[LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
If you could create a single executable with a repro and publish them somewhere (both the old LLVM and the new one, which is slower), we could look at it. It's hard to say otherwise unless someone truly knows of a change. I'm interested in looking at this, because my experience has been the opposite: compile times improved for my compiler in the past month. So I'm inclined to understand what your codegen path is doing...
2020 Oct 06
3
Performance Question: Lots of Small Files vs One Large File
...to
> > Windows, when we have 1 large 1GB file, the write performance to storage is
> > very fast, even over distances of 5000 miles. However, even writes to local
> > Samba servers, with 100 10MB files being copied onto a shared drive,
> > Windows Explorer is MUCH slower. I don't know if it's really Samba, but
> > more than likely Windows.
> >
> > Does anyone on channel have experience with copying multiple small files
> > onto shares? Are there any ways to make copying files onto a share faster?
>
> to save you a lot rock...
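Threads like this usually come down to per-file overhead (open/close and, on a network share, metadata round trips) rather than raw throughput. A rough local sketch of the same comparison; the file count, sizes, and the TARGET prefix are arbitrary assumptions, and pointing TARGET at a mounted share would exercise the SMB path instead:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <vector>

    // Write the same total amount of data as one large file and as many small
    // files, and report both timings; the per-file open/close cost is what
    // usually makes the small-file case slower, especially over a network share.
    static const char *TARGET = "./copytest";   // hypothetical path prefix
    static const std::size_t SMALL = 10u << 20; // 10 MiB per small file
    static const int COUNT = 100;               // 100 small files ~= 1 GiB total

    static double write_files(int count, std::size_t bytes_each, const char *tag) {
      std::vector<char> buf(1 << 20, 'x');      // 1 MiB write chunk
      auto t0 = std::chrono::steady_clock::now();
      for (int i = 0; i < count; ++i) {
        std::ofstream out(std::string(TARGET) + "-" + tag + "-" + std::to_string(i),
                          std::ios::binary);
        for (std::size_t done = 0; done < bytes_each; done += buf.size())
          out.write(buf.data(), buf.size());
      }
      return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
      std::printf("one large file:   %.2f s\n", write_files(1, SMALL * COUNT, "big"));
      std::printf("many small files: %.2f s\n", write_files(COUNT, SMALL, "small"));
      return 0;
    }

Note that this writes roughly 2 GiB of scratch data under the TARGET prefix, so run it somewhere with space to spare.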
2007 Jan 11
1
RC16 is a lot slower than RC7
I tried RC16 (coming from RC7) and, like other releases since RC7, it is significantly slower. I'm still trying to figure out why, but I have again switched back. Do new versions take up a lot more RAM, perhaps? I'm running with a gig of RAM, and I'm running a few other apps. It's not a fast computer, just an old 754-pin Sempron. But the point is that RC7 runs a lot faste...
2005 Oct 13
1
1.0 alpha3 slower?
Hi, Is 1.0 alpha3 a bit slower than the dovecot-1.0.stable branch? Regards, Daniel
2009 May 06
3
Wine slower on x64 machines
I have noticed that Wine runs slower on x64 machines. Why is that? Is there any way to overcome it?
2002 Oct 09
1
Why is vp31 codec 2x - 3x slower than DivX codec ?
I have a simple test program which encodes, decodes and displays a single video stream (at 2-3 fps). It takes about 15% of CPU time when it uses the DivX codec, but 35% when the VP31 codec is active. Is there any way to speed up encoding/decoding with VP31? Or maybe VP31 is just much slower than DivX? Kamil
2015 Jul 11
2
[LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
On 11 July 2015 at 13:14, Caldarale, Charles R <Chuck.Caldarale at unisys.com> wrote: >> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] >> On Behalf Of Dibyendu Majumdar >> Subject: [LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot > >> I updated my clone of the LLVM github mirror today and I am finding >> that the JIT compilation is now 2-3 times slower. The last time I >> refreshed was maybe 2 weeks ago. Is there a known change that would >> explain this? > > Debug+As...
2007 Oct 23
4
How to debug slow operation
Hello, I am no programmer at all. I have dovecot set up on two very similar machines. On one of the boxes, the dovecot server has recently been showing slower response times. It feels slower. On the box where it is slower, dovecot also works as an auth server for postfix. I replaced exim with postfix on that box, and I think dovecot started to slow down since then. On the other machine the MTA is exim. I'd like to ask how to go about debugging the problem. N...