Displaying 20 results from an estimated 5530 matches for "slowering".
Did you mean: lowering
2016 Nov 16 · 3 · LLD: time to enable --threads by default
On 16 November 2016 at 15:52, Rafael Espíndola
<rafael.espindola at gmail.com> wrote:
> I will do a quick benchmark run.
On a mac pro (running linux) the results I got with all cores available:
firefox
master 7.146418217
patch 5.304271767 1.34729488437x faster
firefox-gc
master 7.316743822
patch 5.46436812 1.33899174824x faster
chromium
master 4.265597914
patch
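
The speedup figures above are just the master link time divided by the patched link time (7.146 / 5.304 ≈ 1.35x). The gain comes from running independent pieces of linker work on multiple cores, which is what --threads enables. What follows is a minimal, hypothetical C++ sketch of that idea, not LLD's actual code; processItem and processAll are made-up names standing in for real linker passes.

#include <algorithm>
#include <future>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Stand-in for one independent unit of linker work (hypothetical).
static long long processItem(long long item) { return item * item; }

// Split the items into roughly equal chunks, one async task per chunk.
long long processAll(const std::vector<long long> &items) {
  unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
  size_t chunk = (items.size() + nThreads - 1) / nThreads;

  std::vector<std::future<long long>> futures;
  for (size_t begin = 0; begin < items.size(); begin += chunk) {
    size_t end = std::min(items.size(), begin + chunk);
    futures.push_back(std::async(std::launch::async, [&items, begin, end] {
      long long sum = 0;
      for (size_t i = begin; i < end; ++i)
        sum += processItem(items[i]);
      return sum;
    }));
  }

  long long total = 0;
  for (auto &f : futures)
    total += f.get(); // join all workers and combine their results
  return total;
}

int main() {
  std::vector<long long> items(100000);
  std::iota(items.begin(), items.end(), 0);
  std::cout << "result: " << processAll(items) << "\n";
}

How much this helps depends on how much of the link is actually parallelizable and on memory bandwidth, which is consistent with the measured speedups being well below the core count.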
2016 Nov 17 · 3 · LLD: time to enable --threads by default
SHA1 in LLVM is *very* naive, any improvement is welcome there!
I think Amaury pointed it out originally, and he had an alternative implementation IIRC.
—
Mehdi
> On Nov 16, 2016, at 3:58 PM, Rui Ueyama via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> By the way, while running benchmark, I found that our SHA1 function seems much slower than the one in gold. gold slowed down by
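
A claim like this is easiest to check with a standalone throughput micro-benchmark before swapping implementations. The sketch below shows only the timing harness; hashBuffer is a deliberately trivial FNV-1a stand-in so the example compiles and runs on its own, and is not SHA-1, LLVM's implementation, or gold's.

#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// Stand-in hash so the sketch is self-contained; swap in the SHA-1
// implementation under test. This is FNV-1a, NOT SHA-1.
static uint64_t hashBuffer(const std::vector<uint8_t> &buf) {
  uint64_t h = 14695981039346656037ULL;
  for (uint8_t b : buf) {
    h ^= b;
    h *= 1099511628211ULL;
  }
  return h;
}

int main() {
  std::vector<uint8_t> buf(64 * 1024 * 1024, 0xAB); // 64 MiB of dummy input
  auto start = std::chrono::steady_clock::now();
  volatile uint64_t digest = hashBuffer(buf); // keep the call from being elided
  auto stop = std::chrono::steady_clock::now();
  double secs = std::chrono::duration<double>(stop - start).count();
  double mib = buf.size() / (1024.0 * 1024.0);
  std::cout << "hashed " << mib << " MiB in " << secs << " s ("
            << mib / secs << " MiB/s), digest = " << digest << "\n";
}

Replacing hashBuffer with each real SHA-1 implementation and comparing the MiB/s figures would show how far apart they actually are.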
2016 Nov 17 · 2 · LLD: time to enable --threads by default
The current implementation was “copy/pasted” from somewhere (it was explicitly public domain).
> On Nov 16, 2016, at 4:05 PM, Rui Ueyama <ruiu at google.com> wrote:
>
> Can we just copy-and-paste optimized code from somewhere?
>
> On Wed, Nov 16, 2016 at 4:03 PM, Mehdi Amini <mehdi.amini at apple.com <mailto:mehdi.amini at apple.com>> wrote:
> SHA1 in LLVM is
2010 Aug 08 · 0 · [LLVMdev] MmapAllocator
Hi Steven-
Nice, but will this not break Windows? From an initial glance over your patch, it seems to assume the existence of mmap() in some form or other.
Alistair
On 8 Aug 2010, at 03:05, Steven Noonan wrote:
> Hi folks,
>
> I've been doing work on memory reduction in Unladen Swallow, and
> during testing, LiveRanges seemed to be consuming one of the largest
> chunks of
2010 Aug 08 · 4 · [LLVMdev] MmapAllocator
Hi folks,
I've been doing work on memory reduction in Unladen Swallow, and
during testing, LiveRanges seemed to be consuming one of the largest
chunks of memory. I wrote a replacement allocator for use by
BumpPtrAllocator which uses mmap()/munmap() in place of
malloc()/free(). It has worked flawlessly in testing, and reduces
memory usage quite nicely in Unladen Swallow.
The code is available
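
For readers who have not seen the patch, the idea being described is a bump allocator whose slabs are obtained with mmap() and released with munmap() instead of going through malloc()/free(). The following is a minimal POSIX-only sketch of that idea, not Steven's actual code; as the reply above notes, real code would also need a non-mmap path for Windows.

#include <cstddef>
#include <cstdio>
#include <new>
#include <sys/mman.h>

class MmapBumpAllocator {
  static constexpr size_t SlabSize = 1 << 20; // 1 MiB slabs
  char *Slab = nullptr;
  size_t Used = SlabSize; // forces a fresh slab on the first allocation

public:
  // Assumes Size <= SlabSize; a real allocator would handle large requests.
  void *allocate(size_t Size, size_t Align = 8) {
    Used = (Used + Align - 1) & ~(Align - 1);
    if (Used + Size > SlabSize) {
      void *P = mmap(nullptr, SlabSize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (P == MAP_FAILED)
        throw std::bad_alloc();
      Slab = static_cast<char *>(P);
      Used = 0;
      // A real allocator would remember every slab so it can munmap() them
      // later; this sketch leaks old slabs for brevity.
    }
    void *Result = Slab + Used;
    Used += Size;
    return Result;
  }
};

int main() {
  MmapBumpAllocator A;
  int *Xs = static_cast<int *>(A.allocate(128 * sizeof(int), alignof(int)));
  Xs[0] = 42;
  std::printf("first int lives at %p, value %d\n", static_cast<void *>(Xs), Xs[0]);
}

A complete allocator would track every slab so it could unmap them all on destruction; the sketch skips that bookkeeping to stay short.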
2008 Nov 11 · 3 · Use the NEW ulaw/alaw codecs (slower, but cleaner)
In Asterisk 1.6, there is an option to use the 'new g.711 algorithm'.
"Use the NEW ulaw/alaw codec's (slower, but cleaner)"
By slower does this mean more 'expensive', or does it instead mean that there will be more algorithmic latency? Both? Can anyone speak to the relative increases?
With regard to accuracy, can anyone speak to what kind of situation might
2015 Jul 11 · 3 · [LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
Hi
I updated my clone of the LLVM github mirror today and I am finding
that the JIT compilation is now 2-3 times slower. The last time I
refreshed was maybe 2 weeks ago. Is there a known change that would
explain this?
This is on Windows 8.1 64-bit. I am using MCJIT.
Thanks and Regards
Dibyendu
2006 Aug 23 · 3 · Rsync push is slower compared to pull
Hi,
It has been observed that rsync push mode is much slower compared
to pull (in identical scenarios). Building/receiving the file list takes
almost the same time, but data transfer is much slower, with transfer
ratios ranging from 1:3 to 1:5. On pull, data transfer speed is
consistently around 3.5 MB/sec and has reached 10 MB/sec.
However, on push the maximum it could reach is
2009 Nov 19 · 0 · [LLVMdev] Google's Go
On Thursday 19 November 2009 19:48:18 Owen Anderson wrote:
> On Nov 19, 2009, at 10:25 AM, Jon Harrop wrote:
> >> In this case, the assertion that LLVM is slow is correct: it's
> >> definitely slower than a non-optimizing compiler.
> >
> > I'm *very* surprised by this and will test it myself...
I've tested it and LLVM is indeed 2x slower to compile,
2009 Nov 19 · 7 · [LLVMdev] Google's Go
On Nov 19, 2009, at 10:25 AM, Jon Harrop wrote:
>
>
>> In this case, the assertion that LLVM is slow is correct: it's
>> definitely slower than a non-optimizing compiler.
>
> I'm *very* surprised by this and will test it myself...
Compared to a compiler in the same category as PCC, whose pinnacle of optimization is doing register allocation? I'm not
2006 May 19 · 1 · index.update is 10 times slower than index.add_doc. Normal?
Hi,
I am seeing that
doc = index['mykey']
index.update 'mykey', doc
is about 10 times slower than
doc = Document.new
doc['id'] = 'mykey'
index << doc
It looks like #update is _much_ slower than #<<. Is this expected?
Sergei.
2012 Sep 04 · 4 · [LLVMdev] Clang/llvm performance tests on FreeBSD 10.0-CURRENT
Hi all,
I recently performed a series of compiler performance tests on FreeBSD
10.0-CURRENT, particularly comparing gcc 4.2.1 and gcc 4.7.1 against
clang 3.1 and clang 3.2.
The attached text file[1] contains more information about the tests,
some semi-cooked performance data, and my conclusions. Any errors and
omissions are also my fault, so if you notice them, please let me know.
The
2015 Jul 11 · 2 · [LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
If you could create a single executable with a repro, and publish them
somewhere (both the old llvm and new one which is slower), we could
look at it. It's hard to say otherwise unless someone truly knows of a
change.
I'm interested in looking at this, because my experience has been
opposite, compile times improved for my compiler in the past month. So
I'm inclined to understand what
2020 Oct 06 · 3 · Performance Question: Lots of Small Files vs One Large File
Is this a protocol issue? A decade ago I saw that writes to small files of less
than 16 KB were awful, because the cost of opening and other file ops dwarfed
writing the actual content. So very small files should be slow, but these files
are 10 MB each.
Or is this a Windows issue? If so, what's causing the problem?
Just trying to understand.
On Tue, Oct 6, 2020 at 2:39 PM Ralph Boehme <slow at
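
The small-file effect recalled above (per-file open/close and metadata cost dwarfing the actual write) is easy to reproduce locally. Below is a hypothetical C++ sketch that times many small writes against one large write of the same total size; it exercises only the local filesystem, so it leaves out the SMB protocol round-trips that matter in the Samba case.

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

static double seconds(std::chrono::steady_clock::time_point a,
                      std::chrono::steady_clock::time_point b) {
  return std::chrono::duration<double>(b - a).count();
}

int main() {
  const size_t FileCount = 1000;
  const size_t FileSize = 16 * 1024; // 16 KiB per small file
  std::vector<char> payload(FileSize, 'x');

  auto t0 = std::chrono::steady_clock::now();
  for (size_t i = 0; i < FileCount; ++i) {
    // Each iteration pays an open + close + metadata cost.
    std::ofstream out("small_" + std::to_string(i) + ".bin", std::ios::binary);
    out.write(payload.data(), payload.size());
  }
  auto t1 = std::chrono::steady_clock::now();

  // Same total volume, but a single open/close.
  std::ofstream big("big.bin", std::ios::binary);
  for (size_t i = 0; i < FileCount; ++i)
    big.write(payload.data(), payload.size());
  big.close();
  auto t2 = std::chrono::steady_clock::now();

  std::cout << FileCount << " small files: " << seconds(t0, t1) << " s\n"
            << "one large file:  " << seconds(t1, t2) << " s\n";
}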
2007 Jan 11 · 1 · RC16 is a lot slower than RC7
I tried RC16, coming from RC7, and like other releases since RC7 it is
significantly slower. I'm still trying to figure out why, but I have
switched back again. Do new versions take up a lot more RAM perhaps? I'm
running with a gig of RAM, and I'm running a few other apps. It's not a
fast computer, just an old socket 754 Sempron. But the point is that RC7
runs a lot faster than RC16.
2005 Oct 13 · 1 · 1.0 alpha3 slower?
Hi
Is 1.0 alpha3 a bit slower than the dovecot-1.0.stable branch?
Regards
Daniel
2009 May 06 · 3 · Wine slower on x64 machines
I have noticed that Wine runs slower on x64 machines. Why is that? Is there any way to overcome it?
2002 Oct 09 · 1 · Why is vp31 codec 2x - 3x slower than DivX codec?
I have a simple test program which encodes, decodes and displays a single video stream ( at 2-3 fps ). It takes about 15% of CPU time when it uses DivX codec, but 35% when the VP31 codec is active. Is there any way to speed encoding/decoding with VP31 ? Or maybe VP31 is just much slower than DivX ?
Kamil
2015 Jul 11 · 2 · [LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
On 11 July 2015 at 13:14, Caldarale, Charles R
<Chuck.Caldarale at unisys.com> wrote:
>> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu]
>> On Behalf Of Dibyendu Majumdar
>> Subject: [LLVMdev] JIT compilation 2-3 times slower in latest LLVM snapshot
>
>> I updated my clone of the LLVM github mirror today and I am finding
>> that
2007 Oct 23 · 4 · How to debug slow operation
Hello,
I am no programmer at all. I have Dovecot set up on two very similar
machines. On one of the boxes, the Dovecot server has recently been
showing slower response times; it feels slower. On the box where it is
slower, Dovecot also works as an auth server for Postfix. I replaced
Exim with Postfix on that box, and I think Dovecot started to slow down
since then. On the other machine the MTA is Exim.