search for: 10x

Displaying 20 results from an estimated 430 matches for "10x".

2006 Mar 27
2
[LLVMdev] PR723: Default To Optimized Build
...ake ENABLE_OPTIMIZED=1 >> ENABLE_ASSERTIONS=1' that provides this. > > How does this compare size- and performance-wise with a debug build or a > release build? I haven't done any scientific measurements. Here is some educated guessing :) I know that a debug build is about 10x bigger and 10x slower than a release build. Linking also takes about 10x as long with a debug build as it does for a release build. With assertions on, linking takes virtually the same time as a release build; I would expect it to also be about the same speed as a release build: a WAG would...
2012 Apr 06
3
Outlook (2010) -> Dovecot (IMAP) >10x slower with high network load and many folders
Hi, I am seeing >10x slower performance when trying to complete a "send/receive" from an Outlook 2010 client to Dovecot via IMAP, but only when the LAN is fully loaded with other traffic, e.g. file copying. It seems the problem is when Outlook is trying to identify folders that have changed since the last "se...
2010 May 31
2
mirror writes 10x slower than individual writes
...erred in 98.236700 secs (15935360 bytes/sec) Drive 2: Western Digital 1.5TB green drive: 1565437216 bytes transferred in 71.745737 secs (21819237 bytes/sec) However, when the two drives were mirrored, after all resilvering completed and there was no background I/O, the write performance was about 10x worse. Watching `zpool iostat -v 2` I could see that quite often drive 1 would write a big chunk of data and then wait for ages for drive 2 to write the same data to disc. Could it be that there is a separate cache for the mirror that was stalling waiting on the cache for the larger drive?? Woul...
2005 Jan 13
0
Re: Budgetone 10x & mwi
asterisk-users-request@lists.digium.com is believed to have said: >Ronald, it's the context listed in voicemail.conf (I got caught on this >as well) > >I really wish Asterisk was better documented; it's bullshit the way it >stands at the moment. > > >Cheers, >Dean > Dean, so if I have two contexts defined in voicemail.conf, like: [general] [local]
2009 Feb 08
0
rails-dev-boost - make Rails in dev mode 10x faster
Hey guys, I'm a coauthor of a Rails plugin that patches a few things to make development mode almost as fast as production mode (while still reloading classes and templates). Check it out on GitHub: https://github.com/thedarkone/rails-dev-boost Unfortunately it is only compatible with Rails 2.2 (checkout the master branch of the plugin) or Rails 2.3 i.e. v2.3.0 tag (checkout the
2006 Mar 27
0
[LLVMdev] PR723: Default To Optimized Build
...NS=1' that provides this. >> >> How does this compare size- and performance-wise with a debug build >> or a >> release build? > > I haven't done any scientific measurements. Here is some educated > guessing :) > > I know that a debug build is about 10x bigger and 10x slower than a > release build. Linking also takes about 10x as long with a debug > build as it does for a release build. > > With assertions on, linking takes virtually the same time as a > release build; I would expect it to also be about the same speed as...
2009 Apr 13
6
Memcached 1.6.5 (Rails 2.3) 10x slower
The move to memcached_client 1.6.5 in Rails 2.3 seems to have made the Rails cache about 10x slower. Since that's the opposite effect I would expect, I was hoping somebody could explain where I'm misreading these numbers. I noticed my fragment caching was slow -- it shouldn't take 2ms just to read a 2k string from a localhost memcached server: Cached fragment h...
2003 Mar 04
2
e2fsck on ext3 is 10x slower than ext2
Hi. I'm using Debian. Is this a Red Hat-only list, or is it only hosted by Red Hat? I recently changed my file systems over to ext3, but deliberately left the forced boot-check parameters alone, so my system checks after 20 mounts. I notice that the fsck takes a good ten times longer than under ext2 to perform the cleanly-unmounted check. (On the occasion where I did unmount dirtily, the
2006 Mar 27
2
[LLVMdev] PR723: Default To Optimized Build
...this. >>> >>> How does this compare size- and performance-wise with a debug build or a >>> release build? >> >> I haven't done any scientific measurements. Here is some educated guessing >> :) >> >> I know that a debug build is about 10x bigger and 10x slower than a release >> build. Linking also takes about 10x as long with a debug build as it >> does for a release build. >> >> With assertions on, linking takes virtually the same time as a release >> build; I would expect it to also be about th...
2007 Apr 02
5
[LLVMdev] CVS Branches To Discard?
All, We are considering removing some branches and tags in the conversion process from CVS to SVN. We don't want to do this in a vacuum, so please read carefully. A deficiency in the cvs2svn script causes it to bloat the Subversion repository (significantly, as in 10x) in the conversion of branches and tags. We can minimize the impact of this by only keeping branches and tags that we really need. This gives us an opportunity to clean up some "dead wood" too. Obviously, I don't want to do this without your feedback (losing your needed branch/tag wo...
2011 Jun 27
20
[PATCH 0 of 5] v2: Nested-p2m cleanups and locking changes
This patch series tidies up a few bits of the nested p2m code. The main thing it does is reorganize the locking so that most of the changes to nested p2m tables happen only under the p2m lock, and the nestedp2m lock is only needed to reassign p2m tables to new cr3 values. Changes since v1: - a few minor fixes - more sensible flushing policy in p2m_get_nestedp2m() - smoke-tested this time!
2003 Nov 12
1
Power (^) 10x slower in R since version 1.7.1... What next?
OK, I have done a little searching about this "problem", which apparently occurs only on the Windows platform... (but I am sure most of you are already aware of it): the slowdown is due to the adoption of a different algorithm for pow in mingw 3.x. This is motivated by some other changes in mingw. Here is a quote from Danny Smith, who made this change: >When mingw changed default FPU settings
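A minimal R sketch, not from the thread, of the kind of micro-benchmark that makes such a slowdown visible: timing ^ against an exp()/log() equivalent on a large vector. The vector, exponent and repeat count below are arbitrary choices for illustration.

# Hypothetical micro-benchmark; x, y and the loop count are arbitrary.
x <- runif(1e6) + 1   # keep values strictly positive so log() is defined
y <- 2.5
system.time(for (i in 1:20) r1 <- x^y)               # ^ goes through the platform's pow()
system.time(for (i in 1:20) r2 <- exp(y * log(x)))   # same values computed via exp/log
max(abs(r1 - r2))   # should agree up to floating-point rounding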
2017 Apr 24
8
[PATCH net-next v3 0/5] virtio-net tx napi
...er (github.com/google/neper) as test process. Napi increases single stream throughput, but increases cycle cost. The optimizations bring this down. The previous patchset saw a regression with UDP_STREAM, which does not benefit from cleaning tx interrupts in rx napi. This regression is now gone for 10x, 100x. Remaining difference is higher 1x TCP_STREAM, lower 1x UDP_STREAM. The latest results are with process, rx napi and tx napi affine to the same core. All numbers are lower than the previous patchset. upstream napi TCP_STREAM: 1x: Mbps 27816 39805 Gcycles...
2003 Jun 23
3
paid-for-print
Hi there, I'm a newbie to Samba, so please excuse me if my questions are stupid. Is it possible to use Samba as a "paid-for-print" server? If you know of any other free "paid-for-print" software, please let me know. 10x in advance, -- Julian
2009 Aug 12
2
10x slower merge in mac 2.9.1 vs. 2.9.0 (PR#13890)
Full_Name: Rick Stahlhut Version: 2.9.1 OS: os x 10.5.7 Submission from: (NULL) (128.151.71.23) I upgraded to 2.9.1 today from 2.9.0. I work with large CDC (Centers for Disease Control) datasets and frequently start with a series of 23 large-ish merges to create the final dataset I work on. I do this each time because (a) R is fast, so why not? and (b) the datasets occasionally get updated by
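A small R sketch of the workflow described above, with invented data standing in for the CDC files (names, sizes and the key column are made up). Timing a chain of merges like this under 2.9.0 and 2.9.1 is the natural way to pin down the regression.

# Hypothetical stand-in for the 23 merges; sizes and column names are made up.
set.seed(1)
parts <- lapply(1:23, function(i) {
  df <- data.frame(id = 1:5000, val = rnorm(5000))
  names(df)[2] <- paste0("val.", i)   # give each part a distinct value column
  df
})
# Merge everything on the common key and time it.
system.time(final <- Reduce(function(a, b) merge(a, b, by = "id"), parts))
dim(final)   # 5000 rows, 24 columns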
2008 Sep 09
1
DTrace and shared memory id
...y { self->key = arg0; self->size = arg1; self->flags = arg2; self->shmgettimestamp = timestamp; self->shmgetvtimestamp = vtimestamp; } fbt::shmget:return / self->shmgettimestamp / { /* operation key shmid size flags cpu wall */ printf("%-10s %-10x %-10x %-10d %-10d %-10d %-10d\n", probefunc, self->key, arg1, self->size, self->flags, vtimestamp - self->shmgetvtimestamp,...
2004 Jul 06
3
Improving efficiency - better table()?
...I've been using cut2 from Hmisc to divide up the range into a specified number of cells and then using table to count how many observations appear in each cell. > obs <- table(cut2(z.trun, cuts=breaks)) Having done this, I've found that the code takes much longer to run - up to 10x as long. Is there a more efficient way of doing this? Anyone have any thoughts? -- SC Simon Cullen Room 3030 Dept. Of Economics Trinity College Dublin Ph. (608)3477 Email cullens at tcd.ie
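A rough base-R sketch of the binning described above, with simulated values standing in for z.trun and breaks (neither is shown in the post). findInterval() plus tabulate() usually avoids most of the cost of building a factor and calling table(), though its handling of interval endpoints differs slightly from cut()'s.

# Simulated stand-ins; the real z.trun and breaks come from the poster's data.
set.seed(1)
z.trun <- rnorm(1e6)
breaks <- seq(-4, 4, by = 0.5)
# Approach from the post, with base cut() in place of Hmisc::cut2():
obs1 <- table(cut(z.trun, breaks = breaks))
# Usually faster: findInterval() maps each value to a bin index and tabulate()
# counts them; values outside the break range are dropped rather than set to NA.
idx  <- findInterval(z.trun, breaks)
obs2 <- tabulate(idx, nbins = length(breaks) - 1)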
2006 Mar 27
0
[LLVMdev] PR723: Default To Optimized Build
On Mon, 2006-03-27 at 11:47, Chris Lattner wrote: > On Mon, 27 Mar 2006, John Criswell wrote: > > One consideration to weigh is that a debug build of LLVM provides users with > > more diagnostic information to submit with bug reports (since many bugs are > > caught by assertions, which print a readable stack trace). The tradeoff > > seems to be faster and smaller
2007 Nov 25
2
[LLVMdev] OCaml
...c.edu/pipermail/llvmdev OCaml interface I just rediscovered the OCaml bindings in bindings/ocaml (rather than the ones in test/Bindings/OCaml!). They do indeed look quite complete but I can't find any examples using them. I think a translation of the tutorial would be most welcome and about 10x shorter. ;-) -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/?e