Displaying 20 results from an estimated 24 matches for "plummet".
2005 Jun 03
2
S/N on this list is plummeting...
Is it possible to create an alternate list for the people who insist on
spending their days posting long-winded political commentary and
opinion? Maybe something like centos-offtopic?
Reading the list has gone from "fun" status to "chore."
Just a suggestion.
Cheers,
C
2006 Dec 20
2
RE: spandsp 0.0.3 RxFax fax reception crashes bristuffed asterisk 1.2.13 [Virusgeprüft]
...p,
but reception reliability in my specific installation was poor) I bit the
bullet and put in a separate Hylafax server connected to my Asterisk box
with a crossover cable, rolled up my sleeves, and started making IAXmodems -
1 per user. I am at over 200 IAXmodems, and my failure rate on faxes
plummeted to about 0.8% - more than comparable to a regular fax machine.
AFAIC, Hylafax + IAXmodem is the way to go for anything serious, unless we
are talking about thousands of users and thousands of faxes per day. I don't
even know what could be scaled to that scenario and not be unmanageable.
2017 Sep 08
2
cyrus spool on btrfs?
m.roth at 5-cent.us wrote:
> Mark Haney wrote:
>> On 09/08/2017 09:49 AM, hw wrote:
>>> Mark Haney wrote:
> <snip>
>>>
>>> It depends, i.e. I can't tell how these SSDs would behave if large
>>> amounts of data would be written and/or read to/from them over extended
>>> periods of time because I haven't tested that. That isn't the
2008 Aug 22
2
[LLVMdev] Dependence Analysis [was: Flow-Sensitive AA]
On Aug 22, 2008, at 4:49 PM, John Regehr wrote:
> Has anyone quantified the optimizations afforded by undefined signed
> overflow? I'd expect that the benefits are minimal for most codes.
In most cases, I agree. But for codes that depend heavily on
dependence analysis, I would think that being conservative with index
expressions would really kill any disambiguation capability and
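As a minimal sketch of that point (hypothetical code, not from the thread): because signed overflow is undefined, the compiler may treat the index below as an affine function of the induction variable, which is exactly what a dependence test needs.

    /* With undefined signed overflow, the compiler may assume i + j never
     * wraps, so on an LP64 target it can widen the 32-bit induction
     * variable and treat a[i + j] as affine in i.  The dependence between
     * a[i] and a[i + j] then has a constant distance j, allowing the loop
     * to be reordered or vectorized.  Under wrapping (-fwrapv) semantics,
     * i + j may wrap modulo 2^32, the access pattern is no longer affine,
     * and a conservative analysis has to give up on disambiguation. */
    void add_offset(float *a, int j, int n)
    {
        for (int i = 0; i < n; i++)
            a[i + j] += a[i];
    }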
2011 Mar 04
4
Wine does not allow program to see USB ID
I have a program called Messiah Studio. It licenses to your USB drive. Some people over on their forum have gotten it working under Wine, but they go through so many steps to do it that by the time they are done they are not sure which steps did the trick. So far all we have deduced is that you need Wine 1.3 and you need the drive mounted at as low a level as possible.
Here is what I have so far.
2017 Sep 08
0
cyrus spool on btrfs?
...small writes are added to the battery-backed cache on the
card and the OS considers them complete. However, on many cards, if the
system writes data to the card faster than the card writes to disks, the
cache will fill up, and at that point, the system performance can
suddenly and unexpectedly plummet. I've run a few workloads where that
happened, and we had to replace the system entirely, and use software
RAID instead.? Software RAID's performance tends to be far more
predictable as the workload increases.
Outside of microbenchmarks like bonnie++, software RAID often offers
much b...
2010 Dec 07
0
[LLVMdev] Inlining and exception handling in LLVM and GCC
...there are actually two major semantic effects
tied to call boundaries.
The first is that function return implicitly releases stack-allocated memory;
the pathological case here is a recursive function that calls a helper with
huge stack variables, where inlining the helper makes max recursion
depth plummet. Currently the inliner makes some effort to mitigate this
impact, but mostly by sharing allocas between different inlined functions.
The second is that inlining changes the behavior of anything that wants to
manually walk the call stack, and while most stack walkers don't rely on any
frame-sp...
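A rough illustration of the pathological case described above (hypothetical code, not from the thread):

    #include <string.h>

    struct node {
        struct node *child;
        const char *name;
    };

    /* helper() owns a large stack buffer; before inlining it is released
     * as soon as helper() returns. */
    static void helper(const char *name)
    {
        char scratch[64 * 1024];
        strncpy(scratch, name, sizeof scratch - 1);
        scratch[sizeof scratch - 1] = '\0';
        /* ... work with scratch elided ... */
    }

    /* If helper() is inlined here, the 64 KiB alloca becomes part of
     * walk()'s frame, every recursion level now carries it, and the
     * maximum safe recursion depth drops sharply. */
    unsigned long walk(const struct node *n)
    {
        if (!n)
            return 0;
        helper(n->name);
        return 1 + walk(n->child);
    }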
2003 Feb 24
4
Vonage
Ahh, Mr Carbuyer ... you should have _specified_ you wanted tires with that new car.
We can still help you though, it will just be an extra $$ above the price we quoted you.
I understand the concept. I see it in many industries until a company comes along
that cares about its customers.
I still think that digium is the best buy (for the small scale stuff that I'm
interested in anyway) ...
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello,
I have an 8-port SATA controller and I don't want to spend the money for 8 x 750 GB SATA disks
right now. I'm thinking about an optimal way of building a growing raidz pool without losing
any data.
As far as i know there are two ways to achieve this:
- Adding 750 GB Disks from time to time. But this would lead to multiple groups with multiple
redundancy/parity disks. I
2014 Jun 02
3
[PATCH] block: virtio_blk: don't hold spin lock during world switch
Jens Axboe <axboe at kernel.dk> writes:
> On 2014-05-30 00:10, Rusty Russell wrote:
>> Jens Axboe <axboe at kernel.dk> writes:
>>> If Rusty agrees, I'd like to add it for 3.16 with a stable marker.
>>
>> Really stable? It improves performance, which is nice. But every patch
>> which goes into the kernel fixes a bug, improves clarity, improves
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
..._raw_spin_lock() whereas the disk-ext4 workload spent 57.8% of CPU
time in _raw_spin_lock(). It can be seen that there wasn't too much
difference in performance with low spinlock contention in the disk-xfs
workload. With heavy spinlock contention, the performance of simple
test-and-set lock can plummet when compared with the ticket and
queue spinlocks.
Unfair lock in a native environment is generally not a good idea as
there is a possibility of lock starvation for a heavily contended lock.
This patch adds a new configuration option for the x86 architecture
to enable the use of unfair queue spin...
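For reference, a minimal sketch (C11 atomics, not the patch's code) of the simple test-and-set lock being compared against the ticket and queue spinlocks: every waiter hammers the same cache line and there is no FIFO hand-off, which is where the collapse under heavy contention comes from.

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_bool locked;
    } tas_lock_t;

    static void tas_lock(tas_lock_t *l)
    {
        /* test-and-test-and-set: spin on a plain read, then try the exchange */
        for (;;) {
            if (!atomic_load_explicit(&l->locked, memory_order_relaxed) &&
                !atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
                return;
        }
    }

    static void tas_unlock(tas_lock_t *l)
    {
        atomic_store_explicit(&l->locked, false, memory_order_release);
    }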
2004 Mar 27
5
Cisco 7960 SIP Images
What you and so many others on this list seem to forget is that Cisco is a company offering business products for businesses. Businesses typically pay by check and wire transfer, especially for items such as this.
If you want home-user pay-by-credit-card service, buy products from Belkin's home line and similar.
Oh...what's that? None of these cheesy Stocked-at-Costco hardware
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
...).
But I hope that this means no metadata was hurt so far.
3) I've tried importing the pool in several ways (including
normal and rollback mounts, readonly and "-n"), but so far
all attempts led to the computer hanging within a minute
("vmstat 1" shows that free RAM plummets towards the zero
mark).
I've tried preparing the system tunables as well:
:; echo "aok/W 1" | mdb -kw
:; echo "zfs_recover/W 1" | mdb -kw
and sometimes adding:
:; echo zfs_vdev_max_pending/W0t5 | mdb -kw
:; echo zfs_resilver_delay/W0t0 | mdb -kw
:; echo zfs_resilver...
2011 Jul 25
3
gluster client performance
Hi-
I'm new to Gluster, but am trying to get it set up on a new compute
cluster we're building. We picked Gluster for one of our cluster file
systems (we're also using Lustre for fast scratch space), but the
Gluster performance has been so bad that I think maybe we have a
configuration problem -- perhaps we're missing a tuning parameter that
would help, but I can't find
2006 Aug 15
7
XFS and CentOS 4.3
Hi All, I've been looking around for info on XFS (the filesystem) and its use
on CentOS and/or RHEL 4. There seems to be a lot of noise about 4K
stacks (especially on linux-xfs at oss.sgi.com).
So what is the best way to get XFS working with CentOS 4.3, and not
have something like this happen?
A quote from the xfs list at sgi
>On Tue, 18 Jul 2006 at 10:29am, Andrew Elwell wrote
>
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
2010 Dec 06
4
[LLVMdev] Inlining and exception handling in LLVM and GCC
The poor interaction between exception handling and inlining in LLVM is one of
the main motivations for the new exception handling models proposed recently.
Here I give my analysis of the origin of the problem in the hope of clarifying
the situation.
Soon after dwarf exception handling was implemented in LLVM, I noticed that some
programs would fail when compiled at -O3, for example the
2006 Jul 31
20
ZFS vs. Apple XRaid
...and does see
its fair share of I/O, while the Solaris NFS share is only mounted on
this one client.)
Alright, so what's my beef? Well, here's the fun part: when I try to
actually use this NFS share as my home directory (as I do with the IRIX
NFS mount), then somehow performance plummets. Reading my inbox (~/.mail)
will take around 20 seconds (even though it has only 60 messages in it).
When I try to run 'ktrace -i mutt' with the ktrace output going to the
NFS share, then everything crawls to a halt. While that command is
running, even a simple 'ls -la...
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker.
From what little I currently understand, the general