search for: plummets

Displaying 20 results from an estimated 24 matches for "plummets".

Did you mean: plummet
2005 Jun 03
2
S/N on this list is plummeting...
Is it possible to create an alternate list for the people who insist on spending their days posting long-winded political commentary and opinion? Maybe something like centos-offtopic? Reading the list has gone from "fun" status to "chore." Just a suggestion. Cheers, C
2006 Dec 20
2
RE: spandsp 0.0.3 RxFax fax reception crashes bristuffed asterisk 1.2.13 [Virusgeprüft]
>Does IAXmodem allow you to receive faxes with any extensions >(auto-detecting incoming faxes)? You just let Asterisk do the fax detection for you, and when it hears CNG, send it to the fax extension, and your fax extension would just Dial() one of the IAXmodems (using IAX) >>DRi@b-w-computer.de wrote: >> sure in a small office you can use iaxmodem/hylafax to receive faxes
2017 Sep 08
2
cyrus spool on btrfs?
m.roth at 5-cent.us wrote: > Mark Haney wrote: >> On 09/08/2017 09:49 AM, hw wrote: >>> Mark Haney wrote: > <snip> >>> >>> It depends, i. e. I can't tell how these SSDs would behave if large >>> amounts of data would be written and/or read to/from them over extended >>> periods of time because I haven't tested that. That isn't the
2008 Aug 22
2
[LLVMdev] Dependence Analysis [was: Flow-Sensitive AA]
On Aug 22, 2008, at 4:49 PM, John Regehr wrote: > Has anyone quantified the optimizations afforded by undefined signed > overflow? I'd expect that the benefits are minimal for most codes. In most cases, I agree. But for codes that depend heavily on dependence analysis, I would think that being conservative with index expressions would really kill any disambiguation capability and
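The kind of index expression at issue is easiest to see in a small loop. Below is a purely illustrative C sketch, not taken from the thread, showing where the undefined-overflow assumption buys dependence analysis something concrete:

/* Illustrative only: on a 64-bit target, the 32-bit signed index
 * expression i * stride may be widened and treated as non-wrapping
 * precisely because signed overflow is undefined. */
void copy_stride(float *dst, const float *src, int n, int stride)
{
    for (int i = 0; i < n; i++) {
        /* With overflow undefined, the address of dst[i * stride] is a
         * simple affine function of i, so accesses in different iterations
         * can be compared symbolically and the loop can be reordered or
         * vectorized.  If i * stride were defined to wrap at 32 bits, the
         * index would be a wrapping, non-affine function of i and the
         * analysis would have to stay conservative, which is the cost
         * being weighed in this thread. */
        dst[i * stride] = src[i * stride] + 1.0f;
    }
}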
2011 Mar 04
4
Wine does not allow program to see USB ID
I have a program called Messiah Studio. It licenses to your USB drive. Some people over on their forum have gotten it working under Wine but they go thru so many steps to do it that by the time they are done they are not sure which steps did the trick. So far all we have deduced is that you need Wine 1.3 and you need the drive mounted at as low a level as possible. Here is what I have so far.
2017 Sep 08
0
cyrus spool on btrfs?
On 09/08/2017 11:06 AM, hw wrote: > Make a test and replace a software RAID5 with a hardware RAID5. Even > with > only 4 disks, you will see an overall performance gain. I'm guessing > that > the SATA controllers they put onto the mainboards are not designed to > handle > all the data --- which gets multiplied to all the disks --- and that the > PCI bus might get
2010 Dec 07
0
[LLVMdev] Inlining and exception handling in LLVM and GCC
On Dec 6, 2010, at 1:58 PM, Duncan Sands wrote: > The poor interaction between exception handling and inlining in LLVM is one of > the main motivations for the new exception handling models proposed recently. > Here I give my analysis of the origin of the problem in the hope of clarifying > the situation. Your analysis coincides with the analysis I made when implementing EH in clang.
2003 Feb 24
4
Vonage
Ahh Mr Carbuyer ... you should have _specified_ you wanted tires with that new car We can still help you though, it will just be an extra $$ above the price we quoted you I understand the concept. I see it in many industries until a company comes along that cares about its customers I still think that Digium is the best buy (for the small scale stuff that I'm interested in anyway) ...
2007 May 05
13
Optimal strategy (add or replace disks) to build a cheap and raidz?
Hello, I have an 8-port SATA controller and I don't want to spend the money for 8 x 750 GB SATA disks right now. I'm thinking about an optimal way of building a growing raidz pool without losing any data. As far as I know there are two ways to achieve this: - Adding 750 GB disks from time to time. But this would lead to multiple groups with multiple redundancy/parity disks. I
2014 Jun 02
3
[PATCH] block: virtio_blk: don't hold spin lock during world switch
Jens Axboe <axboe at kernel.dk> writes: > On 2014-05-30 00:10, Rusty Russell wrote: >> Jens Axboe <axboe at kernel.dk> writes: >>> If Rusty agrees, I'd like to add it for 3.16 with a stable marker. >> >> Really stable? It improves performance, which is nice. But every patch >> which goes into the kernel fixes a bug, improves clarity, improves
2014 Jun 02
3
[PATCH] block: virtio_blk: don't hold spin lock during world switch
Jens Axboe <axboe at kernel.dk> writes: > On 2014-05-30 00:10, Rusty Russell wrote: >> Jens Axboe <axboe at kernel.dk> writes: >>> If Rusty agrees, I'd like to add it for 3.16 with a stable marker. >> >> Really stable? It improves performance, which is nice. But every patch >> which goes into the kernel fixes a bug, improves clarity, improves
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption. One solution to the lock waiter preemption problem is to allow an unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair lock
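The "simple unfair lock" the patch message describes boils down to letting every acquirer race on an atomic test-and-set instead of waiting in a queue. A minimal user-space sketch of that idea (C11 atomics; illustrative only, not the kernel's actual qspinlock code):

#include <stdatomic.h>
#include <stdbool.h>

struct unfair_lock {
    atomic_bool locked;
};

/* Every acquirer, including one that just arrived, retries the atomic
 * exchange directly.  A running vCPU can therefore take the lock even
 * while the queued next-in-line vCPU is preempted by the hypervisor,
 * avoiding lock-waiter preemption at the cost of fairness. */
static void unfair_lock_acquire(struct unfair_lock *l)
{
    while (atomic_exchange_explicit(&l->locked, true, memory_order_acquire)) {
        /* Spin on a plain load so we do not keep bouncing the cache line;
         * a real guest implementation would yield to the hypervisor here. */
        while (atomic_load_explicit(&l->locked, memory_order_relaxed))
            ;
    }
}

static void unfair_lock_release(struct unfair_lock *l)
{
    atomic_store_explicit(&l->locked, false, memory_order_release);
}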
2004 Mar 27
5
Cisco 7960 SIP Images
What you and so many others on this list seem to forget is that Cisco is a company offering business products for businesses. Businesses typically pay by check and wire transfer, especially for items such as this. If you want home-user pay-by-credit-card service, buy products from Belkin's home line and similar. Oh...what's that? None of these cheesy Stocked-at-Costco hardware
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
...). But I hope that this means no metadata was hurt so far. 3) I've tried importing the pool in several ways (including normal and rollback mounts, readonly and "-n"), but so far all attempts led to the computer hanging within a minute ("vmstat 1" shows that free RAM plummets towards the zero mark). I've tried preparing the system tunables as well: :; echo "aok/W 1" | mdb -kw :; echo "zfs_recover/W 1" | mdb -kw and sometimes adding: :; echo zfs_vdev_max_pending/W0t5 | mdb -kw :; echo zfs_resilver_delay/W0t0 | mdb -kw :; echo zfs_resilver_...
2011 Jul 25
3
gluster client performance
Hi- I'm new to Gluster, but am trying to get it set up on a new compute cluster we're building. We picked Gluster for one of our cluster file systems (we're also using Lustre for fast scratch space), but the Gluster performance has been so bad that I think maybe we have a configuration problem -- perhaps we're missing a tuning parameter that would help, but I can't find
2006 Aug 15
7
XFS and CentOS 4.3
Hi All, after looking around for info on XFS (the filesystem) and its use on CentOS and/or RHEL 4, there seems to be a lot of noise about 4K stacks (especially on linux-xfs at oss.sgi.com). So what is the best way to get XFS working with CentOS 4.3, and not have something like this happening? A quote from the xfs list at sgi >On Tue, 18 Jul 2006 at 10:29am, Andrew Elwell wrote >
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000 xcalls a second). The machine is pretty much idle, only receiving a bunch of multicast video streams and
2010 Dec 06
4
[LLVMdev] Inlining and exception handling in LLVM and GCC
The poor interaction between exception handling and inlining in LLVM is one of the main motivations for the new exception handling models proposed recently. Here I give my analysis of the origin of the problem in the hope of clarifying the situation. Soon after dwarf exception handling was implemented in LLVM, I noticed that some programs would fail when compiled at -O3, for example the
2006 Jul 31
20
ZFS vs. Apple XRaid
...and does see its fair share of I/O, while the Solaris NFS share is only mounted on this one client.) Alright, so what's my beef? Well, here's the fun part: when I try to actually use this NFS share as my home directory (as I do with the IRIX NFS mount), then somehow performance plummets. Reading my inbox (~/.mail) will take around 20 seconds (even though it has only 60 messages in it). When I try to run 'ktrace -i mutt' with the ktrace output going to the NFS share, then everything crawls to a halt. While that command is running, even a simple 'ls -la'...
2006 Jun 27
28
Supporting ~10K users on ZFS
OK, I know that there's been some discussion on this before, but I'm not sure that any specific advice came out of it. What would the advice be for supporting a largish number of users (10,000 say) on a system that supports ZFS? We currently use vxfs and assign a user quota, and backups are done via Legato Networker. From what little I currently understand, the general