
Displaying 20 results from an estimated 1600 matches similar to: "Tuning system response degradation under heavy ext3/2 activity."

2004 Feb 05
3
increasing ext3 or io responsiveness
Our invoice-posting routine (intensive hard-drive I/O) freezes every few seconds to flush the cache. Reading this: https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html I decided to try: # elvtune -r 2048 -w 131072 /dev/sda # echo "90 500 0 0 600000 600000 95 20 0" >/proc/sys/vm/bdflush # run_post_routine # elvtune -r 128 -w 512 /dev/sda # echo "30 500 0 0
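A minimal sketch of the tune-run-restore sequence that post describes, for 2.4-era kernels where elvtune and the bdflush sysctl still exist. The snippet's final echo is cut off, so the restore values below are illustrative defaults rather than the poster's:

    # Relax the elevator's read/write latency limits and let far more dirty
    # buffers accumulate before bdflush starts flushing.
    elvtune -r 2048 -w 131072 /dev/sda
    echo "90 500 0 0 600000 600000 95 20 0" > /proc/sys/vm/bdflush

    # Run the I/O-heavy batch job under the relaxed settings.
    run_post_routine    # placeholder for the actual posting job

    # Afterwards, restore tighter settings (values illustrative).
    elvtune -r 128 -w 512 /dev/sda
    echo "30 500 0 0 500 3000 60 20 0" > /proc/sys/vm/bdflush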
2007 Jan 06
2
Disk Elevator
Can anyone explain how the disk elevator works and whether there is any way to tweak it? I have an email server which likely has a large number of read and write requests, and I was wondering if there is any way to improve performance. Matt
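On 2.6-era kernels the elevator (I/O scheduler) can be inspected and switched per disk through sysfs; a hedged sketch, with /dev/sda assumed:

    # Show the available schedulers; the bracketed one is active.
    cat /sys/block/sda/queue/scheduler

    # Switch to the deadline elevator, which often suits mixed
    # read/write server loads.
    echo deadline > /sys/block/sda/queue/scheduler

    # Give the elevator a deeper queue to merge and sort requests in.
    echo 512 > /sys/block/sda/queue/nr_requests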
2004 Sep 11
2
External journal on flash drive
Hi, I'd like to use a flash drive as a journal device, with the purpose of keeping the main disk drive spun down as long as possible. I have a couple of questions: 1) Does the journaling code spread write accesses to the journal device evenly, as I hope, or are there blocks that are particularly "hot"? I.e., do I have to worry about the flash device dying quickly because of
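For reference, an external ext3 journal is set up roughly as follows; a sketch with hypothetical device names (/dev/sda1 as the data filesystem, /dev/sdb1 as the flash partition), run while the filesystem is unmounted:

    # Format the flash partition as a journal device; its block size
    # must match the filesystem's.
    mke2fs -O journal_dev -b 4096 /dev/sdb1

    # Drop the internal journal, then attach the external one.
    tune2fs -O ^has_journal /dev/sda1
    tune2fs -j -J device=/dev/sdb1 /dev/sda1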
2006 Oct 21
0
CentOS 3 - I/O performance with Promise HW RAID
(Suggestions for other forums in which to post this question are welcome.) We have CentOS 3.6 x86_64 running on a server with dual 2.2GHz Opterons and a Promise UltraTrak RM8000 connected via an Adaptec SCSI card. We are seeing what seems to be gradual I/O performance degradation over time; it seems to be OK for up to about 90 days, but not long after that both CPUs end up continuously spending
2005 Oct 26
1
which process & file taking up disk IO?
I'm having load problems on a server. The bottleneck appears to be disk IO: iostat shows ~100 under %util during peak usage. I'm running things like Clam AntiVirus, POP, Exim, Apache, and MySQL on the server. Is there a way to check which process and which file is taking up disk IO, or to see what is being written to the disk? I'm very puzzled, as the amount of writes is 10 times
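Kernels of that era had no per-process I/O accounting, but current ones do; a hedged sketch of what to run today (sysstat's pidstat assumed installed):

    # Extended device stats every 5 seconds; %util near 100 means saturated.
    iostat -x 5

    # Per-process read/write rates (needs kernel I/O accounting).
    pidstat -d 5

    # Raw I/O counters for a single process, e.g. PID 1234.
    cat /proc/1234/io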
2002 Nov 21
2
/proc/sys/vm/bdflush
I'm lacking some understanding of how to tune /proc/sys/vm/bdflush, and when to tune it. Where can I read up on this? Our current problem: load is low, but every so often the system decides to do some serious disk I/O, which causes all processes to wait for disk I/O -- load explodes (rises linearly up into the 20s-30s) only to fall linearly right after that. We think there might be some
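The canonical reference is Documentation/sysctl/vm.txt in the 2.4 kernel source, which names the nine bdflush fields. A sketch of reading and setting them (field names per the 2.4 docs; the example values are illustrative):

    # Fields: nfract ndirty dummy dummy interval age_buffer
    #         nfract_sync nfract_stop_bdflush dummy
    cat /proc/sys/vm/bdflush

    # Example: start flushing at 40% dirty buffers, block writers at 70%.
    echo "40 500 0 0 500 3000 70 20 0" > /proc/sys/vm/bdflush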
2005 May 20
1
Degradation model
Dear list, I have a degradation model: dX/dt = -I(X(t)) * ( k1*X(t) )/( X(t)+k2 ), where X(t) is the concentration at time t, and k1 and k2 are parameters that I want to estimate. I(X) is a known inhibitor function. My question is whether this is implemented or easily computed in any R package. I have searched the archives but without luck. Any help or comments on this would be appreciated, Klaus
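Restated in standard notation, this is Michaelis-Menten-type decay scaled by the inhibitor, and estimating k1 and k2 amounts to nonlinear least squares against observed concentrations (a sketch; the observations X_i at times t_i are assumed, not from the post):

    \frac{dX}{dt} = -\,I\bigl(X(t)\bigr)\,\frac{k_1\,X(t)}{X(t)+k_2},
    \qquad
    (\hat{k}_1,\hat{k}_2) = \arg\min_{k_1,k_2}\ \sum_i \bigl(X_i - X(t_i;\,k_1,k_2)\bigr)^2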
2017 May 22
2
network performance degradation in virtio_net in 4.12-rc
Hi, I see severe network performance degradation with kernels 4.12-rc1 and 4.12-rc2 in the virtio network driver: the download rate drops to about 100 kB/s. I bisected it, and it is caused by patch d85b758f72b05a774045545f24d70980e3e9aac4 ("virtio_net: fix support for small rings"). When I revert this patch, the problem goes away. The host is Debian Jessie with kernel 4.4.62,
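The bisect-and-verify workflow the post describes looks roughly like this (a sketch; the good/bad endpoints are assumptions, the commit hash is from the post):

    # Walk the history between a known-good and a known-bad kernel.
    git bisect start
    git bisect bad v4.12-rc1
    git bisect good v4.11
    # ...build and boot each candidate, then mark it: git bisect good|bad

    # Confirm the culprit by reverting it on top of the broken kernel.
    git revert d85b758f72b05a774045545f24d70980e3e9aac4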
2009 Jul 06
1
Performance degradation on multi-processor system
Hi, We are seeing performance degradation when running the same R script in multiple instances of R on a multi-processor system. We are a bit surprised by this, because we figured that each instance of R runs on its own processor, so running a second, third, or fourth instance should not affect the performance of the first. Here's a test script that exhibits this
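The poster's test script is truncated away here; a hypothetical way to reproduce the effect is to time an identical CPU-bound workload in several concurrent R instances (Rscript and GNU time assumed available):

    # Launch four identical R workloads concurrently and time each one.
    # Contention for memory bandwidth and shared cache shows up as longer
    # elapsed times than the same workload run alone.
    for i in 1 2 3 4; do
      /usr/bin/time -f "instance $i: %e s" \
        Rscript -e 'x <- matrix(rnorm(1e6), 1000); for (k in 1:20) y <- x %*% x' &
    done
    wait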
2005 Jul 26
0
Call quality degradation after time
Thanks for the reply, Adam. If this is the case, it would seem to me (because the degradation happens only after a period of time, and quite suddenly) that the issue lies with Digium's implementation of G.729. As an interesting note, I had the same problems using ulaw -> ulaw over the local network (from internal phone to internal phone) with a much shorter period of 'good
2006 Mar 20
0
print server degradation
Has anyone managed to run 1,000 print queues on a Linux+Samba+CUPS production server without degradation? Thanks for any reply, Bruno Gomes Pessanha
2014 Mar 11
0
VGA passthrough with Xen 4.3 and xl toolstack - performance degradation resolved?
Hello, Hope you can help. A while ago, users noted performance degradation or dom0 stability issues when shutting down an HVM guest that uses VGA passthrough (e.g. Windows 7) and then booting the guest up again. A workaround was to eject the graphics card within Windows before shutting down the guest. This process is described here: http://blog.ktz.me/?p=219. I tried to follow those instructions, but
2020 Feb 26
0
Quality degradation with 1.3.1 when using FEC
Hi, I noticed that in some scenarios, Opus 1.2.1 produces better quality than 1.3.1 does. In the use case here, I'm enabling FEC and "transcode" signals from telephony networks (PCMU, 8kHz sampling) to VoIP (48kHz here). In this case, Opus always produced some leakage/ringing above 4kHz but for 1.3.1, these artifacts became worse. The small script below can be used to demonstrate
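The demonstration script mentioned is truncated away here; a hypothetical opus_demo invocation that exercises the same FEC code path (file names are placeholders):

    # Encode and decode 48 kHz mono raw PCM at 64 kb/s with inband FEC,
    # simulating 10% packet loss so the FEC path is actually used.
    opus_demo voip 48000 1 64000 -inbandfec -loss 10 input.pcm output.pcm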
2020 Feb 21
0
Quality degradation with 1.3.1 when using FEC
Hi, I noticed that in some scenarios, Opus 1.2.1 produces better quality than 1.3.1 does. In the use case here, I'm enabling FEC and "transcode" signals from telephony networks (PCMU, 8kHz sampling) to VoIP (48kHz here). In this case, Opus always produced some leakage/ringing above 4kHz but for 1.3.1, these artifacts became worse. The small script below can be used to demonstrate
2012 Mar 06
0
[LLVMdev] Performance degradation when repeatedly exchanging JITted functions
On Tue, Mar 06, 2012 at 04:29:28PM +0100, Clemens Hammacher wrote: > I think a solution would be to always call a function through its > stub, so that there is a single location to update when the function > is exchanged. This would mean that there is always exactly one level > of indirection, which is worse for programs that don't exchange > functions at runtime, but is
2017 May 22
0
network performance degradation in virtio_net in 4.12-rc
On Mon, May 22, 2017 at 10:25:19AM -0400, Mikulas Patocka wrote: > Hi > > I see severe network performance degradation with the kernels 4.12-rc1 and > 4.12-rc2 in the network virtio driver. Download rate drops down to about > 100kB/s. > > I bisected it and it is caused by patch > d85b758f72b05a774045545f24d70980e3e9aac4 ("virtio_net: fix support for > small
2007 Jul 12
0
Quality degradation on new versions
Hi Aviv, Does the audio sound bad? You can try turning off the highpass filter (which was not in 1.0.5). This code is from ti/testenc-TI-C5x.c in the source tree: /* Turn this off if you want to measure SNR (on by default) */ tmp=0; speex_encoder_ctl(st, SPEEX_SET_HIGHPASS, &tmp); speex_decoder_ctl(dec, SPEEX_SET_HIGHPASS, &tmp); - Jim ----- Original Message ----- From:
2014 Mar 09
0
Massive BTRFS performance degradation
I am experiencing massive performance degradation on my BTRFS root partition on an SSD. Except for regular daily updates, nothing has changed in the system. The mount point remained the same: / btrfs rw,noatime,compress=lzo,ssd,space_cache,autodefrag 0 0 but the performance dropped to less than 8% of normal. Before: # dd if=tempfile of=/dev/null bs=1M count=1024 1024+0 records in 1024+0 records out
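When repeating a dd read benchmark like the one above, it is worth dropping the page cache first so the numbers reflect the device rather than cached data; a sketch reusing the post's command:

    # Flush dirty data, then drop clean caches before timing the read.
    sync
    echo 3 > /proc/sys/vm/drop_caches
    dd if=tempfile of=/dev/null bs=1M count=1024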
2012 Mar 06
0
[LLVMdev] Performance degradation when repeatedly exchanging JITted functions
On Tue, Mar 06, 2012 at 04:09:36PM +0000, James Molloy wrote: > Surely you need to patch *all* functions, not just the initial? Depends on whether you always link to the original address or not. If you link with the latest address, you have to patch all versions to point to the latest; otherwise you can just patch the first. Advantage of using the latest address: one saved jmp per call.