search for: stresstests

Displaying 20 results from an estimated 29 matches for "stresstests".

2007 Jan 22
2
Large & busy site, NFS with deliver only servers
Timo / Others, I have been working on a new installation for a fairly busy site, and after many weeks of tribulation have come to an architecture I'm happy with: 2x Debian (2.6.18) MX machines running Postfix / MailScanner / Dovecot-LDA (a slightly patched RC17 for prettier quota bounces); 2x Debian (2.6.18) mail retrieval machines running Dovecot IMAP/POP3 (currently RC17); 3x Node Isilon
2006 Jun 05
0
[Bug 485] New: Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485 Summary: Stresstesting ipset crashes kernel Product: ipset Version: 2.2.8 Platform: x86_64 OS/Version: RedHat Linux Status: NEW Severity: major Priority: P2 Component: default AssignedTo: kadlec@netfilter.org ReportedBy:
2006 Jun 06
3
[Bug 485] Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485 ------- Additional Comments From bugzilla.netfilter@neufeind.net 2006-06-06 02:00 MET ------- I tried to track down the problem in the meantime. It turns out that, e.g., in a run of roughly 480 "ipset -A" (nethash) commands in a row, the system once hangs at around 300 executed statements, while it hangs at around 370 the next time. So this
2012 Apr 21
1
CentOS stresstest - what to use?
Hi all. I currently have a CentOS 5.8 x64 host. I have reports that it is "slow" for end users. I would like to use some tools to run tests on the processor/memory/disks. Is there a program suite you could recommend? Best regards, Rafal.
2004 Jan 02
3
* Stresstool Help required
Hi all, I am trying to write a program that sends SIP requests to Asterisk. My aim is to make Asterisk record as many voicemails as it can at a time. The design of the program is like this: there are two processes, one main process and a child process (no flames please, I have very little idea about pthreads and dl modules). The main program asks the user to input the number of test instances. When
2015 Nov 14
1
[PATCH v2] pmu: fix queued messages while getting no IRQ
While stress-testing the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits infinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout frame
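As a rough illustration of the change that snippet describes (a sketch only, not the upstream diff): the unbounded wait_event() is replaced with the kernel's wait_event_timeout(), which returns 0 if the condition is still false when the timeout expires. The 1000 ms value matches the 1s timeout mentioned in the patch; the -ETIMEDOUT error path is an assumption for illustration.

    /* before: blocks forever if the PMU IRQ is never delivered */
    if (reply)
            wait_event(pmu->recv.wait, (pmu->recv.process == 0));

    /* after (sketch): bound the wait so a lost IRQ surfaces as an error
     * instead of a hang; wait_event_timeout() returns 0 on timeout */
    if (reply) {
            if (!wait_event_timeout(pmu->recv.wait,
                                    (pmu->recv.process == 0),
                                    msecs_to_jiffies(1000)))
                    return -ETIMEDOUT;  /* hypothetical error handling */
    }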
2019 Sep 27
2
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
Fixes runpm breakage mainly on Nvidia GPUs as they are not able to resume. Works perfectly with this workaround applied. RFC comment: We are quite sure that there is a higher amount of bridges affected by this, but I was only testing it on my own machine for now. I've stresstested runpm by doing 5000 runpm cycles with that patch applied and never saw it fail. I mainly wanted to get a
2018 Sep 07
1
Auth process sometimes stop responding after upgrade
...ndling issues, since there's some weirdness in the code. Although I couldn't figure out exactly why it would go into an infinite loop there. But I've attached a patch that may fix it, if you're able to test. We haven't noticed such infinite looping in other installations or automated director stresstests though... >> FD 13 is "anon_inode:[eventpoll]" > > What about fd 78? I guess some socket. > > Could you also try two more things when it happens again: > > ltrace -tt -e '*' -o ltrace.log -p <pid> > (My guess this isn't going to be very use...
2015 Nov 14
0
[PATCH] pmu: fix queued messages while getting no IRQ
While stress-testing the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits infinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout frame
2016 Dec 07
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
We're talking about a bunch of different stuff which is all being conflated. There are 3 issues here that I can see. I'll attempt to summarize what I think is going on: 1. Current patches do a hypercall for each order in the allocator. This is inefficient, but independent from the underlying data structure in the ABI, unless bitmaps are in play, which they aren't. 2. Should we
2015 Nov 14
0
[PATCH v3] pmu: fix queued messages while getting no IRQ
While stress-testing the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits infinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout frame
2016 Dec 07
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On Wed, Dec 07, 2016 at 11:54:34AM -0800, Dave Hansen wrote: > We're talking about a bunch of different stuff which is all being > conflated. There are 3 issues here that I can see. I'll attempt to > summarize what I think is going on: > > 1. Current patches do a hypercall for each order in the allocator. > This is inefficient, but independent from the underlying
2016 Mar 01
2
[PATCH 0/2] PMU communications improvements
Both patches should make communication with the PMU more stable. Karol Herbst (2): pmu: fix queued messages while getting no IRQ pmu: be more strict about locking drm/nouveau/nvkm/subdev/pmu/base.c | 49 ++++++++++++++++++++++++++++++++------ 1 file changed, 42 insertions(+), 7 deletions(-) -- 2.7.2
2019 Sep 27
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
[+cc Rafael, Mika, linux-pm] On Fri, Sep 27, 2019 at 04:44:21PM +0200, Karol Herbst wrote: > Fixes runpm breakage mainly on Nvidia GPUs as they are not able to resume. I don't know what runpm is. Some userspace utility? Module parameter? > Works perfectly with this workaround applied. > > RFC comment: > We are quite sure that there is a higher amount of bridges affected by
2001 May 25
4
tinc 1.0pre4 released
Hello everybody, I have just released tinc 1.0pre4. Changes: - New authentication protocol (better security, and faster too). - TCPonly and IndirectData are back (but not fully tested). - Documentation revised, it's really up to date with the released package now. - tincd -K now stores public/private keys in PEM format, but keys of 1.0pre3 can still be used. - Faster and more secure
2019 Sep 30
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
Fixes state transitions of Nvidia Pascal GPUs from D3cold into higher device states. v2: convert to pci_dev quirk; put a proper technical explanation of the issue as an in-code comment. RFC comment (copied from last sent): We are quite sure that there is a higher number of bridges affected by this, but I was only testing it on my own machine for now. I've stresstested runpm by doing 5000
2008 Feb 22
2
Dovecot Sieve scalability
Hi, I just finished setting up a functional design with Dovecot+Sieve and it works like a charm. However, I'm having serious doubts about the scalability of this. Here is part of a discussion we're having: About Dovecot+Sieve. What happens here is that your MTA is configured to pass _all_ email to Dovecot, which is configured as an LDA. In practice this means this in the Exim
2016 Dec 09
3
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> > 1. Current patches do a hypercall for each order in the allocator. > > This is inefficient, but independent from the underlying data > > structure in the ABI, unless bitmaps are in play, which they aren't. > > 2. Should we have bitmaps in the ABI, even if they are not in use by the > > guest implementation today? Andrea says they have zero benefits