search for: stresstested

Displaying 20 results from an estimated 29 matches for "stresstested".

2007 Jan 22
2
Large & busy site, NFS with deliver only servers
Timo / Others, I have been working on a new installation for a fairly busy site, and after many weeks of tribulation have come to an architecture I'm happy with: 2x Debian (2.6.18) - MXing machines running Postfix / MailScanner / Dovecot-LDA (a slightly patched RC17 for prettier quota bounces); 2x Debian (2.6.18) - mail retrieval machines running Dovecot IMAP/POP3 (currently RC17); 3x Node Isilon
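The Postfix-to-Dovecot-LDA hand-off on MX machines like these is usually wired up along the following lines (a sketch only; the mailbox_command parameter is standard Postfix, but the deliver path is typical for RC-era Dovecot and not taken from the poster's patched setup):

    # Postfix main.cf (illustrative, not the poster's configuration)
    mailbox_command = /usr/lib/dovecot/deliver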
2006 Jun 05
0
[Bug 485] New: Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485 Summary: Stresstesting ipset crashes kernel Product: ipset Version: 2.2.8 Platform: x86_64 OS/Version: RedHat Linux Status: NEW Severity: major Priority: P2 Component: default AssignedTo: kadlec@netfilter.org ReportedBy:
2006 Jun 06
3
[Bug 485] Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485 ------- Additional Comments From bugzilla.netfilter@neufeind.net 2006-06-06 02:00 MET ------- I tried to track down the problem in the meantime. It turns out that, e.g., in a run of roughly 480 consecutive "ipset -A" (nethash) commands, the system hangs once at around 300 executed statements and around 370 the next time. So this
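A minimal reproduction of the reported pattern could look like this (a sketch: the set name "stress" and the addresses are invented, and the ipset 2.2.x -N/-A syntax is assumed):

    ipset -N stress nethash
    i=0
    while [ $i -lt 480 ]; do
        # one /24 per iteration; the report sees hangs after ~300-370 adds
        ipset -A stress 10.$((i / 256)).$((i % 256)).0/24
        i=$((i + 1))
    done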
2012 Apr 21
1
CentOS stresstest - what to use?
Hi all. I currently have a CentOS 5.8 x64 host. I have reports that it is "slow" for end users. I would like to use some tools to run tests of the processor/memory/disks. Is there a program suite you could recommend? Best regards, Rafal.
2004 Jan 02
3
* Stresstool Help required
Hi all, I am trying to write a program that sends SIP requests to Asterisk. My aim is to make Asterisk record as many voicemails as it can at a time. The design of the program is like this: there are two processes, one main process and a child process (no flames please, I have very little idea about pthreads and dl modules). The main program asks the user to input the number of test instances. When
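The excerpt is cut off before the details, but the main/child split described could be sketched like this (all names are invented and the actual SIP-sending code is not shown; this may not match the poster's design):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int n, i;
        printf("number of test instances: ");
        if (scanf("%d", &n) != 1 || n < 1)
            return 1;
        for (i = 0; i < n; i++) {
            if (fork() == 0) {
                /* child: would send one SIP request and leave a voicemail */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)  /* parent reaps all children */
            ;
        return 0;
    }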
2015 Nov 14
1
[PATCH v2] pmu: fix queued messages while getting no IRQ
While stresstesting the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits indefinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout
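The change proposed here would take roughly this shape (a sketch based on the quoted snippet; the error handling and return value are assumptions, not the submitted patch):

    if (reply) {
        /* wait at most 1s instead of blocking forever when the IRQ is lost */
        if (!wait_event_timeout(pmu->recv.wait,
                                pmu->recv.process == 0,
                                msecs_to_jiffies(1000)))
            return -ETIMEDOUT;  /* timed out: message queued but never signalled */
    }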
2019 Sep 27
2
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
Fixes runpm breakage, mainly on Nvidia GPUs, as they are not able to resume. Works perfectly with this workaround applied. RFC comment: We are quite sure that a larger number of bridges is affected by this, but I was only testing it on my own machine for now. I've stresstested runpm by doing 5000 runpm cycles with that patch applied and never saw it fail. I mainly wanted to get a discussion going on whether that's a feasible workaround indeed or whether we need something better. I am also sure that the nouveau driver itself isn't at fault, as I am able to reproduce the s...
2018 Sep 07
1
Auth process sometimes stop responding after upgrade
On 7 Sep 2018, at 19.43, Timo Sirainen <tss at iki.fi> wrote: > > On 7 Sep 2018, at 16.50, Simone Lazzaris <s.lazzaris at interactive.eu> wrote: >> >> Some more information: the issue has just occurred, again on an instance without the "service_count = 0" configuration directive on pop3-login. >> >>
2015 Nov 14
0
[PATCH] pmu: fix queued messages while getting no IRQ
While stresstesting the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits indefinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout
2016 Dec 07
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
We're talking about a bunch of different stuff which is all being conflated. There are 3 issues here that I can see. I'll attempt to summarize what I think is going on: 1. Current patches do a hypercall for each order in the allocator. This is inefficient, but independent from the underlying data structure in the ABI, unless bitmaps are in play, which they aren't. 2. Should we
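Point 1 is easiest to see as a toy program (every name here is invented for illustration; the real series uses its own interfaces):

    #include <stdio.h>

    #define MAX_ORDER 11    /* matches the usual buddy-allocator limit */

    /* stands in for a single guest->host exit */
    static void balloon_report_hypercall(int order)
    {
        printf("hypercall: reporting free pages of order %d\n", order);
    }

    int main(void)
    {
        /* one hypercall per allocator order, as described in point 1 */
        for (int order = MAX_ORDER - 1; order >= 0; order--)
            balloon_report_hypercall(order);
        return 0;
    }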
2015 Nov 14
0
[PATCH v3] pmu: fix queued messages while getting no IRQ
While stresstesting the reclocking code I encountered a rare case (1 out of 20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr. This means we have a queued message on the PMU, but nouveau doesn't read it and waits indefinitely in nvkm_pmu_send: if (reply) { wait_event(pmu->recv.wait, (pmu->recv.process == 0)); Therefore let us use wait_event_timeout with a 1s timeout
2016 Dec 07
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On Wed, Dec 07, 2016 at 11:54:34AM -0800, Dave Hansen wrote: > We're talking about a bunch of different stuff which is all being > conflated. There are 3 issues here that I can see. I'll attempt to > summarize what I think is going on: > > 1. Current patches do a hypercall for each order in the allocator. > This is inefficient, but independent from the underlying
2016 Mar 01
2
[PATCH 0/2] PMU communications improvements
Both patches should make communication with the PMU more stable. Karol Herbst (2): pmu: fix queued messages while getting no IRQ pmu: be more strict about locking drm/nouveau/nvkm/subdev/pmu/base.c | 49 ++++++++++++++++++++++++++++++++------ 1 file changed, 42 insertions(+), 7 deletions(-) -- 2.7.2
2019 Sep 27
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
...know what runpm is. Some userspace utility? Module parameter? > Works perfectly with this workaround applied. > > RFC comment: > We are quite sure that a larger number of bridges is affected by this, > but I was only testing it on my own machine for now. > > I've stresstested runpm by doing 5000 runpm cycles with that patch applied > and never saw it fail. > > I mainly wanted to get a discussion going on whether that's a feasible workaround > indeed or whether we need something better. > > I am also sure that the nouveau driver itself isn't at fault a...
2001 May 25
4
tinc 1.0pre4 released
...Ponly and IndirectData are back (but not fully tested). - Documentation revised, it's really up to date with the released package now. - tincd -K now stores public/private keys in PEM format, but keys of 1.0pre3 can still be used. - Faster and more secure encryption of tunneled packets. - Stresstested to see if it handles large VPNs with more than 100 sites (it does). Again, due to the large changes in the protocols this version does not work together with older versions. However, you don't have to change the configuration files this time. Most of the things we wanted to include in 1.0...
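For the key-format change, generating a key pair looks like this (the net name "myvpn" is invented; per the notes above, the result is stored in PEM format as of 1.0pre4):

    tincd -n myvpn -K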
2019 Sep 30
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
...her device states. v2: convert to a pci_dev quirk; put a proper technical explanation of the issue as an in-code comment. RFC comment (copied from last sent): We are quite sure that a larger number of bridges is affected by this, but I was only testing it on my own machine for now. I've stresstested runpm by doing 5000 runpm cycles with that patch applied and never saw it fail. I mainly wanted to get a discussion going on whether that's a feasible workaround indeed or whether we need something better. I am also sure that the nouveau driver itself isn't at fault, as I am able to reproduce the s...
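The pci_dev quirk mentioned in v2 would have roughly this shape (a sketch, not the submitted patch: the device ID 0x1901 and the use of PCI_DEV_FLAGS_NO_D3 are illustrative stand-ins):

    static void quirk_no_d3_on_bridge(struct pci_dev *pdev)
    {
        /* keep the affected bridge out of lower device states during runpm */
        pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
    }
    DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1901,
                            quirk_no_d3_on_bridge);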
2008 Feb 22
2
Dovecot Sieve scalability
Hi, I just finished setting up a functional design with Dovecot+Sieve and it works like a charm. However, I'm having serious doubts about the scalability of this. Here is part of a discussion we're having: About Dovecot+Sieve. What happens here is that your MTA is configured to pass _all_ email to Dovecot, which is configured as an LDA. In practice this means the following in the Exim
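The quoted message is cut off, but an Exim-to-Dovecot-LDA hand-off of the kind described typically uses a pipe transport (a sketch; the transport name and paths are illustrative, not the poster's configuration):

    # Exim transport (illustrative)
    dovecot_lda:
      driver = pipe
      command = /usr/lib/dovecot/deliver -d $local_part@$domain
      return_path_add
      delivery_date_add
      envelope_to_add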
2016 Dec 09
3
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> > 1. Current patches do a hypercall for each order in the allocator. > > This is inefficient, but independent from the underlying data > > structure in the ABI, unless bitmaps are in play, which they aren't. > > 2. Should we have bitmaps in the ABI, even if they are not in use by the > > guest implementation today? Andrea says they have zero benefits