2007 Jan 22
2
Large & busy site, NFS with deliver only servers
Timo / Others,
I have been working on a new installation for a fairly busy site, and after
many weeks of tribulation have come to an architecture I'm happy with:
2x Debian (2.6.18) - MXing machines running Postfix / MailScanner /
Dovecot-LDA (A slightly patched RC17 for prettier Quota bounces)
2x Debian (2.6.18) - Mail retrieval machines running Dovecot IMAP/POP3
(Currently RC17)
3x Isilon nodes
2006 Jun 05
0
[Bug 485] New: Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485
Summary: Stresstesting ipset crashes kernel
Product: ipset
Version: 2.2.8
Platform: x86_64
OS/Version: RedHat Linux
Status: NEW
Severity: major
Priority: P2
Component: default
AssignedTo: kadlec@netfilter.org
ReportedBy: b...
2006 Jun 06
3
[Bug 485] Stresstesting ipset crashes kernel
https://bugzilla.netfilter.org/bugzilla/show_bug.cgi?id=485
------- Additional Comments From bugzilla.netfilter@neufeind.net 2006-06-06 02:00 MET -------
I tried to track down the problem in the meantime.
It turns out that, for example, in a run of roughly 480 consecutive "ipset -A"
(nethash) commands, the system hangs once at around 300 executed statements and
around 370 the next time. So this
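For what it's worth, a minimal C sketch of that kind of back-to-back load (the
set name, addresses, and iteration count are made up; it assumes a nethash set
created beforehand with "ipset -N test nethash"):

/* stress-ipset.c - fire a few hundred consecutive ipset adds.
 * Build: gcc -o stress-ipset stress-ipset.c  (run as root) */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	char cmd[64];

	for (int i = 0; i < 480; i++) {
		/* 480 distinct /24 networks */
		snprintf(cmd, sizeof(cmd), "ipset -A test 10.%d.%d.0/24",
		         i / 256, i % 256);
		if (system(cmd) != 0) {
			fprintf(stderr, "failed at iteration %d\n", i);
			return 1;
		}
	}
	return 0;
}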
2012 Apr 21
1
CentOS stresstest - what to use?
Hi all.
I currently have a CentOS 5.8 x64 host. I have reports that it is "slow"
for end users. I would like to use some tools to test the
processor/memory/disks.
Is there a program suite you could recommend?
Best regards,
Rafal.
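Not an endorsement of any particular suite, but as a rough sketch of what such
tools do, here is a crude single-file CPU/memory/disk load generator (all sizes
and the /tmp path are arbitrary choices):

/* ministress.c - crude CPU + memory + disk load in one file.
 * Build: gcc -O2 -o ministress ministress.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* memory: allocate and touch 256 MB */
	size_t bytes = 256UL * 1024 * 1024;
	char *buf = malloc(bytes);
	if (!buf)
		return 1;
	memset(buf, 0xaa, bytes);

	/* cpu: burn cycles on floating-point work */
	volatile double x = 0;
	for (long i = 0; i < 200000000L; i++)
		x += i * 0.5;

	/* disk: write 64 MB, flush, then clean up */
	FILE *f = fopen("/tmp/ministress.dat", "w");
	if (!f)
		return 1;
	for (int i = 0; i < 64; i++)
		fwrite(buf, 1, 1024 * 1024, f);
	fclose(f);
	remove("/tmp/ministress.dat");
	free(buf);
	printf("done (%f)\n", x);
	return 0;
}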
2004 Jan 02
3
* Stresstool Help required
...Playing 'vm-intro' (language 'en')
-- Playing 'beep' (language 'en')
WARNING[15376]: File app_voicemail.c, Line 1236 (leave_voicemail): No more
messages possible
-- Executing Hangup("SIP/gopi-bddf", "") in new stack
== Spawn extension (stresstest, 7777, 5) exited non-zero on
'SIP/gopi-bddf'
The problem starts when I try to spawn more than one instance of the
process. I tried with 2; both instances got registered. The initial part
of dialing is also OK. After that, one of the child processes gets a BYE
request from *. The othe...
2015 Nov 14
1
[PATCH v2] pmu: fix queued messages while getting no IRQ
While stress testing the reclocking code, I encountered a rare case (1 out of
20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr.
This means we have a queued message on the PMU, but nouveau doesn't read it and
waits indefinitely in nvkm_pmu_send:
if (reply) {
	wait_event(pmu->recv.wait, (pmu->recv.process == 0));
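A sketch of the defensive pattern under discussion (not necessarily the actual
patch: the timeout value is arbitrary, and nvkm_pmu_recv() is used here as a
stand-in for "drain the queued PMU message"):

if (reply) {
	/* bound the wait instead of blocking forever */
	long t = wait_event_timeout(pmu->recv.wait,
				    pmu->recv.process == 0,
				    msecs_to_jiffies(1000));
	if (t == 0)
		/* no IRQ arrived: poll for the queued message */
		nvkm_pmu_recv(pmu);
	reply[0] = pmu->recv.data[0];
	reply[1] = pmu->recv.data[1];
}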
2019 Sep 27
2
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
Fixes runpm breakage, mainly on Nvidia GPUs, as they are not able to resume.
Works perfectly with this workaround applied.
RFC comment:
We are quite sure that more bridges are affected by this,
but I have only tested it on my own machine so far.
I've stress-tested runpm by running 5000 runpm cycles with this patch applied
and never saw it fail.
I mainly wanted to get a discussion going on whether that's a feasible
workaround or whether we need something better.
I am also sure that the nouveau driver itself isn't at fault, as I am able
to reproduce the...
2018 Sep 07
1
Auth process sometimes stop responding after upgrade
...ndling issues, since there's some weirdness in the code, although I couldn't figure out exactly why it would go into an infinite loop there. I've attached a patch that may fix it, if you're able to test. We haven't noticed such infinite looping in other installations or automated director stress tests, though.
>> FD 13 is "anon_inode:[eventpoll]"
>
> What about fd 78? I guess some socket.
>
> Could you also try two more things when it happens again:
>
> ltrace -tt -e '*' -o ltrace.log -p <pid>
> (My guess is this isn't going to be very us...
2015 Nov 14
0
[PATCH] pmu: fix queued messages while getting no IRQ
While stress testing the reclocking code, I encountered a rare case (1 out of
20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr.
This means we have a queued message on the PMU, but nouveau doesn't read it and
waits indefinitely in nvkm_pmu_send:
if (reply) {
	wait_event(pmu->recv.wait, (pmu->recv.process == 0));
2016 Dec 07
2
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
We're talking about a bunch of different stuff which is all being
conflated. There are 3 issues here that I can see. I'll attempt to
summarize what I think is going on:
1. Current patches do a hypercall for each order in the allocator.
This is inefficient, but independent of the underlying data
structure in the ABI, unless bitmaps are in play, which they aren't.
2. Should we
2015 Nov 14
0
[PATCH v3] pmu: fix queued messages while getting no IRQ
While stress testing the reclocking code, I encountered a rare case (1 out of
20,000+ requests) where we don't get any IRQ in nvkm_pmu_intr.
This means we have a queued message on the PMU, but nouveau doesn't read it and
waits indefinitely in nvkm_pmu_send:
if (reply) {
	wait_event(pmu->recv.wait, (pmu->recv.process == 0));
2016 Dec 07
0
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
...he code thus far has demonstrated a huge
> benefit without even having a bitmap.
>
> I've got no objections to ripping the bitmap out of the ABI.
I think we need to see a statistic showing the number of bits set in
each bitmap on average, after some uptime and LRU churn, like running a
stress-test app for a while with I/O and then inflate the balloon and
count:
1) how many bits were set vs the total number of bits used in bitmaps
2) how many times bitmaps were used vs the bitmap_len = 0 case of a single
page
My guess would be a very low percentage for both points.
> Surely we can think of...
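For reference, a sketch of the counting being asked for, in kernel style
(bitmap_weight() is a real helper; the stats struct and its hook point are
made up for illustration):

#include <linux/bitmap.h>

struct balloon_bitmap_stats {
	unsigned long bits_set;     /* 1) bits set across all sent bitmaps */
	unsigned long bits_total;   /*    total bits in those bitmaps */
	unsigned long bitmap_sends; /* 2) sends that carried a bitmap */
	unsigned long single_sends; /*    bitmap_len == 0, single-page case */
};

static void account_bitmap(struct balloon_bitmap_stats *st,
			   const unsigned long *bitmap,
			   unsigned int nbits)
{
	if (nbits == 0) {
		st->single_sends++;
		return;
	}
	st->bitmap_sends++;
	st->bits_set += bitmap_weight(bitmap, nbits);
	st->bits_total += nbits;
}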
2016 Mar 01
2
[PATCH 0/2] PMU communications improvements
Both patches should make communication with the PMU more stable.
Karol Herbst (2):
pmu: fix queued messages while getting no IRQ
pmu: be more strict about locking
drm/nouveau/nvkm/subdev/pmu/base.c | 49 ++++++++++++++++++++++++++++++++------
1 file changed, 42 insertions(+), 7 deletions(-)
--
2.7.2
2019 Sep 27
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
...know what runpm is. Some userspace utility? Module
parameter?
> Works perfectly with this workaround applied.
>
> RFC comment:
> We are quite sure that more bridges are affected by this,
> but I have only tested it on my own machine so far.
>
> I've stress-tested runpm by running 5000 runpm cycles with this patch applied
> and never saw it fail.
>
> I mainly wanted to get a discussion going on whether that's a feasible
> workaround or whether we need something better.
>
> I am also sure that the nouveau driver itself isn't at fault...
2001 May 25
4
tinc 1.0pre4 released
...Ponly and IndirectData are back (but not fully tested).
- Documentation revised, it's really up to date with the released package now.
- tincd -K now stores public/private keys in PEM format, but keys of 1.0pre3
can still be used.
- Faster and more secure encryption of tunneled packets.
- Stress-tested to see whether it handles large VPNs with more than 100 sites
(it does).
Again, due to the large changes in the protocols this version does not work
together with older versions. However, you don't have to change the
configuration files this time.
Most of the things we wanted to include in 1....
2019 Sep 30
0
[RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
...her device
states.
v2: convert to a pci_dev quirk
put a proper technical explanation of the issue as an in-code comment
RFC comment (copied from last sent):
We are quite sure that more bridges are affected by this,
but I have only tested it on my own machine so far.
I've stress-tested runpm by running 5000 runpm cycles with this patch applied
and never saw it fail.
I mainly wanted to get a discussion going on whether that's a feasible
workaround or whether we need something better.
I am also sure that the nouveau driver itself isn't at fault, as I am able
to reproduce the...
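For the "convert to a pci_dev quirk" part, the general shape would be something
like the sketch below (the device ID 0x1901 and the use of PCI_DEV_FLAGS_NO_D3
are illustrative assumptions, not the submitted patch):

#include <linux/pci.h>

/* keep the affected bridge out of low-power device states */
static void quirk_no_d3_bridge(struct pci_dev *pdev)
{
	pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
	pci_info(pdev, "disabling D3 on this bridge (quirk)\n");
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1901,
			 quirk_no_d3_bridge);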
2008 Feb 22
2
Dovecot Sieve scalability
Hi,
I just finished setting up a functional design with Dovecot+Sieve and it
works like a charm.
However, I'm having serious doubts about the scalability of this. Here is
part of a discussion we're having:
About Dovecot+Sieve.
What happens here is that your MTA is configured to pass _all_ email to
Dovecot, which is configured as an LDA. In practice this means the following
in the Exim
2016 Dec 09
3
[Qemu-devel] [PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
...ated a huge
> > benefit without even having a bitmap.
> >
> > I've got no objections to ripping the bitmap out of the ABI.
>
> I think we need to see a statistic showing the number of bits set in each
> bitmap on average, after some uptime and LRU churn, like running a
> stress-test app for a while with I/O and then inflate the balloon and
> count:
>
> 1) how many bits were set vs the total number of bits used in bitmaps
>
> 2) how many times bitmaps were used vs the bitmap_len = 0 case of a single
> page
>
> My guess would be a very low percentage...