Displaying 20 results from an estimated 354 matches for "starve".
2007 Apr 18
1
[PATCH] Lguest launcher, child starving parent
...the console, it sends a signal to
the parent and then does another select. The problem is that the select
doesn't actually read from the device, and will return immediately since
there is still data pending until it is read. But it's the parent that
reads the data, so the child effectively starves the parent by
spinning while waiting for it to read the data.
The fix I implemented was to have the child wait for a response from the
parent before going on. Since there was already communication between
the parent and child via a pipe, I used that. This time, the data
return...
2008 May 21
0
DRBD Oddness
Hi All,
I'm successfully using DRBD on Xen dom0s across a 2-machine cluster.
However, I have one domU that refuses to start on one of the machines,
but starts fine on the other. Config files for the domU and DRBD are
identical.
Not sure where to start looking to diagnose the problem.
x-host-3:/etc/xen # xm create n-monitor
Using config file "/etc/xen/vm/n-monitor".
2007 Apr 06
1
The best way to protect against starvation?
Hello,
If an ordinary user runs:
-- snip --
cat > starv.c <<EOF
#include <stdlib.h>

int main(void)
{
    for (;;)
        malloc(10000);  /* leak 10 kB per iteration, forever */
}
EOF
cc starv.c
while true
do
    ./a.out &
done
-- snip --
This will quickly starve the operating system (FreeBSD 6.2). I have tried to
limit the number of processes and the amount of memory consumed (in
login.conf).
There is also a file /etc/malloc.conf
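The login.conf limits the poster mentions would look roughly like the fragment below. The capability names (maxproc, memoryuse, datasize) are real FreeBSD login.conf resource limits, but the values are placeholders, and the login class shown is only a sketch; after editing, the database must be rebuilt with `cap_mkdb /etc/login.conf`.

```
default:\
        :maxproc=128:\
        :memoryuse=256M:\
        :datasize=128M:
```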
2023 Feb 23
1
[nbdkit PATCH] server: Don't assert on send if client hangs up early
...annoyance (nbdcopy has already exited, and no further client will be
connecting); but for a longer-running nbdkit server accepting parallel
clients, it means any one client can trigger the SIGABRT by
intentionally queueing multiple NBD_CMD_READ then disconnecting early,
and thereby kill nbdkit and starve other clients. Whether it rises to
the level of CVE depends on whether you consider one client being able
to starve others a privilege escalation (if you are not using TLS,
there are other ways for a bad client to starve peers; if you are
using TLS, then the starved client has the same credentials...
2007 Jun 15
0
sangoma WAN boards with lartc
...------------------------
Message: 4
Date: Fri, 15 Jun 2007 10:16:12 +0200
From: Christian Benvenuti <christian.benvenuti@libero.it>
Subject: [LARTC] Re: PQ questions
To: lartc@mailman.ds9a.nl
Message-ID: <1181895372.2702.20.camel@benve-laptop>
Content-Type: text/plain
Hi,
a class is starved only if those with higher priority are
always (or pretty often) backlogged and do not give the lower
priority classes a chance to transmit.
Therefore, if you transmit at a rate smaller than your CPU(s) and
NIC(s) can handle, you will not experience any starvation.
For example, if you generate 50Mbit t...
2007 Jun 14
16
PQ questions
Hi all,
First, let me say I've been most impressed with how quickly and
professionally people on this list ask and answer questions.
Next, let me say that what I need help with is properly configuring strict
PQ, and gathering certain stats. Specifically:
- I need to create a priority queue with four queues (let's say they are of
high, medium, normal, and low priority)
- I
2013 Aug 30
17
[PATCH] rwsem: add rwsem_is_contended
...ock on this rwsem while they scan the extent tree, and if need_resched()
they will drop the lock and schedule. The transaction commit needs to take a
write lock for this rwsem for a very short period to switch out the commit
roots. If there are a lot of threads doing this caching operation we can starve
out the committers, which slows everybody down. To address this we want to add
this functionality to see if our rwsem has anybody waiting to take a write lock
so we can drop it and schedule for a bit to allow the commit to continue.
Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
---...
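The pattern the patch enables can be sketched in kernel-style C. This is illustrative only, not the actual btrfs code: `rwsem_is_contended()` and `cond_resched()` are real kernel APIs and `commit_root_sem` exists in btrfs, but the `have_more_extents()`/`cache_one_extent()` helpers are hypothetical names standing in for the extent-tree scan.

```
/* Kernel-style sketch (illustrative, not the actual btrfs code): a
 * long-running reader polls rwsem_is_contended() and backs off so a
 * pending writer (the transaction commit) can get the lock. */
down_read(&fs_info->commit_root_sem);
while (have_more_extents()) {
        cache_one_extent();
        if (need_resched() ||
            rwsem_is_contended(&fs_info->commit_root_sem)) {
                up_read(&fs_info->commit_root_sem);
                cond_resched();   /* let the committer take the write lock */
                down_read(&fs_info->commit_root_sem);
        }
}
up_read(&fs_info->commit_root_sem);
```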
2005 May 09
3
how to guarantee 1/numflows bandwidth to each flow dynamically
I am looking for a simple way to guarantee to each flow
going through my traffic control point 1/numflows of
bandwidth. I thought using SFQ would do this effectively
but it appears to be quite unfair: a very high speed
download that fills the pipe easily starves smaller flows to
the point where they become unusable (especially if they are
at all interactive).
Because numflows is dynamic, I'm not sure how I would have
the bandwidth allocated to each flow change dynamically and
automatically as flows are added and removed.
Anyone have an idea how t...
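For reference, the basic SFQ setup the poster found unfair is a one-liner. The `perturb` option is a real SFQ parameter that rehashes flows periodically so two flows that collide into the same hash bucket do not stay stuck sharing one queue; `eth0` and the interval are placeholders.

```
# SFQ hashes flows into buckets; 'perturb' rehashes every N seconds so
# colliding flows do not permanently share a bucket.
tc qdisc add dev eth0 root sfq perturb 10
```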
2007 May 10
6
PRIO and TBF is much better than HTB??
Hello mailing list,
I stand before a mystery and cannot explain it. :-) I want to do shaping and
prioritization, and I have done the following configurations and
simulations. I can't explain why the combination of PRIO and TBF is much
better than HTB (with the prio parameter) alone or in combination with
SFQ.
Here are my example configurations: 2 Traffic Classes http (80 = 0x50) and
2005 Oct 21
2
Ogg Vorbis bitrate peeling bounty on Launchpad
Hello all,
Just a quick note to let you all know that I have placed a bounty on
Lauchpad to get bitrate peeling added to Vorbis. It is a feature that I
think we would all like to have, and would probably pay something to
get, but it hasn't been done.
My request to you is to add to the bounty. I have seeded it with US$20,
which is not enough to motivate a developer to get it done, but I am a
2018 Apr 24
2
[PATCH] vhost_net: use packet weight for rx handler, too
Similar to commit a2ac99905f1e ("vhost-net: set packet weight of
tx polling to 2 * vq size"), we need a packet-based limit for
handle_rx, too; otherwise, under rx flood with small packets,
tx can be delayed for a very long time, even without busypolling.
The pkt limit applied to handle_rx must be the same applied by
handle_tx, or we will get unfair scheduling between rx and tx.
Tying
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 11:01-0400, Waiman Long:
> From: Peter Zijlstra <peterz at infradead.org>
>
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.
I think there is an unwanted scenario on virtual machines:
1) VCPU sets the pending bit and starts spinning.
2) Pending VCPU gets
2004 May 08
2
PRIO qdisc with HTB
...40: htb qdiscs
/ | \
20:1.... 30:1... 40:1... htb classes
Then, if I'm right, when there is traffic in prio qdisc 1 to fully load the link,
users would get it distributed evenly but any low-priority connections in
qdiscs 2, 3... would starve. Which is exactly what I want.
Problem is, when I add filters to enqueue in HTB "below" the prio qdisc, they
aren't working.
I tried to make a simplified version first:
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 3000kbit...
2018 Feb 26
2
tinc 1.1: missing PONG
...te:
> The problem is not the order of the events, the problem is that in the
> Windows version of the event loop, we only handle one event in each loop
> iteration. The select() loop handles all events that have accumulated so
> far, so regardless of the order it handles them, it never starves fd. At
> least, that was what I thought, until I double checked and found out
> that we actually don't in tinc 1.1 (tinc 1.0 is correct though).
Sure, but changing the order of the events changes which one will
be in that first slot.
> So, we have to find a proper fix for both the P...
2019 May 17
0
[PATCH V2 1/4] vhost: introduce vhost_exceeds_weight()
We used to have vhost_exceeds_weight() for vhost-net to:
- prevent vhost kthread from hogging the cpu
- balance the time spent between TX and RX
This function could be useful for vsock and scsi as well. So move it
to vhost.c. The device must specify a weight which counts the number of
requests, or it can also specify a byte_weight which counts the
number of bytes that have been processed.
2001 Nov 02
7
Entropy and DSA keys
...ms that don't have a decent source of
entropy? Am I misinterpreting those discussions?
We are having a problem deploying sshd (no prngd) where sshd refuses to
start because it says there's not enough available entropy. Would
disabling DSA in sshd prevent the system from becoming "entropy starved"?
If I'm missing the point of the latest discussions, someone please correct
me.... what was the real meaning of those discussions about using DSA keys
in sshd?
Thanks,
Ed
Ed Phillips <ed at udel.edu> University of Delaware (302) 831-6082
Systems Programmer III, Network and Sys...
2008 Feb 05
2
[LLVMdev] Advice on implementing fast per-thread data
...r to where the
page is to be mapped, and just map it in the same place in every thread.
Another possibility, and I'm not sure how to do this in LLVM, would be to
sacrifice a register to hold the pointer to the unique per-thread
structure. This would be worthwhile to me even on the register-starved
x86-32. I suppose I could also just add a "hidden" (compiler-added and
-maintained) argument to every function which is the pointer to the
per-thread data.
Using the normal thread-local storage scares me, because I don't know the
performance implications. Obviously calling a s...
2009 Jul 27
3
I/O load distribution
...disk
I/O at the same time. One example would be the "updatedb" cronjob of the
mlocate package. If you have, say, 5 VMs running on a physical system with a
local software RAID-1 as storage and they all run updatedb at the same time,
that causes all of them to run really slowly because they starve each other
fighting over the disk.
What is the best way to soften the impact of such a situation? Does it make
sense to use a hardware raid instead? How would the raid type affect the
performance in this case? Would the fact that the I/O load gets distributed
across multiple spindles in, say,...
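One common software-side mitigation for the updatedb case, independent of the RAID question, is to run the batch job in the idle I/O scheduling class. `ionice -c3` is a real Linux command (the path to updatedb is a placeholder and varies by distribution); whether it helps from inside a guest depends on how the hypervisor schedules guest I/O.

```
# Run updatedb in the "idle" I/O class so it only gets disk time when
# nothing else is competing for it:
ionice -c3 /usr/bin/updatedb
```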