search for: starv

Displaying 20 results from an estimated 354 matches for "starv".

2007 Apr 18
1
[PATCH] Lguest launcher, child starving parent
Glauber noticed long delays between hitting a key and seeing data come up on the virtual console. Looking into this, I found that the wake_parent routine that reads from all devices was actually starving out the parent after sending the parent a signal to wake up. The thing is, the child which takes the console input is recognized by the scheduler as an interactive process. The parent doesn't do so much, so it is recognized more as a CPU hog. So the child easily gets a higher priority tha...
2008 May 21
0
DRBD Oddness
...etected. cat /proc/drbd version: 8.0.6 (api:86/proto:86) SVN Revision: 3048 build by phil@mescal, 2007-09-03 10:39:27 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r--- ns:67460 nr:42100 dw:109560 dr:276 al:8 bm:13 lo:0 pe:0 ua:0 ap:0 resync: used:0/31 hits:79 misses:13 starving:0 dirty:0 changed:13 act_log: used:0/257 hits:14489 misses:8 starving:0 dirty:0 changed:8 1: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r--- ns:0 nr:349880 dw:349880 dr:0 al:0 bm:7 lo:0 pe:0 ua:0 ap:0 resync: used:0/31 hits:67 misses:7 starving:0 dirty:0 change...
2007 Apr 06
1
The best way to protect against starvation?
Hello, If an ordinary user runs: -- snip -- cat > starv.c <<EOF main(){ char *point; while(1) { point = ( char * ) malloc(10000); }} EOF cc starv.c while true do ./a.out & done -- snip -- This will quickly starve the operating system (FreeBSD 6.2). I have tried to limit the number of processes and the amount of memory consumed (in login.conf)...
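The usual answer to this thread's question is per-process resource limits. The login.conf route mentioned in the post is the FreeBSD-native way; as a hedged sketch of the same idea applied programmatically, setrlimit(2) can cap the address space so the runaway malloc() loop hits a wall (malloc returns NULL) instead of starving the machine. The function names and the MiB figures below are illustrative, not from the thread:

```c
#include <stddef.h>
#include <stdlib.h>
#include <sys/resource.h>

/* Cap this process's address space; afterwards, allocations beyond the
 * limit fail with NULL instead of exhausting system memory. */
static int cap_address_space(rlim_t bytes)
{
    struct rlimit rl = { .rlim_cur = bytes, .rlim_max = bytes };
    return setrlimit(RLIMIT_AS, &rl);
}

/* Allocate 1 MiB chunks until the limit refuses us; returns MiB obtained.
 * Only call this after cap_address_space(), or it will not terminate. */
static size_t allocate_until_refused(void)
{
    size_t mib = 0;
    while (malloc(1024 * 1024) != NULL)   /* leaked on purpose: we want the wall */
        mib++;
    return mib;
}
```

With a cap in place, each a.out in the fork loop fails quickly instead of dragging the whole box down; the per-user process limit handles the `while true` part.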
2023 Feb 23
1
[nbdkit PATCH] server: Don't assert on send if client hangs up early
...annoyance (nbdcopy has already exited, and no further client will be connecting); but for a longer-running nbdkit server accepting parallel clients, it means any one client can trigger the SIGABRT by intentionally queueing multiple NBD_CMD_READ then disconnecting early, and thereby kill nbdkit and starve other clients. Whether it rises to the level of CVE depends on whether you consider one client being able to starve others a privilege escalation (if you are not using TLS, there are other ways for a bad client to starve peers; if you are using TLS, then the starved client has the same credential...
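The fix the patch describes, treating a client that hung up early as a normal disconnect rather than asserting, follows a common shape on the send path: pass MSG_NOSIGNAL so the write fails with EPIPE instead of raising SIGPIPE, then map "peer gone" errno values to a quiet drop. This is a hedged userspace sketch, not nbdkit's actual code; `send_reply` is an invented helper name:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send a full reply; return false if the client is already gone. */
static bool send_reply(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t r = send(fd, p, len, MSG_NOSIGNAL);  /* no SIGPIPE */
        if (r < 0) {
            if (errno == EINTR)
                continue;
            /* EPIPE/ECONNRESET: client hung up early -- drop the reply
             * quietly instead of asserting; a fuller implementation
             * would treat other errno values as real errors. */
            return false;
        }
        p += r;
        len -= r;
    }
    return true;
}
```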
2007 Jun 15
0
sangoma WAN boards with lartc
...not know what the "priomap" option is > used for) > > >My questions are: > > > >- What if anything is missing/requiring change in my config given the > stated > >requirements? > > Your config does not prevent a higher priority class from starving > a lower priority class. Exactly. That is the requirement. > You can prevent it in two different ways (at > least): Don't want to prevent it right now. > > 1) You can assign a TBF qdisc (Token Bucket) to the PRIO classes > TBF: http://www.lartc.org/howto/lartc.qdi...
2007 Jun 14
16
PQ questions
Hi all, First, let me say I've been most impressed with how quickly and professionally people on this list ask and answer questions. Next, let me say that what I need help with is properly configuring strict PQ, and gathering certain stats. Specifically: - I need to create a priority queue with four queues (let's say they are of high, medium, normal, and low priority) - I
2013 Aug 30
17
[PATCH] rwsem: add rwsem_is_contended
...ock on this rwsem while they scan the extent tree, and if need_resched() they will drop the lock and schedule. The transaction commit needs to take a write lock for this rwsem for a very short period to switch out the commit roots. If there are a lot of threads doing this caching operation we can starve out the committers, which slows everybody down. To address this we want to add this functionality to see if our rwsem has anybody waiting to take a write lock so we can drop it and schedule for a bit to allow the commit to continue. Thanks, Signed-off-by: Josef Bacik <jbacik@fusionio.com> --...
2005 May 09
3
how to guarantee 1/numflows bandwidth to each flow dynamically
I am looking for a simple way to guarantee to each flow going through my traffic control point 1/numflows of bandwidth. I thought using SFQ would do this effectively but it appears to be quite unfair: a very high speed download that fills the pipe easily starves smaller flows to the point where it becomes unusable (especially if they are at all interactive) Because numflows is dynamic, I'm not sure how I would have the bandwidth allocated to each flow change dynamically and automatically as flows are added and removed. Anyone have an idea how...
2007 May 10
6
PRIO and TBF is much better than HTB??
Hello mailing list, I stand before a mystery and cannot explain it. I want to do shaping and prioritization and I have done the following configurations and simulations. I can't explain why the combination of PRIO and TBF is much better than HTB (with the prio parameter) alone or in combination with SFQ. Here are my example configurations: 2 traffic classes, http (80 = 0x50) and
2005 Oct 21
2
Ogg Vorbis bitrate peeling bounty on Launchpad
...rate peeling added to Vorbis. It is a feature that I think we would all like to have, and would probably pay something to get, but it hasn't been done. My request to you is to add to the bounty. I have seeded it with US$20, which is not enough to motivate a developer to get it done, but I am a starving student with very little spare cash! If just a few from on here added $10 or something then this feature may finally be implemented and be more than just a 'possibility'. https://launchpad.net/bounties/ogg-vorbis-bitrate-peeling Regards, Aaron -- http://www.whitehouse.org.nz
2018 Apr 24
2
[PATCH] vhost_net: use packet weight for rx handler, too
...100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -46,8 +46,10 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;" #define VHOST_NET_WEIGHT 0x80000 /* Max number of packets transferred before requeueing the job. - * Using this limit prevents one virtqueue from starving rx. */ -#define VHOST_NET_PKT_WEIGHT(vq) ((vq)->num * 2) + * Using this limit prevents one virtqueue from starving others with small + * pkts. + */ +#define VHOST_NET_PKT_WEIGHT 256 /* MAX number of TX used buffers for outstanding zerocopy */ #define VHOST_MAX_PEND 128 @@ -587,7 +589,7 @@...
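The weight mechanism in this patch is simply a bounded batch: handle at most N packets in one pass, then requeue the job so sibling virtqueues get a turn. A hedged sketch of that shape (the 256 comes from the patch's VHOST_NET_PKT_WEIGHT; `demo_queue` and the handler are invented stand-ins for the real per-packet work):

```c
#include <stdbool.h>
#include <stddef.h>

#define PKT_WEIGHT 256   /* budget per pass, as in the patch */

typedef bool (*pkt_handler)(void *ctx);   /* false when the queue is drained */

/* Process at most PKT_WEIGHT packets.  Returns true if the budget was
 * spent with work remaining, i.e. the job should be requeued rather
 * than keep running and starve other virtqueues. */
static bool process_with_weight(pkt_handler handle_one, void *ctx)
{
    for (int done = 0; done < PKT_WEIGHT; done++)
        if (!handle_one(ctx))
            return false;   /* drained: no requeue needed */
    return true;            /* budget spent: yield to siblings */
}

/* Toy queue for demonstration. */
struct demo_queue { int remaining; };

static bool demo_handle(void *ctx)
{
    struct demo_queue *q = ctx;
    if (q->remaining == 0)
        return false;
    q->remaining--;         /* "transmit" one packet */
    return true;
}
```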
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
...we have PLE and lock holder isn't running [1] - the hypervisor randomly preempts us 3) Lock holder unlocks while pending VCPU is waiting in queue. 4) Subsequent lockers will see free lock with set pending bit and will loop in trylock's 'for (;;)' - the worst-case is lock starving [2] - PLE can save us from wasting whole timeslice Retry threshold is the easiest solution, regardless of its ugliness [4]. Another minor design flaw is that formerly first VCPU gets appended to the tail when it decides to queue; is the performance gain worth it? Thanks. --- 1: Pause Lo...
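The "retry threshold" the poster calls the easiest solution is bounded trylock spinning: spin a fixed number of times, then give up and take the queue path so trylockers cannot starve queued waiters indefinitely. An illustrative C11-atomics sketch (RETRY_THRESHOLD is an arbitrary example value, and atomic_flag stands in for the real qspinlock word):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define RETRY_THRESHOLD 64   /* illustrative bound, not the kernel's */

/* Try to take the lock a bounded number of times.  On failure the
 * caller falls back to the queued slow path instead of looping in
 * trylock's 'for (;;)' forever. */
static bool bounded_trylock(atomic_flag *lock)
{
    for (int i = 0; i < RETRY_THRESHOLD; i++)
        if (!atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
            return true;     /* got the lock */
    return false;            /* give up: queue rather than starve */
}
```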
2004 May 08
2
PRIO qdisc with HTB
...40: htb qdiscs / | \ 20:1.... 30:1... 40:1... htb classes Then, if I'm right, when there is traffic in prio qdisc 1 to fully load link, users would get it distributed evenly but any low priority connections in qdiscs 2,3... would starve. Which is exactly what I want. Problem is, when I add filters to enqueue in HTB "below" prio qdisc, they aren't working. I tried to make a simplified version first: tc qdisc add dev eth0 root handle 1: htb tc class add dev eth0 parent 1: classid 1:1 htb rate 3000kbit...
2018 Feb 26
2
tinc 1.1: missing PONG
...te: > The problem is not the order of the events, the problem is that in the > Windows version of the event loop, we only handle one event in each loop > iteration. The select() loop handles all events that have accumulated so > far, so regardless of the order it handles them, it never starves fd. At > least, that was what I thought, until I double checked and found out > that we actually don't in tinc 1.1 (tinc 1.0 is correct though). Sure, but changing the order of the events changes which one will be in that first slot. > So, we have to find a proper fix for both the...
2019 May 17
0
[PATCH V2 1/4] vhost: introduce vhost_exceeds_weight()
...scsi.c index 618fb64..d830579 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -57,6 +57,12 @@ #define VHOST_SCSI_PREALLOC_UPAGES 2048 #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048 +/* Max number of requests before requeueing the job. + * Using this limit prevents one virtqueue from starving others with + * request. + */ +#define VHOST_SCSI_WEIGHT 256 + struct vhost_scsi_inflight { /* Wait for the flush operation to finish */ struct completion comp; @@ -1622,7 +1628,8 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) vqs[i] = &vs->vqs[i].vq; vs->...
2001 Nov 02
7
Entropy and DSA keys
...ms that don't have a decent source of entropy? Am I misinterpreting those discussions? We are having a problem deploying sshd (no prngd) where sshd refuses to start because it says there's not enough available entropy. Would disabling DSA in sshd prevent the system from becoming "entropy starved"? If I'm missing the point of the latest discussions, someone please correct me.... what was the real meaning of those discussions about using DSA keys in sshd? Thanks, Ed Ed Phillips <ed at udel.edu> University of Delaware (302) 831-6082 Systems Programmer III, Network and Sy...
2008 Feb 05
2
[LLVMdev] Advice on implementing fast per-thread data
...r to where the page is to be mapped, and just map it in the same place in every thread. Another possibility, and I'm not sure how to do this in LLVM, would be to sacrifice a register to hold the pointer to the unique per-thread structure. This would be worthwhile to me even on the register-starved x86-32. I suppose I could also just add a "hidden" (compiler-added and -maintained) argument to every function which is the pointer to the per-thread data. Using the normal thread-local storage scares me, because I don't know the performance implications. Obviously calling a...
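For comparison with the sacrifice-a-register idea, the "normal thread-local storage" the poster is wary of usually compiles to a single register-relative access (off %fs/%gs on x86) when the variable lives in the main executable. A small sketch with GCC/Clang's `__thread` (illustrative C, not LLVM API; the counter and worker are invented):

```c
#include <pthread.h>
#include <stddef.h>

/* Each thread gets its own copy; access is typically one TLS-register-
 * relative load, no function call, in the initial-exec TLS model. */
static __thread long per_thread_counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++)
        per_thread_counter++;          /* touches only this thread's copy */
    return (void *)per_thread_counter; /* report our private total */
}
```

The performance worry mostly applies to the general-dynamic TLS model used for shared libraries, where each access may go through __tls_get_addr.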
2009 Jul 27
3
I/O load distribution
...disk I/O at the same time. One example would be the "updatedb" cronjob of the mlocate package. If you have say 5 VMs running on a physical system with a local software raid-1 as storage and they all run updatedb at the same time, that causes all of them to run really slowly because they starve each other fighting over the disk. What is the best way to soften the impact of such a situation? Does it make sense to use a hardware raid instead? How would the raid type affect the performance in this case? Would the fact that the I/O load gets distributed across multiple spindles in, say,...