search for: starving

Displaying 20 results from an estimated 354 matches for "starving".

2007 Apr 18
1
[PATCH] Lguest launcher, child starving parent
Glauber noticed long delays between hitting a key and seeing data come up on the virtual console. Looking into this, I found that the wake_parent routine that reads from all devices was actually starving out the parent after sending the parent a signal to wake up. The thing is, the child which takes the console input is recognized by the scheduler as an interactive process. The parent doesn't do so much, so it is recognized more as a CPU hog. So the child easily gets a higher priority than t...
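The snippet describes a classic priority inversion between a fork()ed pair. As a minimal sketch of one possible mitigation (not necessarily what the patch itself does), the child could simply lower its own priority after the fork; read_console_loop() here is a hypothetical stand-in for its device-reading work:

    #include <unistd.h>
    #include <sys/types.h>

    /* Hypothetical stand-in for the child's device-reading loop. */
    extern void read_console_loop(void);

    void launch_console_child(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* The scheduler sees this process as interactive and boosts
             * it; dropping its priority a notch keeps it from starving
             * the CPU-bound parent. */
            nice(1);
            read_console_loop();
            _exit(0);
        }
        /* Parent continues with its own work. */
    }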
2008 May 21
0
DRDB Oddness
...etected.

cat /proc/drbd
version: 8.0.6 (api:86/proto:86)
SVN Revision: 3048 build by phil@mescal, 2007-09-03 10:39:27
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:67460 nr:42100 dw:109560 dr:276 al:8 bm:13 lo:0 pe:0 ua:0 ap:0
    resync: used:0/31 hits:79 misses:13 starving:0 dirty:0 changed:13
    act_log: used:0/257 hits:14489 misses:8 starving:0 dirty:0 changed:8
 1: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
    ns:0 nr:349880 dw:349880 dr:0 al:0 bm:7 lo:0 pe:0 ua:0 ap:0
    resync: used:0/31 hits:67 misses:7 starving:0 dirty:0 changed:7...
2007 Apr 06
1
The best way to protect against starvation?
Hello, If an ordinary user runs:

-- snip --
cat > starv.c <<EOF
main() {
    char *point;
    while (1) {
        point = (char *) malloc(10000);
    }
}
EOF
cc starv.c
while true
do
    ./a.out &
done
-- snip --

This will quickly starve the operating system (FreeBSD 6.2). I have tried to limit the number of processes and the amount of memory consumed (in login.conf). There is also a file /etc/malloc.conf
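For readers wondering what those login.conf limits amount to mechanically, here is a minimal sketch, assuming a parent process (such as a login wrapper) applies the caps before running untrusted code; the values are arbitrary for the example:

    #include <sys/resource.h>

    /* Cap address space and process count so a fork/malloc bomb like the
     * one above hits a wall instead of exhausting the machine. */
    static void apply_limits(void)
    {
        struct rlimit as = { 256UL * 1024 * 1024, 256UL * 1024 * 1024 };
        struct rlimit np = { 100, 100 };

        setrlimit(RLIMIT_AS, &as);     /* max 256 MB of address space */
        setrlimit(RLIMIT_NPROC, &np);  /* max 100 processes per user */
    }

With RLIMIT_AS in place each a.out fails at malloc() once it reaches the cap, and RLIMIT_NPROC stops the `while true` fork loop at the process limit.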
2023 Feb 23
1
[nbdkit PATCH] server: Don't assert on send if client hangs up early
libnbd's copy/copy-nbd-error.sh was triggering an assertion failure in nbdkit:

$ nbdcopy -- [ nbdkit --exit-with-parent -v --filter=error pattern 5M error-pread-rate=0.5 ] null:
...
nbdkit: pattern.2: debug: error-inject: pread count=262144 offset=4718592
nbdkit: pattern.2: debug: pattern: pread count=262144 offset=4718592
nbdkit: pattern.1: debug: error-inject: pread count=262144
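The failure mode in the subject line, a client hanging up before the server finishes replying, usually comes down to how send errors are handled. A minimal sketch of the defensive pattern (illustrative only, not nbdkit's actual code):

    #include <sys/socket.h>
    #include <errno.h>

    /* Returns 0 on success, -1 if the client is gone; never asserts. */
    static int reply_to_client(int fd, const void *buf, size_t len)
    {
        ssize_t r = send(fd, buf, len, MSG_NOSIGNAL);
        if (r == -1 && (errno == EPIPE || errno == ECONNRESET))
            return -1;   /* client hung up early: drop the reply quietly */
        return r == (ssize_t)len ? 0 : -1;
    }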
2007 Jun 15
0
sangoma WAN boards with lartc
...not know what the "priomap" option is
> used for)
>
> > My questions are:
> >
> > - What if anything is missing/requiring change in my config given the
> > stated requirements?
>
> Your config does not prevent a higher priority class from starving
> a lower priority class.

Exactly. That is the requirement.

> You can prevent it in two different ways (at least):

Don't want to prevent it right now.

> 1) You can assign a TBF qdisc (Token Bucket) to the PRIO classes
> TBF: http://www.lartc.org/howto/lartc.qdisc....
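Since TBF is the suggested way to cap a PRIO class, a minimal token-bucket sketch may make the mechanism concrete; all names and the caller-supplied clock are invented for this example:

    #include <stdbool.h>

    struct token_bucket {
        double tokens;  /* current credit, in bytes */
        double burst;   /* bucket depth: maximum credit */
        double rate;    /* refill rate, bytes per second */
        double last;    /* time of last refill, in seconds */
    };

    /* A class may transmit only while it has tokens, so even the highest
     * PRIO band is held to `rate` on average and cannot monopolize the
     * link. */
    static bool tb_allow(struct token_bucket *tb, double now, double pkt_bytes)
    {
        tb->tokens += (now - tb->last) * tb->rate;
        if (tb->tokens > tb->burst)
            tb->tokens = tb->burst;
        tb->last = now;
        if (tb->tokens < pkt_bytes)
            return false;          /* over rate: hold the packet */
        tb->tokens -= pkt_bytes;
        return true;
    }

Attached to the top PRIO band, such a limiter provides exactly the starvation protection the reply describes (and which the poster here explicitly does not want yet).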
2007 Jun 14
16
PQ questions
Hi all, First, let me say I've been most impressed with how quickly and professionally people on this list ask and answer questions. Next, let me say that what I need help with is properly configuring strict PQ and gathering certain stats. Specifically:
- I need to create a priority queue with four queues (let's say they are of high, medium, normal, and low priority)
- I
2013 Aug 30
17
[PATCH] rwsem: add rwsem_is_contended
Btrfs uses an rwsem to control access to its extent tree. Threads will hold a read lock on this rwsem while they scan the extent tree, and if need_resched() is set they will drop the lock and schedule. The transaction commit needs to take a write lock on this rwsem for a very short period to switch out the commit roots. If there are a lot of threads doing this caching operation we can starve out the
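The subject's rwsem_is_contended() enables the pattern the paragraph describes; a sketch, with `sem` standing in for the btrfs extent tree semaphore and scan_one_entry() a hypothetical placeholder for the per-item caching work:

    static DECLARE_RWSEM(sem);

    extern bool scan_one_entry(void);   /* hypothetical per-item work */

    static void cache_extents(void)
    {
            down_read(&sem);
            while (scan_one_entry()) {
                    /* Back off when we need to reschedule, or as soon as
                     * a writer (the transaction commit) is waiting. */
                    if (need_resched() || rwsem_is_contended(&sem)) {
                            up_read(&sem);
                            cond_resched();
                            down_read(&sem);
                    }
            }
            up_read(&sem);
    }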
2005 May 09
3
how to guarantee 1/numflows bandwidth to each flow dynamically
I am looking for a simple way to guarantee each flow going through my traffic control point 1/numflows of the bandwidth. I thought using SFQ would do this effectively, but it appears to be quite unfair: a very high-speed download that fills the pipe easily starves smaller flows to the point where they become unusable (especially if they are at all interactive). Because numflows is dynamic,
2007 May 10
6
PRIO and TBF is much better than HTB??
Hello mailing list, I stand before a mystery and cannot explain it. :) I want to do shaping and prioritization, and I have done the following configurations and simulations. I can't explain why the combination of PRIO and TBF is much better than HTB (with the prio parameter) alone or in combination with SFQ. Here are my example configurations: 2 traffic classes, http (80 = 0x50) and
2005 Oct 21
2
Ogg Vorbis bitrate peeling bounty on Launchpad
...rate peeling added to Vorbis. It is a feature that I think we would all like to have, and would probably pay something to get, but it hasn't been done. My request to you is to add to the bounty. I have seeded it with US$20, which is not enough to motivate a developer to get it done, but I am a starving student with very little spare cash! If just a few from on here added $10 or something then this feature may finally be implemented and be more than just a 'possibility'. https://launchpad.net/bounties/ogg-vorbis-bitrate-peeling Regards, Aaron -- http://www.whitehouse.org.nz
2018 Apr 24
2
[PATCH] vhost_net: use packet weight for rx handler, too
...100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -46,8 +46,10 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
 #define VHOST_NET_WEIGHT 0x80000
 
 /* Max number of packets transferred before requeueing the job.
- * Using this limit prevents one virtqueue from starving rx. */
-#define VHOST_NET_PKT_WEIGHT(vq) ((vq)->num * 2)
+ * Using this limit prevents one virtqueue from starving others with small
+ * pkts.
+ */
+#define VHOST_NET_PKT_WEIGHT 256
 
 /* MAX number of TX used buffers for outstanding zerocopy */
 #define VHOST_MAX_PEND 128
@@ -587,7 +589,7 @@ st...
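A schematic of the weight-limited service loop those constants implement; handle_one_packet() and requeue_work() are hypothetical stand-ins for the real vhost plumbing:

    #define VHOST_NET_WEIGHT     0x80000 /* byte budget per run */
    #define VHOST_NET_PKT_WEIGHT 256     /* packet budget per run */

    /* Hypothetical helpers standing in for the real vhost code. */
    extern long handle_one_packet(struct vhost_virtqueue *vq);
    extern void requeue_work(struct vhost_virtqueue *vq);

    static void service_tx(struct vhost_virtqueue *vq)
    {
            long total_len = 0, len;
            int pkts;

            for (pkts = 0; pkts < VHOST_NET_PKT_WEIGHT; pkts++) {
                    len = handle_one_packet(vq);
                    if (len < 0)
                            break;                 /* ring is empty */
                    total_len += len;
                    if (total_len >= VHOST_NET_WEIGHT)
                            break;                 /* byte budget spent */
            }
            /* Budget spent but work may remain: requeue so the other
             * virtqueues (and rx) get scheduled instead of starving. */
            if (pkts == VHOST_NET_PKT_WEIGHT || total_len >= VHOST_NET_WEIGHT)
                    requeue_work(vq);
    }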
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
...we have PLE and lock holder isn't running [1]
   - the hypervisor randomly preempts us
3) Lock holder unlocks while pending VCPU is waiting in queue.
4) Subsequent lockers will see free lock with set pending bit and will loop in trylock's 'for (;;)'
   - the worst-case is lock starving [2]
   - PLE can save us from wasting whole timeslice

Retry threshold is the easiest solution, regardless of its ugliness [4]. Another minor design flaw is that the formerly first VCPU gets appended to the tail when it decides to queue; is the performance gain worth it? Thanks.

---
1: Pause Loop...
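A userspace model of the retry-threshold idea from point 4 may help; this is not the kernel's qspinlock code, and the constant and bit encoding are invented:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define LOCKED  0x01u
    #define PENDING 0x02u
    #define PENDING_RETRY_THRESHOLD 512  /* bound on the trylock loop */

    /* Spin on the fast path only a bounded number of times; a locker that
     * keeps seeing "free but pending" gives up and queues, so it cannot
     * starve the pending waiter indefinitely. */
    static bool try_fast_acquire(atomic_uint *lock)
    {
        for (int retries = 0; retries < PENDING_RETRY_THRESHOLD; retries++) {
            unsigned int expected = 0;   /* free, no pending bit */
            if (atomic_compare_exchange_weak(lock, &expected, LOCKED))
                return true;
        }
        return false;   /* threshold hit: fall back to the queueing path */
    }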
2004 May 08
2
PRIO qdisc with HTB
Hi, I'm trying to use the prio qdisc with htb, however not the "usual" way (like, for example, FairNAT). Here is my idea: root has HTB shaping traffic to link speed -> then come the PRIO queues -> each prio queue has HTB with subclasses for each user. It should look like this:

1: htb qdisc
     |
    1:1 htb class
2018 Feb 26
2
tinc 1.1: missing PONG
On Mon, 26 Feb 2018 23:01:29 +0100, Guus Sliepen wrote:
> The problem is not the order of the events, the problem is that in the
> Windows version of the event loop, we only handle one event in each loop
> iteration. The select() loop handles all events that have accumulated so
> far, so regardless of the order it handles them, it never starves an fd.
> At least, that was what I
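For contrast with the one-event-per-iteration Windows loop, here is a minimal select()-style iteration of the kind described, which services every ready descriptor before blocking again (illustrative only, not tinc's code):

    #include <sys/select.h>

    /* One loop iteration: wait, then handle *all* fds that became ready,
     * so no descriptor is starved by a busier one. */
    static void loop_once(const int *fds, int n, int maxfd,
                          void (*handle)(int fd))
    {
        fd_set rset;

        FD_ZERO(&rset);
        for (int i = 0; i < n; i++)
            FD_SET(fds[i], &rset);

        if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0)
            return;

        for (int i = 0; i < n; i++)
            if (FD_ISSET(fds[i], &rset))
                handle(fds[i]);
    }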
2019 May 17
0
[PATCH V2 1/4] vhost: introduce vhost_exceeds_weight()
...scsi.c index 618fb64..d830579 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -57,6 +57,12 @@
 #define VHOST_SCSI_PREALLOC_UPAGES 2048
 #define VHOST_SCSI_PREALLOC_PROT_SGLS 2048
 
+/* Max number of requests before requeueing the job.
+ * Using this limit prevents one virtqueue from starving others with
+ * request.
+ */
+#define VHOST_SCSI_WEIGHT 256
+
 struct vhost_scsi_inflight {
 	/* Wait for the flush operation to finish */
 	struct completion comp;
@@ -1622,7 +1628,8 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 	vqs[i] = &vs->vqs[i].vq;
 	vs->vq...
2001 Nov 02
7
Entropy and DSA keys
I remember a discussion to the effect that using DSA keys in sshd increases the requirement for random bits available on the system... and that this requirement (was it a 128 bit random number per connection?) presents security problems on systems that don't have a decent source of entropy? Am I misinterpreting those discussions? We are having a problem deploying sshd (no prngd) where sshd
2008 Feb 05
2
[LLVMdev] Advice on implementing fast per-thread data
Hello- I'm looking to implement a new programming language using LLVM as a back-end. Generally it's looking very good; I have only one question. The language is going to be an ML-style language, similar to Haskell or Ocaml, except explicitly multithreaded and (like Haskell but unlike Ocaml) purely functional. But this means that speed of allocation is essential- purely functional
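The kind of allocator such a runtime usually wants can be sketched in a few lines: a per-thread bump pointer makes the hot path a single increment with no locking. Chunk size and refill policy are invented for the example, and a real collector would also track chunks for reclamation:

    #include <stdlib.h>
    #include <stddef.h>

    #define CHUNK_SIZE (1 << 20)   /* 1 MiB per-thread chunks (arbitrary) */

    static __thread char *bump_ptr;
    static __thread char *bump_end;

    /* Hot path: align, bump, return. Slow path: grab a fresh chunk. */
    void *fast_alloc(size_t n)
    {
        n = (n + 15) & ~(size_t)15;              /* 16-byte alignment */
        if ((size_t)(bump_end - bump_ptr) < n) {
            size_t chunk = n > CHUNK_SIZE ? n : CHUNK_SIZE;
            char *c = malloc(chunk);
            if (!c)
                return NULL;
            bump_ptr = c;
            bump_end = c + chunk;
        }
        void *p = bump_ptr;
        bump_ptr += n;
        return p;
    }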
2009 Jul 27
3
I/O load distribution
Hi, What is the best way to deal with I/O load when running several VMs on a physical machine with local or remote storage? What I'm primarily worried about is the case where several VMs cause disk I/O at the same time. One example would be the "updatedb" cronjob of the mlocate package. If you have, say, 5 VMs running on a physical system with a local software RAID-1 as storage and