search for: timeslice

Displaying 14 results from an estimated 99 matches for "timeslice".

2013 Nov 13
3
[Patch] credit: Update other parameters when setting tslice_ms
From: Nate Studer <nate.studer@dornerworks.com> Add a utility function to update the rest of the timeslice accounting fields when updating the timeslice of the credit scheduler, so that capped CPUs behave correctly. Before this patch, changing the timeslice to a value higher than the default would result in a domain not utilizing its full capacity, and changing the timeslice to a value lower than the def...
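The fix described above is easy to sketch in isolation. Below is a minimal, self-contained C illustration of the idea, keeping every field derived from the timeslice in sync whenever it changes; the struct, field names, constants, and the 30 ms default are assumptions for illustration, not the actual sched_credit.c identifiers.

    #include <stdio.h>

    /* Illustrative stand-ins for credit1's accounting constants. */
    #define TICKS_PER_TSLICE 3
    #define CREDITS_PER_MSEC 10

    struct credit_params {
        unsigned int tslice_ms;          /* the scheduler timeslice      */
        unsigned int tick_period_us;     /* derived: time between ticks  */
        unsigned int credits_per_tslice; /* derived: credit burned/slice */
    };

    /* The utility function's job: never set tslice_ms on its own;
     * recompute everything derived from it at the same time. */
    static void set_tslice(struct credit_params *p, unsigned int tslice_ms)
    {
        p->tslice_ms          = tslice_ms;
        p->tick_period_us     = tslice_ms * 1000 / TICKS_PER_TSLICE;
        p->credits_per_tslice = CREDITS_PER_MSEC * tslice_ms;
    }

    int main(void)
    {
        struct credit_params p;
        set_tslice(&p, 30);   /* assumed default of 30 ms */
        printf("tick period %u us, credits/slice %u\n",
               p.tick_period_us, p.credits_per_tslice);
        return 0;
    }

Left stale, the derived fields would keep burning credit at the old rate, which is how a capped domain ends up throttled against the wrong budget.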
2011 Sep 01
4
[PATCH] xen,credit1: Add variable timeslice
Add a xen command-line parameter, sched_credit_tslice_ms, to set the timeslice of the credit1 scheduler. Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff -r 4a4882df5649 -r 782284c5b1bc xen/common/sched_credit.c
--- a/xen/common/sched_credit.c Wed Aug 31 15:23:49 2011 +0100
+++ b/xen/common/sched_credit.c Thu Sep 01 16:29:50 2011 +0100
@@ -41,15 +41,9 @...
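Once such a parameter exists it would be set on the hypervisor's boot line; a hypothetical bootloader entry (exact syntax and paths depend on the bootloader and are made up here) might look like:

    multiboot /boot/xen.gz sched_credit_tslice_ms=10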
2006 Mar 02
5
Milliwatt Analyzer available
...sed as argument. The period duration must be a multiple of 0.5 ms, thus the valid frequencies are: 2000 Hz, 1000 Hz, 666.666666667 Hz, 500 Hz, ... Furthermore the application computes the ripple on that tone. In order to detect audio gaps and short noise on the line, one can define a threshold and a timeslice duration (typically 1 s to 0.1 s), and the application will compute the ripple for each timeslice and count the timeslices with a ripple greater than the given threshold. Thus the application is a tool to verify the line quality, e.g. for least-cost-but-not-too-bad line routings. For convenience Mwa...
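As a concrete reading of that description, here is a small self-contained C sketch that computes the ripple per timeslice and counts the slices whose ripple exceeds the threshold; all names are invented, and the real application may define ripple differently (here it is simply max minus min within the slice).

    #include <stdio.h>
    #include <stddef.h>

    /* Ripple of one timeslice: peak-to-peak variation of the samples. */
    static double slice_ripple(const double *s, size_t n)
    {
        double lo = s[0], hi = s[0];
        for (size_t i = 1; i < n; i++) {
            if (s[i] < lo) lo = s[i];
            if (s[i] > hi) hi = s[i];
        }
        return hi - lo;
    }

    /* Count timeslices whose ripple exceeds the threshold. */
    static size_t count_bad_slices(const double *s, size_t total,
                                   size_t slice_len, double threshold)
    {
        size_t bad = 0;
        for (size_t off = 0; off + slice_len <= total; off += slice_len)
            if (slice_ripple(s + off, slice_len) > threshold)
                bad++;
        return bad;
    }

    int main(void)
    {
        /* 8 samples, slices of 4: the second slice contains a dropout. */
        double line[] = { 1.0, 1.01, 0.99, 1.0, 0.2, 1.0, 1.0, 1.0 };
        printf("%zu bad slice(s)\n", count_bad_slices(line, 8, 4, 0.1));
        return 0;
    }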
2005 Jan 05
3
Sharing/splitting bandwidth on a link while bandwidth of the link is variable (or unknown) ?
Hello, I want to share/split bandwidth on a link with unknown bandwidth. I want to share/split the bandwidth exactly (for example: FTP 30%, HTTP 20%, or 30% for a group of PCs, and so forth). The "Traffic-Control-HOWTO" says that the PRIO scheduler is an ideal match for "Handling a link with a variable (or unknown) bandwidth". But the PRIO scheduler cannot exactly share/split
2016 Nov 15
2
[GIT PULL v2 1/5] processor.h: introduce cpu_relax_yield
...he same, but on > some architectures cpu_relax can add some latency. > For example on power, sparc64 and arc, cpu_relax can shift the CPU > towards other hardware threads in an SMT environment. > On s390 cpu_relax does even more: it uses a hypercall to the > hypervisor to give up the timeslice. > In contrast to the SMT yielding this can result in larger latencies. > In some places this latency is unwanted, so another variant > "cpu_relax_lowlatency" was introduced. Before this is used in more > and more places, let's revert the logic and provide a cpu_relax_yield ...
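To make the trade-off concrete, the kind of spin-wait loop in question looks roughly like the sketch below; the stub body of cpu_relax_yield stands in for whatever the architecture provides (a cheap pause on most CPUs, the yield hypercall described above on s390), and all names are illustrative, not the kernel's.

    #include <stdatomic.h>
    #include <stdio.h>

    /* Stand-in for the arch hook: a pause/nop on most CPUs; described
     * above as a "give up my timeslice" hypercall on s390. */
    static inline void cpu_relax_yield(void)
    {
        /* e.g. __asm__ volatile("pause") on x86 */
    }

    /* Spin until another CPU flips the flag, relaxing (or yielding the
     * timeslice) on every iteration. */
    static void wait_for(atomic_int *flag)
    {
        while (!atomic_load_explicit(flag, memory_order_acquire))
            cpu_relax_yield();
    }

    int main(void)
    {
        atomic_int done = 1;   /* pretend the other CPU already finished */
        wait_for(&done);
        puts("proceeded");
        return 0;
    }

Yielding on each iteration helps when the flag-setter shares the physical CPU, but, as the message notes, the hypercall path can add latency that some call sites cannot afford.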
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
...- the hypervisor randomly preempts us 3) Lock holder unlocks while pending VCPU is waiting in queue. 4) Subsequent lockers will see a free lock with the pending bit set and will loop in trylock's 'for (;;)' - the worst case is lock starvation [2] - PLE can save us from wasting a whole timeslice Retry threshold is the easiest solution, regardless of its ugliness [4]. Another minor design flaw is that the formerly-first VCPU gets appended to the tail when it decides to queue; is the performance gain worth it? Thanks. --- 1: Pause Loop Exiting is almost certain to vmexit in that case: we...
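A rough sketch of that retry-threshold fallback follows; the lock-word layout, the bound, and all names are illustrative, not the kernel's qspinlock encoding.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative lock-word bits, not the real qspinlock layout. */
    #define LOCKED  0x01u
    #define PENDING 0x02u

    #define RETRY_THRESHOLD 512   /* assumed bound; tuning it is the hard part */

    /* A locker that sees the pending bit set retries a bounded number of
     * times instead of looping forever, which (if the pending VCPU was
     * preempted) could otherwise burn its whole timeslice. */
    static bool bounded_trylock(atomic_uint *lock)
    {
        for (int tries = 0; tries < RETRY_THRESHOLD; tries++) {
            unsigned int val = atomic_load(lock);
            if (val & (LOCKED | PENDING))
                continue;   /* holder running, or it's the pending VCPU's turn */
            if (atomic_compare_exchange_weak(lock, &val, val | LOCKED))
                return true;
        }
        return false;       /* bounded out: join the queue instead */
    }

    int main(void)
    {
        atomic_uint lock = 0;
        printf("acquired: %d\n", bounded_trylock(&lock));
        return 0;
    }

The bound turns case 4) into a one-time cost: after RETRY_THRESHOLD failed attempts the locker queues rather than spinning away the rest of its timeslice.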
2005 Mar 07
2
high CPU load for large # sources?
...erviced by a different thread? How many threads would I have then? My threadpool is set at 20. Could it be that the poll/select timeout of 250 in each thread causes Icecast to effectively busy-wait, since with 20 threads the actual time between poll/select system calls is reduced to ~10 ms, the timeslice of Linux? Here's the relevant part of my configuration file:

    <limits>
        <clients>2000</clients>
        <sources>100</sources>
        <threadpool>20</threadpool>
        <queue-size>512000</queue-size>
        <client-timeout>...
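For what it's worth, the arithmetic supports that suspicion, assuming the timeout is in milliseconds and the 20 threads' polls end up evenly staggered: 250 ms / 20 threads = 12.5 ms between successive poll/select wakeups, which is indeed on the order of the ~10 ms timeslice mentioned.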
2011 Sep 02
1
determine the latency characteristics of a VM automatically
...most time latency-sensitive operations are initiated with an interrupt, so a pending interrupt generally means that there is a latency-sensitive operation waiting to happen. I remember you said your idea was to have the scheduler look at the historical rate of interrupts and determine a preemption timeslice based on those. I know your general idea, but could you talk more about it? What's more, I wonder if interrupts alone can infer the workload type? In my opinion, a pending interrupt indicates there is an operation to handle, but it may not be latency-sensitive. Some common I/O operations, e.g...
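The idea being asked about can be modeled in a few lines. In the toy sketch below, a decaying average of each VCPU's interrupt rate feeds a rate-to-timeslice table; the smoothing factor, thresholds, and resulting slice lengths are all invented for illustration.

    #include <stdio.h>

    struct vcpu_stats {
        double irq_rate;   /* decayed interrupts per second */
    };

    /* Fold a new observation into the decaying average. */
    static void account_irqs(struct vcpu_stats *v, double irqs, double secs)
    {
        const double alpha = 0.25;   /* smoothing factor (assumed) */
        v->irq_rate = (1 - alpha) * v->irq_rate + alpha * (irqs / secs);
    }

    /* Map the historical interrupt rate to a preemption timeslice. */
    static unsigned int pick_timeslice_ms(const struct vcpu_stats *v)
    {
        if (v->irq_rate > 1000) return 1;   /* very interrupt-heavy */
        if (v->irq_rate > 100)  return 5;
        return 30;                          /* assumed default slice */
    }

    int main(void)
    {
        struct vcpu_stats v = { 0 };
        account_irqs(&v, 5000, 1.0);        /* 5000 interrupts in 1 s */
        printf("slice: %u ms\n", pick_timeslice_ms(&v));
        return 0;
    }

The poster's objection lands squarely on pick_timeslice_ms: a high interrupt rate shows there is work pending, not that the work is latency-sensitive, so any such mapping risks misclassifying ordinary I/O.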
2011 Jul 07
1
Select element out of several ncdf variables
Hi there, I'm working with ncdf data. I have a different dataset for each of 4 runs, 4 seasons and 3 timeslices (48 datasets in total). The datasets have the following dimensions: 96 longitudes, 48 latitudes and 30 time steps. To read them all in, I wrote the following loop:

    runs <- c("03","04","05","06")
    years <- c(1851,1961,2061)
    seasons <- c("DJF"...
2016 Nov 15
1
[GIT PULL v2 1/5] processor.h: introduce cpu_relax_yield
...res cpu_relax can add some latency. > >> For example on power, sparc64 and arc, cpu_relax can shift the CPU > >> towards other hardware threads in an SMT environment. > >> On s390 cpu_relax does even more: it uses a hypercall to the > >> hypervisor to give up the timeslice. > >> In contrast to the SMT yielding this can result in larger latencies. > >> In some places this latency is unwanted, so another variant > >> "cpu_relax_lowlatency" was introduced. Before this is used in more > >> and more places, let's revert the log...
2007 Apr 18
0
[PATCH 2/6] Paravirt CPU hypercall batching mode
...de, flushing any hypercalls made here.
+ * This must be done before restoring TLS segments so
+ * the GDT and LDT are properly updated, and must be
+ * done before math_state_restore, so the TS bit is up
+ * to date.
+ */
+ arch_leave_lazy_cpu_mode();
+
+ /* If the task has used fpu the last 5 timeslices, just do a full
+ * restore of the math state immediately to avoid the trap; the
+ * chances of needing FPU soon are obviously high now
+ */
+ if (next_p->fpu_counter > 5)
+ math_state_restore();
+
+ /*
 * Restore %fs if needed.
 *
 * Glibc normally makes %fs be zero.
@@ -673,22 +6...
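Stripped of its context-switch surroundings, the heuristic in that hunk restates cleanly as standalone code; the sketch below uses a stub in place of the kernel's math_state_restore() and ignores the TS-bit bookkeeping.

    #include <stdio.h>

    struct task {
        int fpu_counter;   /* consecutive timeslices with FPU use */
    };

    static void restore_fpu_now(void) { puts("eager FPU restore"); }

    /* If the incoming task used the FPU in each of its last few
     * timeslices, the device-not-available trap is likely anyway, so
     * pay the restore cost up front; otherwise restore lazily via the
     * trap on first FPU use. */
    static void switch_to(struct task *next)
    {
        if (next->fpu_counter > 5)
            restore_fpu_now();
    }

    int main(void)
    {
        struct task hot = { .fpu_counter = 7 };
        switch_to(&hot);
        return 0;
    }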
2007 Apr 18
2
[PATCH 2/5] Paravirt cpu batching.patch
...here.
+ * This must be done before restoring TLS segments so
+ * the GDT and LDT are properly updated, and must be
+ * done before math_state_restore, so the TS bit is up
+ * to date.
+ */
+ arch_leave_lazy_cpu_mode();
+
+ disable_tsc(prev_p, next_p);
+
+ /* If the task has used fpu the last 5 timeslices, just do a full
+ * restore of the math state immediately to avoid the trap; the
+ * chances of needing FPU soon are obviously high now
+ */
+ if (next_p->fpu_counter > 5)
+ math_state_restore();
+
+ /*
 * Restore %fs if needed.
 *
 * Glibc normally makes %fs be zero.
@@ -673,22 +7...
2007 Apr 18
2
Stolen and degraded time and schedulers
...acerbates this. If you have a busy machine running multiple virtual CPUs, then each VCPU may only get a small proportion of the total available CPU time. If the kernel's scheduler asserts that "you were just scheduled for 1ms, therefore you made 1ms of progress", then many timeslices will effectively end up being 1ms of 0 MHz CPU - because the VCPU wasn't scheduled and the real CPU was doing something else. So how do we deal with this? Basically we need a clock which measures "CPU work units", and have the scheduler use this clock. A "CPU work unit" clo...
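A minimal sketch of such a clock, assuming the hypervisor exposes per-slice stolen time (as modern steal-time accounting does); the function and type names are invented.

    #include <stdio.h>

    typedef unsigned long long ns_t;

    /* Charge the scheduler only for time the VCPU actually ran: wall
     * time in the slice minus time stolen by the hypervisor. */
    static ns_t work_units(ns_t slice_ns, ns_t stolen_ns)
    {
        return slice_ns > stolen_ns ? slice_ns - stolen_ns : 0;
    }

    int main(void)
    {
        /* "Scheduled" for 1 ms, but the real CPU ran someone else for
         * 0.9 ms of it: only 0.1 ms of progress was actually made. */
        printf("%llu ns of real work\n", work_units(1000000ULL, 900000ULL));
        return 0;
    }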
2006 Sep 04
7
Xeon 5160 vs 5080
Chip   Clock    HT   Cache  Bus Speed
---------------------------------------------------------
5080   3.7 GHz  YES  2MB    1066 MHz
5160   3.0 GHz  NO   4MB    1333 MHz

Are the extra 0.7 GHz and HT worth more than the 4MB cache and higher bus speed? The application is VoIP, so there is not a lot of I/O, so I would not think bus speed would matter. I am finding mixed information on HT; some say it is great, others say it