search for: timeslices

Displaying 20 results from an estimated 99 matches for "timeslices".

2013 Nov 13
3
[Patch] credit: Update other parameters when setting tslice_ms
From: Nate Studer <nate.studer@dornerworks.com> Add a utility function to update the rest of the timeslice accounting fields when updating the timeslice of the credit scheduler, so that capped CPUs behave correctly. Before this patch, changing the timeslice to a value higher than the default would result in a domain not utilizing its full capacity, and changing the timeslice to a value lower
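The point of the patch is that a timeslice change has to be propagated into the accounting fields derived from it. Below is a minimal standalone C sketch of that pattern; the struct, field names, default values and conversion factor are illustrative assumptions, not the actual Xen credit scheduler code.

#include <stdio.h>

/* Hypothetical subset of the credit scheduler's per-timeslice accounting. */
struct credit_params {
    unsigned int tslice_ms;          /* scheduler timeslice                */
    unsigned int ticks_per_tslice;   /* accounting ticks in one timeslice  */
    unsigned int credits_per_tick;   /* credit burned per accounting tick  */
};

#define CREDITS_PER_MS 10            /* assumed conversion factor */

/* Update the timeslice and, in the same place, every field derived from it,
 * so capped domains are never throttled against stale values. */
static void set_tslice(struct credit_params *p, unsigned int tslice_ms,
                       unsigned int ticks_per_tslice)
{
    p->tslice_ms = tslice_ms;
    p->ticks_per_tslice = ticks_per_tslice;
    p->credits_per_tick = (tslice_ms * CREDITS_PER_MS) / ticks_per_tslice;
}

int main(void)
{
    struct credit_params p;
    set_tslice(&p, 30, 3);           /* assumed default-like values */
    printf("credit per tick: %u\n", p.credits_per_tick);
    set_tslice(&p, 60, 3);           /* longer timeslice: derived field follows */
    printf("credit per tick: %u\n", p.credits_per_tick);
    return 0;
}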
2011 Sep 01
4
[PATCH] xen,credit1: Add variable timeslice
Add a xen command-line parameter, sched_credit_tslice_ms, to set the timeslice of the credit1 scheduler. Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com> diff -r 4a4882df5649 -r 782284c5b1bc xen/common/sched_credit.c --- a/xen/common/sched_credit.c Wed Aug 31 15:23:49 2011 +0100 +++ b/xen/common/sched_credit.c Thu Sep 01 16:29:50 2011 +0100 @@ -41,15 +41,9 @@ */ #define
2006 Mar 02
5
Milliwatt Analyzer available
....666666667 Hz, 500 Hz, ... Furthermore the application computes the ripple on that tone. In order to detect audio gaps and short noise on the line, one can define a threshold and a timeslice duration (typically 1 s to 0.1 s), and the application will compute the ripple for each timeslice and count the timeslices with a ripple greater than the given threshold. Thus the application is a tool to verify the line quality, e.g. for least-cost-but-not-too-bad-line routings. For convenience, Mwanalyze also generates a tone of the frequency it analyzes. Thus bidirectional operation, and testing for frequencies other...
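As a rough illustration of the counting described in that snippet, here is a small standalone C sketch: it splits a sampled level into fixed-length timeslices, takes the peak-to-peak variation of each slice as its "ripple", and counts the slices above a threshold. The ripple definition, sample data and parameters are assumptions for illustration, not Mwanalyze's actual algorithm.

#include <stdio.h>

/* Count timeslices whose ripple (peak-to-peak level variation) exceeds
 * a threshold, over a buffer of n level samples split into slices of
 * slice_len samples each. */
static int count_bad_slices(const double *samples, int n,
                            int slice_len, double threshold)
{
    int bad = 0;
    for (int start = 0; start + slice_len <= n; start += slice_len) {
        double lo = samples[start], hi = samples[start];
        for (int i = start + 1; i < start + slice_len; i++) {
            if (samples[i] < lo) lo = samples[i];
            if (samples[i] > hi) hi = samples[i];
        }
        if (hi - lo > threshold)     /* this slice is noisy or has a gap */
            bad++;
    }
    return bad;
}

int main(void)
{
    /* Two slices of four samples each; the second contains a dropout. */
    double level[] = { 1.0, 1.01, 0.99, 1.0, 1.6, 0.4, 1.0, 1.0 };
    printf("slices over threshold: %d\n", count_bad_slices(level, 8, 4, 0.1));
    return 0;
}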
2005 Jan 05
3
Sharing/splitting bandwidth on a link while bandwidth of the link is variable (or unknown) ?
Hello, I want to share/split bandwidth on a link with unknown bandwidth. I want to share/split bandwidth exactly (for example: FTP 30%, HTTP 20% or 30% for a group of PCs, and so forth). The "Traffic-Control-HOWTO" says that the PRIO scheduler is an ideal match for "Handling a link with a variable (or unknown) bandwidth". But the PRIO scheduler cannot exactly share/split
2016 Nov 15
2
[GIT PULL v2 1/5] processor.h: introduce cpu_relax_yield
On Tue, Oct 25, 2016 at 11:03:11AM +0200, Christian Borntraeger wrote: > For spinning loops people often use barrier() or cpu_relax(). > For most architectures cpu_relax and barrier are the same, but on > some architectures cpu_relax can add some latency. > For example on power, sparc64 and arc, cpu_relax can shift the CPU > towards other hardware threads in an SMT environment.
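For readers who have not seen the pattern under discussion, the busy-wait loop in question looks roughly like the standalone userspace sketch below. The my_barrier()/my_cpu_relax() macros are stand-ins defined here for illustration; they are not the kernel's barrier()/cpu_relax(), and the PAUSE mapping is only an x86 assumption.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Userspace stand-ins: a pure compiler barrier versus a barrier that also
 * emits a CPU "relax" hint (PAUSE on x86), mirroring the distinction the
 * thread is discussing. */
#define my_barrier()   __asm__ __volatile__("" ::: "memory")
#if defined(__x86_64__) || defined(__i386__)
#define my_cpu_relax() __asm__ __volatile__("pause" ::: "memory")
#else
#define my_cpu_relax() my_barrier()
#endif

static atomic_int flag;

static void *setter(void *arg)
{
    (void)arg;
    atomic_store(&flag, 1);          /* release the spinner */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, setter, NULL);

    /* The spinning loop: poll until the flag is set, hinting the CPU on
     * every iteration. */
    while (!atomic_load(&flag))
        my_cpu_relax();

    pthread_join(t, NULL);
    puts("flag observed");
    return 0;
}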
2014 May 12
3
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-07 11:01-0400, Waiman Long: > From: Peter Zijlstra <peterz at infradead.org> > > Because the qspinlock needs to touch a second cacheline, add a pending > bit and allow a single in-word spinner before we punt to the second > cacheline. I think there is an unwanted scenario on virtual machines: 1) VCPU sets the pending bit and starts spinning. 2) Pending VCPU gets
2005 Mar 07
2
high CPU load for large # sources?
Hi all, I have an icecast setup with 20+ sources. During peak times some 20 sources will be connected, with a total of some 250 listeners more or less equally divided over the 20 sources. All streams are running at a measly 16 kbps. There is enough bandwidth to/from the server. During these peak times I see very high CPU usage for icecast, 98-99%. The system I'm running is an Intel Celeron
2011 Sep 02
1
determine the latency characteristics of a VM automatically
Hi George, Two months ago, we talked about how to reduce the scheduling latency for a specific VM which runs a mixed workload, where the boost mechanism cannot work well. I have tried some methods to reduce the scheduling latency for some assumed latency-sensitive VMs and made some progress on it. Now I hope to make it work on demand. That is to say, I hope to get the scheduler to determine the
2011 Jul 07
1
Select element out of several ncdf variables
Hi there, I'm working with ncdf data. I have a different dataset for each of 4 runs, 4 seasons and 3 timeslices (48 datasets in total). The datasets have the following dimensions: 96 longitudes, 48 latitudes and 30 time steps. To read all of them in, I wrote the following loop: runs <- c("03","04","05","06") years <- c(1851,1961,2061) seasons <- c("DJF"...
2016 Nov 15
1
[GIT PULL v2 1/5] processor.h: introduce cpu_relax_yield
On Tue, Nov 15, 2016 at 02:19:53PM +0100, Christian Borntraeger wrote: > On 11/15/2016 01:30 PM, Russell King - ARM Linux wrote: > > On Tue, Oct 25, 2016 at 11:03:11AM +0200, Christian Borntraeger wrote: > >> For spinning loops people do often use barrier() or cpu_relax(). > >> For most architectures cpu_relax and barrier are the same, but on > >> some
2007 Apr 18
0
[PATCH 2/6] Paravirt CPU hypercall batching mode
...de, flushing any hypercalls made here. + * This must be done before restoring TLS segments so + * the GDT and LDT are properly updated, and must be + * done before math_state_restore, so the TS bit is up + * to date. + */ + arch_leave_lazy_cpu_mode(); + + /* If the task has used fpu the last 5 timeslices, just do a full + * restore of the math state immediately to avoid the trap; the + * chances of needing FPU soon are obviously high now + */ + if (next_p->fpu_counter > 5) + math_state_restore(); + + /* * Restore %fs if needed. * * Glibc normally makes %fs be zero. @@ -673,22 +69...
2007 Apr 18
2
[PATCH 2/5] Paravirt cpu batching.patch
...here. + * This must be done before restoring TLS segments so + * the GDT and LDT are properly updated, and must be + * done before math_state_restore, so the TS bit is up + * to date. + */ + arch_leave_lazy_cpu_mode(); + + disable_tsc(prev_p, next_p); + + /* If the task has used fpu the last 5 timeslices, just do a full + * restore of the math state immediately to avoid the trap; the + * chances of needing FPU soon are obviously high now + */ + if (next_p->fpu_counter > 5) + math_state_restore(); + + /* * Restore %fs if needed. * * Glibc normally makes %fs be zero. @@ -673,22 +70...
2007 Apr 18
2
Stolen and degraded time and schedulers
...acerbates this. If you have a busy machine running multiple virtual CPUs, then each VCPU may only get a small proportion of the total amount of available CPU time. If the kernel's scheduler asserts that "you were just scheduled for 1ms, therefore you made 1ms of progress", then many timeslices will effectively end up being 1ms of 0 MHz CPU - because the VCPU wasn't scheduled and the real CPU was doing something else. So how to deal with this? Basically we need a clock which measures "CPU work units", and have the scheduler use this clock. A "CPU work unit" cloc...
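To make the accounting gap concrete, here is a toy C calculation along the lines of that argument: one 1 ms timeslice is charged both by wall clock and after subtracting an assumed stolen portion. The numbers and the simple subtraction are illustrative assumptions, not a proposed kernel interface.

#include <stdio.h>

int main(void)
{
    unsigned long slice_ns  = 1000000;   /* 1 ms timeslice, wall clock         */
    unsigned long stolen_ns =  700000;   /* time the host spent running others */

    /* What a wall-clock scheduler records versus what a "CPU work unit"
     * clock would record for the same timeslice. */
    unsigned long charged_wall = slice_ns;
    unsigned long charged_work = slice_ns - stolen_ns;

    printf("wall-clock charge: %lu ns, work-unit charge: %lu ns\n",
           charged_wall, charged_work);
    return 0;
}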
2006 Sep 04
7
Xeon 5160 vs 5080
Chip  Clock    HT   Cache  Bus Speed
----  -------  ---  -----  ---------
5080  3.7 GHz  YES  2 MB   1066 MHz
5160  3.0 GHz  NO   4 MB   1333 MHz

Is the extra 0.7 GHz and HT worth more than the 4 MB cache and higher bus speed? The application is VoIP, so there is not a lot of I/O, so I would not think bus speed would matter. I am finding mixed information on HT: some say it is great, others say it