Displaying 20 results from an estimated 28 matches for "runqueue".
2008 Dec 08
10
[PATCH] Accurate vcpu weighting for credit scheduler
Hi,
This patch aims to make vcpu weighting accurate
for CPU-intensive jobs.
The cause of the problem is that
the vcpu round-robin queue lets small-weight vcpus
block large-weight vcpus.
For example, assume the following case in a 2-pcpu environment
(with 4 domains, each having 2 vcpus).
dom1 vcpu0,1 w128 credit 4
dom2 vcpu0,1 w128 credit 4
dom3 vcpu0,1 w256 credit 8
dom4 vcpu0,1 w512 credit 15
2005 May 17
8
scheduler independent forced vcpu selection
...ing to figure out how to get
the scheduler (I've only played with bvt) to run the vcpu passed in the
hypercall (after some validation), but I've run into various bad-state
situations (do_softirq pending != 0 assert, '!active_ac_timer(timer)'
failed, and __task_on_runqueue(prev) failed), which tells me I
don't fully understand all of the book-keeping that is needed. Has
anyone thought about how to do this with either BVT or the new EDF
scheduler?
--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253 T/L: 678-9253...
2010 Oct 26
3
[PATCH 0 of 3] credit2 updates
Address some credit2 issues. This patch series, along with the recent
changes to the cpupools interface, should address some of the strange
credit2 instability.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
2008 Sep 09
29
[PATCH 1/4] CPU online/offline support in Xen
This patch implements the cpu offline feature.
Best Regards
Haitao Shan
2007 Mar 01
2
SMP on a HP DL-320 G2
Hi gang!
At work I have a DL320 G2 machine I use as my desktop (I know, weird!...).
Back when I ran RHEL WS 2.1 on it, it always ran an SMP kernel because it
has a HyperThread-capable processor.
When I installed (fresh from scratch) Centos 4.4 on it a while back, though,
Centos installed only the UP kernel.
I've looked in the BIOS for settings to enable/disable HT support and
I don't
2002 Oct 03
1
kjournald tuning
...reached, kjournald shows D
state for several seconds. During that same time, block-out "bo" in vmstat
consistently shows 0. kjournald is choking. Or something. Also, this appears
to get a lot of processes looping on write somewhere in the webserver.
Looping on the lockfile, maybe? The runqueue often gets huge (> 40), and
apache spirals up to all 75 processes in use. These pauses seem to occur at
pretty exact intervals. Something like every 240 seconds. With the disk in
question mounted as ext2 I don't get the issue. At all.
The disk is IDE, but fairly fast, and we're using...
2008 Jun 19
15
Power aware credit scheduler
...better
power saving ability with negligible performance impact, following
areas may be tweaked and listed here for comments first.
The goal is not to blindly save power at the expense of performance; e.g.
we don't want to prevent migration when there are free cpus while
some runqueues have pending work. But when free computing power exceeds
the existing requirement, a power-aware policy can be pushed to
choose a less power-intrusive decision. Of course, even in the latter
case, it's controllable with a scheduler parameter like
csched_private.power exposed to the user.
----
a) when there...
2010 Aug 12
1
Question regarding Hypervisor_sched_op function
...question regarding this -
1. I understand that a guest VM can use the above function with "yield", in
order to relinquish CPU time to other guests with running tasks.
a) So when the guest VM makes this call does it save/persist any scheduling
information about its current processes like runqueue, process state -
runnable, ready etc ? If it does can someone point to the code I can look
at.
b) When the yielded guest VM is scheduled back by the hypervisor, does it use
the previous state of the processes which were yielded?
2. Is there a way in which the Hypervisor_sched_op call can be made fr...
2010 Aug 09
2
[PATCH 0 of 2] Scheduler: Implement yield for credit scheduler
As discussed in a previous e-mail, this patch series implements yield
for the credit scheduler. This allows a VM to actually yield (give up
the cpu to another VM) when it wants to. This has been shown to be
effective when used in the spinlock code to avoid wasting time
spinning when another vcpu is not currently scheduled.
2013 Mar 08
2
[PATCH v2 1/2] credit2: Fix erronous ASSERT
...+++ b/xen/common/sched_credit2.c
@@ -1544,31 +1544,24 @@ csched_runtime(const struct scheduler *ops, int cpu, struct csched_vcpu *snext)
}
}
- /*
- * snext is about to be scheduled; so:
- *
- * 1. if snext->credit were less than 0 when it was taken off the
- * runqueue, then csched_schedule() should have called
- * reset_credit(). So at this point snext->credit must be greater
- * than 0.
- *
- * 2. snext's credit must be greater than or equal to anyone else
- * in the queue, so snext->credit - swait->credit must be greater...
2005 Jul 12
21
Dom0 crashing on x86_64
I am seeing a problem with Dom0 crashing on x86_64 whenever I create a
DomU. I've done some more testing, and it appears that this problem is
somehow related to networking. Dom0 crashes as soon as the networking
services are started when DomU is coming up. As an experiment, I
brought up DomU without networking, and it stayed up. As soon as I
started DomU with networking enabled, however,
2006 Aug 26
2
3.8 update/x86_64 kernel?
After updating a very old x86_64 3.x install to 3.8 it still
has kernel-2.4.21-27.0.2.EL. Then if I repeat the
yum update command, it offers to install kernel 2.4.21-47.EL.ia32e.
Neither of these situations seems quite right.
--
Les Mikesell
lesmikesell at gmail.com
2012 Oct 17
28
Xen PVM: Strange lockups when running PostgreSQL load
...en as a host and with 3.2 and 3.5 kernels as guests.
The pv spinlock assumption I will try to get re-verified by asking
to reproduce under a real load with a kernel that just disables
that. However, the dumps I am looking at really do look odd there.
The first dump I looked at had the spinlock of runqueue[0] being
placed into the per-cpu lock_spinners variable for cpu#0 and
cpu#7 (doing my tests with 8 VCPUs). So apparently both cpus were
waiting on the slow path for it to become free. Though actually
it was free! Now, here is one issue I have in understanding the
dump: the back traces produced in c...
2009 Feb 18
2
New Project Proposal - Please provide comments
Hello all,
We are working on a new project idea for the xen platform. I have summarized
as well as given a detailed description of what we are trying to achieve. I
request your comments on the feasibility of this project. I have also
pointed out some relevant works which I feel are primitive considering the
goal we are trying to achieve. I request you to point me to relevant links
if there are any other works
2006 Mar 13
1
Cannot load wcfxo -- Please help!
....... host bus clock speed is 200.4673 MHz.
cpu: 0, clocks: 2004673, slice: 668224
CPU0<T0:2004672,T1:1336448,D:0,S:668224,C:2004673>
cpu: 1, clocks: 2004673, slice: 668224
CPU1<T0:2004672,T1:668224,D:0,S:668224,C:2004673>
cpu_sibling_map[0] = 1
cpu_sibling_map[1] = 0
mapping CPU#0's runqueue to CPU#1's runqueue.
zapping low mappings.
Process timing init...done.
Starting migration thread for cpu 0
Starting migration thread for cpu 1
PCI: PCI BIOS revision 2.10 entry at 0xfb010, last bus=1
PCI: Using configuration type 1
PCI: Probing PCI hardware
PCI: Ignoring BAR0-3 of IDE controlle...
2020 Nov 03
0
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
...ly(current->kmap_ctrl.idx))
+ __kmap_local_sched_out();
+#endif
+}
+
+static inline void kmap_local_sched_in(void)
+{
+#ifdef CONFIG_KMAP_LOCAL
+ if (unlikely(current->kmap_ctrl.idx))
+ __kmap_local_sched_in();
+#endif
+}
+
/**
* prepare_task_switch - prepare to switch tasks
* @rq: the runqueue preparing to switch
@@ -4075,6 +4091,7 @@ prepare_task_switch(struct rq *rq, struc
perf_event_task_sched_out(prev, next);
rseq_preempt(prev);
fire_sched_out_preempt_notifiers(prev, next);
+ kmap_local_sched_out();
prepare_task(next);
prepare_arch_switch(next);
}
@@ -4141,6 +4158,7 @@ sta...
2011 Dec 20
0
sedf: remove useless tracing printk and harmonize comments style.
...t blocked, dieing,...). The first element
+ * on this list is running on the processor, if the list is empty the idle
+ * task will run. As we are implementing EDF, this list is sorted by deadlines.
*/
DOMAIN_COMPARER(runq, list, d1->deadl_abs, d2->deadl_abs);
static inline void __add_to_runqueue_sort(struct vcpu *v)
{
- PRINT(3,"Adding domain %i.%i (deadl= %"PRIu64") to runq\n",
- v->domain->domain_id, v->vcpu_id, EDOM_INFO(v)->deadl_abs);
list_insert_sort(RUNQ(v->processor), LIST(v), runq_comp);
}
@@ -445,22 +401,21 @@ static int sed...
2008 Jun 16
8
Vcpu allocation for a newly created domU
Hi all,
I am confused about the way a newly created domain is
allocated vcpus.
Initially, during dom0 creation, alloc_vcpu is called to create vcpu
structs for all the available cpus, which are assigned to dom0. But that is
not the case for domU creation.
1. So how will dom0 relinquish/share vcpus to/with a newly created domU?
Does this happen as part of the shared_info page mapping?
2025 Jan 22
5
[PATCH] drm/sched: Use struct for drm_sched_init() params
..._run_job_work(struct work_struct *w)
* drm_sched_init - Init a gpu scheduler instance
*
* @sched: scheduler instance
- * @ops: backend operations for this scheduler
- * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
- * allocated and used
- * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
- * @credit_limit: the number of credits this scheduler can hold from all jobs
- * @hang_limit: number of times to allow a job to hang before dropping it
- * @timeout: timeout value in jiffies for the scheduler
- * @timeout_wq: workqueue to us...