Displaying 20 results from an estimated 100 matches similar to: "[PATCH] x86/S3: Restore broken vcpu affinity on resume (v3)"
2013 Sep 17
1
[PATCH v2] xen: sched_credit: filter node-affinity mask against online cpus
in _csched_cpu_pick(), as not doing so may result in the domain's
node-affinity mask (as retrieved by csched_balance_cpumask())
and online mask (as retrieved by cpupool_scheduler_cpumask())
having an empty intersection.
Therefore, when attempting a node-affinity load balancing step
and running this:
...
/* Pick an online CPU from the proper affinity mask */
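In rough terms the fix amounts to something like the sketch below. This is not the actual patch; the balance-step constants and the surrounding variables are assumptions based on the sched_credit code of that era.

/*
 * Sketch: filter the node-affinity mask against the CPUs that are
 * online in the vCPU's cpupool before picking a CPU, and fall back
 * to plain vCPU affinity if the intersection turns out empty.
 */
cpumask_t cpus;
int cpu;

/* Node-affinity balancing step: start from the domain's node-affinity. */
csched_balance_cpumask(vc, CSCHED_BALANCE_NODE_AFFINITY, &cpus);

/* Keep only CPUs that are actually online in this cpupool. */
cpumask_and(&cpus, &cpus, cpupool_scheduler_cpumask(vc->domain->cpupool));

/* Nothing left?  Fall back to the plain vCPU-affinity step instead. */
if ( cpumask_empty(&cpus) )
{
    csched_balance_cpumask(vc, CSCHED_BALANCE_CPU_AFFINITY, &cpus);
    cpumask_and(&cpus, &cpus, cpupool_scheduler_cpumask(vc->domain->cpupool));
}

/* Pick an online CPU from the proper affinity mask. */
cpu = cpumask_cycle(vc->processor, &cpus);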
2010 Aug 09
2
[PATCH 0 of 2] Scheduler: Implement yield for credit scheduler
As discussed in a previous e-mail, this patch series implements yield
for the credit scheduler. This allows a VM to actually yield (give up
the cpu to another VM) when it wants to. This has been shown to be
effective when used in the spinlock code to avoid wasting time
spinning when another vcpu is not currently scheduled.
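The guest-side use referred to here looks roughly like the sketch below. It is illustrative only: the lock type, spin threshold and function name are assumptions, not the posted series (which changes the hypervisor's credit scheduler so that a yield actually lets another vCPU run).

/* Sketch: a pv spinlock slow path that yields the pCPU after spinning
 * for a while, instead of burning cycles while the lock holder's vCPU
 * is not running. */
#include <xen/interface/sched.h>     /* SCHEDOP_yield */
#include <asm/xen/hypercall.h>       /* HYPERVISOR_sched_op() */

#define SPIN_THRESHOLD 1024          /* illustrative value */

static void pv_spin_lock_slowpath(volatile int *lock)
{
    unsigned int spins = 0;

    while (__sync_lock_test_and_set(lock, 1))     /* test-and-set acquire */
    {
        if (++spins >= SPIN_THRESHOLD)
        {
            HYPERVISOR_sched_op(SCHEDOP_yield, NULL);  /* give up the pCPU */
            spins = 0;
        }
    }
}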
2013 Jul 28
8
powerdown problem on XEN
Hi guys,
I have a problem with powering down my system under the Xen hypervisor.
System details are as follows:
Gentoo Linux, x86_64
Xen version 4.2.2
hardened Linux kernel 3.9.5 as dom0
Xeon E3 1260L processor (VT-d capable)
32GB ECC RAM which has been thoroughly tested - so it should be OK
When I issue "shutdown -h now" from dom0, the system usually reboots
instead of turning off
2013 May 01
0
[xen-unstable test] 17860: regressions - FAIL
flight 17860 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/17860/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-pvops 4 kernel-build fail REGR. vs. 17854
Tests which did not succeed, but are not blocking:
test-amd64-amd64-xl-pcipt-intel 1 xen-build-check(1)
2014 Feb 26
2
OT: How to capture taskset command output
Hi all,
I am trying to set processor affinity for a specific process using a
shell script without result. Script:
#!/bin/sh -x
cpu_affinity_ok="2"
cpu_affinity=$(taskset -p -c "$(cat /tmp/test.pid)" | awk '{print $6}')
if [ -f /tmp/test.pid ]; then
if [ "$cpu_affinity" = "$cpu_affinity_ok" ]; then
exit 0
else
taskset -p -c 2
2007 Jun 27
1
[PATCH 7/10] SMP support to Xen PM
Add SMP support to Xen host S3
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
diff -r 1539f5a2b3ba xen/arch/x86/acpi/power.c
--- a/xen/arch/x86/acpi/power.c Tue Jun 26 18:05:22 2007 -0400
+++ b/xen/arch/x86/acpi/power.c Tue Jun 26 19:44:36 2007 -0400
@@ -25,6 +25,7 @@
#include <xen/sched.h>
#include <xen/domain.h>
#include <xen/console.h>
+#include
2008 Aug 06
3
[PATCH RFC] do_settime is backwards?!
While digging through the time code, I found something very strange
in do_settime:
x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
y = do_div(x, 1000000000);
spin_lock(&wc_lock);
wc_sec = _wc_sec = (u32)x;
wc_nsec = _wc_nsec = (u32)y;
spin_unlock(&wc_lock);
The value "x" appears to be the number of nanoseconds, while
the value
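For readers unfamiliar with the macro: do_div(n, base) divides n in place and returns the remainder, so after the call above x holds whole seconds and y the leftover nanoseconds. A minimal plain-C stand-in (illustration only, not the kernel macro):

/* Rough equivalent of do_div(): the dividend is replaced by the
 * quotient and the remainder is returned. */
#include <stdint.h>

static inline uint32_t do_div_like(uint64_t *n, uint32_t base)
{
    uint32_t remainder = (uint32_t)(*n % base);

    *n /= base;
    return remainder;
}

/* So in the snippet above:
 *   x starts as total nanoseconds;
 *   y = do_div(x, 1000000000)  =>  x becomes seconds, y nanoseconds. */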
2013 Mar 15
2
strange phenomenon on CPU affinity
Hello,
My testing machine has 2 quad-core CPUs (they support hyperthreading,
but I disabled it in the BIOS). I use Xen 4.0.1 as the hypervisor. When I use 8
VMs to conduct a test, the CPU affinity of the VMs is very strange, like this:
vm_name vcpu_num cpu_affinity
Domain-0 8 any
VM1 4 1,3,5,7
VM2 4 1,3,5,7
VM3 4 1,3,5,7
VM4 4
2012 Oct 17
28
Xen PVM: Strange lockups when running PostgreSQL load
I am currently looking at a bug report[1] which is happening when
a Xen PVM guest with multiple VCPUs is running a high IO database
load (a test script is available in the bug report).
While experimenting, it seems that this happens (or becomes more
likely) when the number of VCPUs is 8 or higher (though I have
not tried 6, only 2 and 4); having autogroup enabled seems to
make it more likely, too
2012 Sep 18
6
[PATCH 2/5] Xen/MCE: vMCE injection
Xen/MCE: vMCE injection
In our testing of win8 guest MCE, we found a bug: no matter what SRAO/SRAR
error Xen injects into the win8 guest, it always reboots.
The root cause is that the current Xen vMCE logic injects vMCE# only to vcpu0, which is
not correct for Intel MCE (under the Intel architecture, hardware generates MCE# to all CPUs).
This patch fixes the vMCE injection bug by injecting vMCE# to all vcpus.
Signed-off-by: Liu,
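The shape of the change is roughly the sketch below. It is not the posted patch: the per-vCPU injection helper is hypothetical; only the for_each_vcpu() iteration over the target domain d is the point being made.

/* Sketch: broadcast the vMCE to every vCPU of domain d instead of
 * only vcpu0, mirroring how the hardware raises MCE# on all CPUs. */
struct vcpu *v;

for_each_vcpu ( d, v )
{
    if ( inject_vmce_to_vcpu(v) )    /* hypothetical per-vCPU helper */
        gdprintk(XENLOG_ERR, "vMCE injection to vCPU%d failed\n",
                 v->vcpu_id);
}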
2012 Dec 12
2
[PATCH v7 1/2] xen: unify domain locking in domctl code
These two patches were originally part of the XSM series that I have
posted, and remain prerequisites for that series. However, they are
independent of the XSM changes and are a useful simplification
regardless of the use of XSM.
The Acked-bys on these patches were provided before rebasing them over
the copyback changes in 26268:1b72138bddda, which had minor conflicts
that I resolved.
[PATCH
2007 Aug 30
0
[PATCH][Retry 1] 1/4: cpufreq/PowerNow! in Xen: Xen timer changes
Enable cpufreq support in Xen for AMD Opteron processors by:
1) Allowing the PowerNow! driver in dom0 to write to the PowerNow!
MSRs.
2) Adding the cpufreq notifier chain to time-xen.c in dom0.
On a frequency change, a platform hypercall is performed to
scale the frequency multiplier in the hypervisor.
3) Adding a platform hypercall to the hypervisor to scale
the frequency multiplier and reset
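The dom0 side of item 2) would look roughly like the sketch below. This is not the posted patch: xen_scale_freq_multiplier() stands in for the platform hypercall wrapper the series adds; only the cpufreq notifier plumbing is standard kernel API.

/* Sketch: react to completed frequency transitions in dom0 and tell
 * the hypervisor so it can rescale its frequency multiplier. */
#include <linux/cpufreq.h>
#include <linux/notifier.h>

static int time_cpufreq_notifier(struct notifier_block *nb,
                                 unsigned long val, void *data)
{
    struct cpufreq_freqs *freq = data;

    if (val == CPUFREQ_POSTCHANGE)
        xen_scale_freq_multiplier(freq->cpu, freq->new);  /* hypothetical wrapper */

    return NOTIFY_OK;
}

static struct notifier_block time_cpufreq_nb = {
    .notifier_call = time_cpufreq_notifier,
};

/* At init time:
 *   cpufreq_register_notifier(&time_cpufreq_nb, CPUFREQ_TRANSITION_NOTIFIER);
 */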
2009 Apr 06
5
Config to set CPU affinity and distribute interrupts
Hi,
I have some problems configuring the Xen I've installed (3.3.1). The computer is an Intel Core 2 Duo, I'm using Ubuntu 8.10 and have Linux in my dom0 and WinXP Pro in my domU.
I have two cores and I'd like to set the affinity of dom0 to cpu0 and domU to cpu1, but I haven't found a way of making this permanent. I've set cpus=1 in the domU config
2012 Dec 03
17
[PATCH 0 of 3] xen: sched_credit: fix tickling and add some tracing
Hello,
This small series deals with some weirdness in the mechanism with which the
credit scheduler chooses what PCPU to tickle upon a VCPU wake-up. Details are
available in the changelog of the first patch.
The new approach has been extensively benchmarked and proved itself either
beneficial or harmless. That means it does not introduce any significant amount
of overhead and/or performances
2008 Sep 05
0
3.2.1+ HVM + HAP + NUMA - Poor Memory Performance
Hi Everyone,
I am running 3.2.1 on CentOS 5.2 with HAP enabled, NUMA enabled, ACPI
enabled and the dom0 allocated 512MB. I have set up a single-core 1GB VM
for performance testing under Windows 2008 Server. Most CPU results are
within a few percent of the theoretical max, but memory performance is about
half what I expected.
I get 3.22GB/s Sandra 2009 memory performance for a single Opteron
8350
2008 Mar 19
0
RE: [Xen-ia64-devel] New error trying to create a domain (using latest xend-unstable)
Hi Keir,
The changeset CS# 17131, which I wrote to bind a guest to a NUMA node via cpu affinity,
missed one condition that exists on some machines, where a node has no
cpus but only memory. Under this condition it will fail to set
cpu_affinity because the resulting cpu list is empty. I handle this condition in
the new patch and slightly change the method used to find a suitable
node to bind the guest to. When
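The guard being described is roughly the following sketch. It is not the actual changeset; the helper names follow the old-style Xen NUMA/cpumask API and the selection logic is deliberately simplified.

/* Sketch: when picking a node to bind the guest to, skip nodes whose
 * cpu mask is empty (memory-only nodes), so we never try to set an
 * empty cpu_affinity. */
int node, chosen = -1;
cpumask_t node_cpus;

for_each_online_node ( node )
{
    node_cpus = node_to_cpumask(node);   /* CPUs belonging to this node */

    if ( cpus_empty(node_cpus) )
        continue;                        /* memory-only node: nothing to pin to */

    chosen = node;                       /* real code would also weigh free memory */
    break;
}

/* Only if chosen >= 0 do we go on and set the guest's cpu_affinity. */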
2011 Dec 20
0
sedf: remove useless tracing printk and harmonize comments style.
sched_sedf.c used to have its own mechanism for producing tracing-like
information (domain block, wakeup, etc.). Nowadays, even with a
not-so-high number of pCPUs/vCPUs, just trying to enable this makes
the serial console completely unusable, produces tons of logging that is very hard to
parse and interpret, and can easily livelock Dom0. Moreover, pretty
much the same result this is struggling to
2024 Jul 23
1
NSD 4.10.1rc2 pre-release
Hi,
NSD 4.10.1rc2 pre-release is available:
https://nlnetlabs.nl/downloads/nsd/nsd-4.10.1rc2.tar.gz
sha256 ce2e82bc673aeff3a71aeb422fa38fb8db0a591edb76c13b0e4dde83ec8253e9
pgp https://nlnetlabs.nl/downloads/nsd/nsd-4.10.1rc2.tar.gz.asc
Version 4.10.1 consists primarily of bug fixes.
@bilias implemented mutual TLS authentication for zone transfers.
Please consult the nsd.conf manual for details
2013 Mar 13
67
High CPU temp, suspend problem - xen 4.1.5-pre, linux 3.7.x
Hi,
I've still got problems with ACPI(?) on Xen. After some system startups or
resumes the CPU temperature goes high although all domUs (and dom0) are idle. On
a "good" system startup it is about 50-55C, on a "bad" one above 67C (most of the time
above 70C). I've noticed a difference in the C-states reported by Xen (attached
files). On "bad" startups, in addition, suspend
2011 Nov 08
48
Need help with fixing the Xen waitqueue feature
The patch 'mem_event: use wait queue when ring is full' I just sent out
makes use of the waitqueue feature. There are two issues I get with the
change applied:
I think I got the logic right, and in my testing vcpu->pause_count drops
to zero in p2m_mem_paging_resume(). But for some reason the vcpu does
not make progress after the first wakeup. In my debugging there is one