Displaying 20 results from an estimated 20000 matches similar to: "RE: [PATCH] enable port accesses with (almost) full register context"
2007 Feb 13
7
Taken fault at bad CS c000...
Just saw warnings like these:
...
(XEN) printk: 387824 messages suppressed.
(XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003aab
(XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003ab2
(XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003aab
(XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003ab2
...
The warnings only show up when switching to/from X windows within dom0,
and
2008 Nov 27
1
Re: RE: Re: Re: when timer go back in dom0 save and restore or migrate, PV domain hung
F.Y.I
>>> "Tian, Kevin" <kevin.tian@intel.com> 08.11.27. 11:50 >>>Sorry for a
typo. I did mean domU instead of dom0. :-) The point here is that
time_resume will sync to new system time and wall clock at restore, and
thus pv guest should be able to continue... Xen system time is not
wallclock time which just counts up from power up. As Keir points out,
only its
2009 Apr 16
9
Second release candidate for Xen 3.4.0
Folks,
The second release candidate for Xen 3.4.0 is available at
http://xenbits.xensource.com/xen-unstable.hg, tagged as '3.4.0-rc2'.
Please test!
-- Keir
2007 Oct 17
7
[VTD][RESEND]add a timer for the shared interrupt issue for vt-d
Keir,
This is a resend of the patch adding a timeout mechanism to deal with the shared
interrupt issue for VT-d enabled hvm guests.
We modified the patch following your comments last time and made some
other small fixes:
1) We don't touch the locking around hvm_dpci_eoi().
2) Remove HZ from the TIME_OUT_PERIOD macro, which may confuse
others.
3) Add some
2008 Nov 24
10
[PATCH] Dom0 Kernel - Fixes for saving/restoring MSI/MSI-X across Dom0 S3
Hi, Keir,
This patch is a bugfix for saving and restoring MSI/MSI-X across S3. Currently, Dom0's PCI layer unmaps MSI on S3 suspend and maps it back on resume. However, this triggers unexpected behaviors. For example, if the driver still holds the irq at the point of unmapping MSI, Xen will forcibly unbind that pirq. But after resume, we have no mechanism to rebind that pirq. The device
2007 Jan 30
45
[PATCH] Fix softlockup issue after vcpu hotplug
Stamp the softlockup thread earlier, before do_timer, because the
latter is what actually triggers the lockup warning after a
long time offline. Otherwise, I observed softlockup warnings
easily at manual vcpu hot-remove/plug, or when a cancelled suspend
returns into the old context.
One point here is to cover both stolen and blocked time when
comparing against the offline threshold. vcpu hotplug falls into 'stolen'
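For illustration, a minimal sketch of the idea in kernel-style C (the helper name and the 5-second threshold are assumptions, not the actual patch):

#include <linux/sched.h>   /* touch_softlockup_watchdog() */
#include <linux/time.h>    /* NSEC_PER_SEC */

/* If the vcpu was away (time stolen by the hypervisor, or spent
 * blocked) for longer than the softlockup threshold, stamp the
 * watchdog before do_timer() can see the jump and warn. */
static void stamp_if_long_offline(u64 stolen_ns, u64 blocked_ns)
{
    if (stolen_ns + blocked_ns > 5ULL * NSEC_PER_SEC)
        touch_softlockup_watchdog();
}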
2007 Sep 30
6
[VTD][PATCH] a time out mechanism for the shared interrupt issue for vtd
Attached is a patch for interrupts shared between dom0 and an HVM domain
with VT-d.
Most of the problem is caused by the fact that we must inject the interrupt
into both domains, and the
physical interrupt deassertion may then be delayed by the device
assigned to the HVM domain.
The patch adds a timer, and the timeout value is sufficiently large to
tolerate
the delay while waiting for the physical interrupt deassertion.
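As a rough sketch of the mechanism (the Xen timer API calls are real; the handler body, the timeout value, and the names are illustrative, not the actual patch):

#include <xen/time.h>    /* NOW(), MILLISECS() */
#include <xen/timer.h>   /* struct timer, init_timer(), set_timer() */

#define PT_IRQ_TIME_OUT MILLISECS(8)   /* "sufficiently large"; value assumed */

static struct timer pt_irq_timer;

static void pt_irq_time_out(void *data)
{
    /* The guest never EOI'd in time: deassert/unmask the physical
     * line on its behalf so dom0 is not starved of the shared irq. */
}

static void arm_pt_irq_timer(unsigned int cpu)
{
    init_timer(&pt_irq_timer, pt_irq_time_out, NULL, cpu);
    set_timer(&pt_irq_timer, NOW() + PT_IRQ_TIME_OUT);
}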
2008 Feb 03
5
[PATCH] Simplify paging_invlpg when flush is not required.
Simplify paging_invlpg when a flush is not required.
A new 'flush' parameter is added to paging_invlpg, allowing the
caller to specify whether the flush check is required. It's
wasteful to always validate the shadow linear mapping if the caller
doesn't check the return value at all.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Thanks,
Kevin
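A minimal sketch of the interface change (shadow_validate_linear_mapping is a hypothetical stand-in for the mode-specific invlpg hook, not the real Xen code):

#include <xen/sched.h>   /* struct vcpu */

/* Callers that never look at the return value pass flush == 0 and
 * skip the costly shadow-linear-mapping validation entirely. */
int paging_invlpg(struct vcpu *v, unsigned long va, int flush)
{
    if ( !flush )
        return 1;   /* result unused by the caller; nothing to validate */

    return shadow_validate_linear_mapping(v, va);   /* hypothetical hook */
}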
2007 Jun 27
10
[PATCH 6/10] Allow vcpu to pause self
Add a self-pause ability, which is required by vcpu0/dom0 when
running on an AP. This can't be satisfied by the existing interface,
since the new flag also serves as a sync point.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
diff -r d5315422dbc8 xen/common/domain.c
--- a/xen/common/domain.c Mon May 14 18:35:31 2007 -0400
+++ b/xen/common/domain.c Mon May 14 20:21:04 2007 -0400
@@
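A hypothetical sketch of why a new flag is needed: vcpu_pause() on oneself would wait for the caller itself to be descheduled, which deadlocks. Setting a pause flag (which the requester can also poll as a sync point) and then entering the scheduler avoids that. _VPF_paused_self is an invented name, not taken from the patch:

#include <xen/sched.h>     /* current, struct vcpu */
#include <xen/softirq.h>   /* raise_softirq(), SCHEDULE_SOFTIRQ */

static void vcpu_pause_self(void)
{
    set_bit(_VPF_paused_self, &current->pause_flags);   /* invented flag */
    raise_softirq(SCHEDULE_SOFTIRQ);                    /* enter scheduler */
}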
2007 Jul 19
6
Anyone succeeds HVM on latest x86-64 xen
I tried the latest xen and linux-xen staging trees, but failed to run an HVM
domain in an x86-64 environment. domU creation is OK.
However, the weird thing is not the HVM domain itself. Instead, the system
crashed in dom0 context. I once saw, from a stack dump, that
xen's page fault handler was executed on a dom0 stack, which then
caused a nested page fault because it was unable to fetch the vcpu pointer.
2008 Mar 17
12
[PATCH]Fix the bug of guest os installation failure and win2k boot failure
Hi, Keir,
This patch fixes the problem of Linux guest installation failure and Windows 2000 boot failure.
In the early code, we used the vmx_vmexit_handler() -> vmx_io_instruction() function to emulate I/O instructions. But now, we use vmx_vmexit_handler() -> handle_mmio() -> hvm_emulate_one() -> x86_emulate() to emulate I/O instructions. Also, nowadays the realmode
2007 Aug 29
39
[PATCH] 1/2: cpufreq/PowerNow! in Xen: Time and platform changes
Enable cpufreq support in Xen for AMD Opteron processors by:
1) Allowing the PowerNow! driver in dom0 to write to the PowerNow!
MSRs.
2) Adding the cpufreq notifier chain to time-xen.c in dom0.
On a frequency change, a platform hypercall is performed to
scale the frequency multiplier in the hypervisor.
3) Adding a platform hypercall to the hypervisor to scale
the frequency multiplier and reset
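A sketch of item 2, assuming a hypothetical wrapper xen_platform_set_freq() around the new platform hypercall of item 3 (the notifier API itself is the standard Linux one):

#include <linux/cpufreq.h>
#include <linux/notifier.h>

/* On each completed transition, tell the hypervisor the new frequency
 * so it can rescale its time multiplier. */
static int time_cpufreq_notifier(struct notifier_block *nb,
                                 unsigned long event, void *data)
{
    struct cpufreq_freqs *freq = data;

    if (event == CPUFREQ_POSTCHANGE)
        xen_platform_set_freq(freq->cpu, freq->new);   /* hypothetical */

    return NOTIFY_OK;
}

static struct notifier_block time_cpufreq_nb = {
    .notifier_call = time_cpufreq_notifier,
};

/* Registered once at init, e.g. from time-xen.c:
 * cpufreq_register_notifier(&time_cpufreq_nb, CPUFREQ_TRANSITION_NOTIFIER);
 */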
2007 May 30
30
[VTD][patch 0/5] HVM device assignment using vt-d
The following 5 patches are re-submissions of the VT-d patch.
This set of patches has been tested against cs# 15080 and is
now much more mature and has been tested in more environments than
the original patch. Specifically, we have successfully tested
the patch in the following environments:
- 32/64-bit Linux HVM guest
- 32-bit Windows XP/Vista (64-bit should work but was not tested)
-
2008 Nov 25
7
when timer go back in dom0 save and restore or migrate, PV domain hung
Hi,
I find the PV domain hangs when we take these steps:
1. save the PV domain
2. change the system time of the PV domain back
3. restore the PV domain
or
1. migrate a PV domain from Machine A to Machine B
2. the system time of Machine B is slower than Machine A's.
The problem is that wc_sec will change when the system time is changed in dom0 or at restore in a
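A sketch of why a backwards clock can hang the guest (wc_sec/wc_nsec are the real shared_info field names; the helper itself is illustrative): a PV guest's wall time is the epoch offset written by dom0 plus Xen's monotonic system time, so if wc_sec comes back smaller at restore, the sum jumps backwards.

#include <stdint.h>

uint64_t guest_wallclock_ns(uint32_t wc_sec, uint32_t wc_nsec,
                            uint64_t system_time_ns)
{
    /* epoch offset (dom0-written) + monotonic Xen system time */
    return (uint64_t)wc_sec * 1000000000ULL + wc_nsec + system_time_ns;
}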
2008 Mar 27
11
[PATCH 1/5] Add MSI support to XEN
This patch changes pirqs to be per-domain in the xen tree.
Signed-off-by: Jiang Yunhong <yunhong.jiang@intel.com>
Signed-off-by: Shan Haitao <haitao.shan@intel.com>
Best Regards
Shan Haitao
2008 Jul 14
4
FE driver and log dirty
Here's a question about FE drivers and log dirty where live migration
is concerned. Log-dirty mode is used in live migration; it works
for pages dirtied by CPU-issued accesses, but not for the
DMA path (i.e., a BE driver, as discussed here, accessing the pages from
another domain such as dom0). Most frontend drivers don't implement their
own suspend interface (netfront does, when
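A sketch of the gap being described (paging_mode_log_dirty() and paging_mark_dirty() are real Xen helpers; the wrapper and its call site are illustrative): writes reaching a guest page from another domain bypass CPU-side dirty tracking, so the hypervisor must mark the page dirty explicitly.

#include <xen/sched.h>   /* struct domain */

static void mark_foreign_write_dirty(struct domain *d, unsigned long mfn)
{
    if ( paging_mode_log_dirty(d) )
        paging_mark_dirty(d, mfn);   /* record the BE/DMA-path write */
}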
2008 Apr 10
8
[PATCH][RFC]Move PCI Configuration Spaces from Dom0 to Xen
Hi, Keir,
This patch moves reading and writing of PCI configuration spaces
from dom0 to Xen. It also changes the VT-d code so that it can touch the
PCI configuration spaces with the proper lock held.
This will also benefit MSI support in Xen.
Can you give some comments? Thanks!
<<pci_conf_xen.patch>>
Best Regards
Haitao Shan
2006 Apr 21
1
RE: [PATCH]Check the values of MAX_VIRT_CPUS and NR_CPUS for SMP
>From: Keir Fraser
>Sent: 21 April 2006 14:41
>
>
>On 21 Apr 2006, at 02:31, Atsushi SAKAI wrote:
>
>> But the logical limit of the IA64 max CPU count is larger than 64.
>> If someone changes these values, there is some possibility of making this error
>> again.
>>
>> To avoid this problem, I believe this check code should exist.
>
>See how we solve this on x86 near
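A sketch of the x86-style compile-time check being referred to (the exact relation checked is an assumption for illustration): make an inconsistent configuration fail the build instead of surfacing as a runtime error.

#include <xen/lib.h>   /* BUILD_BUG_ON() */

static inline void check_vcpu_limits(void)
{
    BUILD_BUG_ON(MAX_VIRT_CPUS > NR_CPUS);
}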
2006 Nov 30
4
evtchn_upcall_mask for PV-on-HVM
We seem to have found an interesting bug, but we are not sure.
Each time before xen returns to an hvm domain, it checks
local_events_need_delivery to see whether any events are pending, and
if so, injects an event via callback_irq into the virtual interrupt
controller.
However, the interesting point is that we never found any place, either in
xen or in the PV drivers, that clears evtchn_upcall_mask at any time. The
initial
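A sketch of the delivery rule in question, using the real field names from the public vcpu_info interface (the stand-in struct here carries only the two relevant bytes): an upcall is injected only while upcall_pending is set and upcall_mask is clear, so a mask that is never cleared blocks delivery forever.

#include <stdint.h>

struct vcpu_info_bits {
    uint8_t evtchn_upcall_pending;
    uint8_t evtchn_upcall_mask;
};

static int events_deliverable(const struct vcpu_info_bits *vi)
{
    return vi->evtchn_upcall_pending && !vi->evtchn_upcall_mask;
}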
2007 Sep 11
6
xs transaction
It seems the xenstore transaction code currently gives confusing messages
in different places:
a) the comment on xs_transaction_start says "You can only have one
transaction at any time." However, do_transaction_start allows up to
10 transactions to be created, as long as every other existing transaction
channel is idle at the time (conn->transaction == NULL)
b) when multiple transactions can be
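A minimal usage sketch of the API whose comment is quoted in (a), against the libxenstore of this era (the path written is arbitrary); whether concurrent transactions on other connections may also be open is exactly the ambiguity raised above:

#include <stdbool.h>
#include <xs.h>

int main(void)
{
    struct xs_handle *xsh = xs_daemon_open();
    xs_transaction_t t;

    if (!xsh)
        return 1;

    t = xs_transaction_start(xsh);
    xs_write(xsh, t, "/local/domain/0/test", "v", 1);
    xs_transaction_end(xsh, t, false);   /* false = commit, true = abort */

    xs_daemon_close(xsh);
    return 0;
}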