similar to: About ISA DMA

Displaying 20 results from an estimated 20000 matches similar to: "About ISA DMA"

2007 Feb 14
2
[PATCH 8/8] 2.6.17: scan DMI early
While shuffling quite a few things around, this gets us closer to native, which clearly had a reason to do the DMI scan early. Signed-off-by: Jan Beulich <jbeulich@novell.com> Index: head-2007-02-08/arch/i386/mm/ioremap-xen.c =================================================================== --- head-2007-02-08.orig/arch/i386/mm/ioremap-xen.c 2007-02-08 17:07:13.000000000 +0100 +++
2007 Apr 18
2
Time to post some patches?
Looks to me like the first series of patches should be OK to post now. I propose that: 001-apply-to-page-range.patch 001a-reboot-use-struct.patch 002-sync-bitops.patch 003-remove-ring0-assumptions.patch 004-abstract-asm.patch 005-cpuid-cleanup.patch unfix-fixmap.patch fixmap-bootparam.patch remove-read-hazard-from-cow.patch pte-clear-not-present.patch
2008 Oct 17
6
[PATCH, RFC] i386: highmem access assistance hypercalls
While looking at the origin of very frequently executed hypercalls I realized that the high page accessor functions in Linux would be good candidates to handle in the hypervisor - clearing or copying to/from a high page is a pretty frequent operation (provided there's enough memory in the domain). While prior to the first submission I only measured kernel builds (where the results are not
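A minimal sketch of the idea, assuming the MMUEXT_CLEAR_PAGE operation the series proposes; the fallback is the stock Linux path, and xen_clear_highpage is an illustrative name:

    #include <linux/highmem.h>
    #include <xen/interface/xen.h>   /* struct mmuext_op, MMUEXT_CLEAR_PAGE */

    /* Ask the hypervisor to clear the frame backing a (possibly high) page,
     * instead of kmap_atomic() + memset() in the guest. */
    static void xen_clear_highpage(struct page *page)
    {
        struct mmuext_op op = {
            .cmd      = MMUEXT_CLEAR_PAGE,
            .arg1.mfn = pfn_to_mfn(page_to_pfn(page)),
        };

        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) != 0)
            clear_highpage(page);    /* hypercall failed: clear locally */
    }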
2008 Nov 25
7
when timer goes back in dom0 save and restore or migrate, PV domain hangs
Hi, I find the PV domain hangs when we take these steps: 1. save the PV domain; 2. change the system time of the PV domain back; 3. restore the PV domain. Or: 1. migrate a PV domain from machine A to machine B, where 2. the system time of machine B is behind machine A's. The problem is that wc_sec will change when the system time is changed in dom0, or on restore in a
2012 Sep 11
2
[PATCH RFC 5/8] ns16550: MMIO adjustments
On x86, ioremap() is not suitable here; set_fixmap() must be used instead. Also replace some literal numbers with their proper symbolic constants, making the code easier to understand. Signed-off-by: Jan Beulich <jbeulich@suse.com> --- a/xen/drivers/char/ns16550.c +++ b/xen/drivers/char/ns16550.c @@ -20,6 +20,9 @@ #include <xen/pci.h> #include <xen/pci_regs.h> #include
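The x86 side would look roughly like this (a sketch only; FIX_COM_BEGIN as the reserved fixmap slot is an assumption):

    /* Map the UART's MMIO page through a dedicated fixmap slot, uncached,
     * instead of ioremap(), which is not usable here on x86. */
    set_fixmap_nocache(FIX_COM_BEGIN, uart->io_base);
    uart->remapped_io_base  = (char *)fix_to_virt(FIX_COM_BEGIN);
    uart->remapped_io_base += uart->io_base & ~PAGE_MASK;  /* sub-page offset */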
2013 Nov 18
6
[PATCH RFC v2] pvh: clearly specify used parameters in vcpu_guest_context
The aim of this patch is to define a stable way in which PVH is going to do AP bringup. Since we are running inside an HVM container, PVH should only need to set flags, cr3 and user_regs in order to bring up a vCPU; the rest can be set once the vCPU is started, using the bare-metal methods. Additionally, the guest can also set cr0 and cr4, and those values will be appended to the default values
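A hedged sketch of the minimal bringup described above (field names follow the public vcpu_guest_context; ap_entry, ap_stack and initial_page_table are illustrative):

    struct vcpu_guest_context ctxt;
    int rc;

    memset(&ctxt, 0, sizeof(ctxt));
    ctxt.flags         = VGCF_in_kernel;           /* plus a PVH flag per the RFC */
    ctxt.ctrlreg[3]    = __pa(initial_page_table); /* cr3 */
    ctxt.user_regs.eip = (unsigned long)ap_entry;  /* AP entry point */
    ctxt.user_regs.esp = (unsigned long)ap_stack;  /* AP stack */

    rc = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, &ctxt);
    if (rc == 0)
        rc = HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);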
2008 Mar 27
11
[PATCH 1/5] Add MSI support to XEN
This patch changes the pirq to be per-domain in the xen tree. Signed-off-by: Jiang Yunhong <yunhong.jiang@intel.com> Signed-off-by: Shan Haitao <haitao.shan@intel.com> Best Regards Shan Haitao
2009 Apr 16
9
Second release candidate for Xen 3.4.0
Folks, The second release candidate for Xen 3.4.0 is available at http://xenbits.xensource.com/xen-unstable.hg, tagged as '3.4.0-rc2'. Please test! -- Keir
2007 Oct 17
7
[VTD][RESEND]add a timer for the shared interrupt issue for vt-d
Keir, This is a resend of the patch adding the timeout mechanism to deal with the shared interrupt issue for vt-d enabled HVM guests. We modified the patch following your comments last time and made some other small fixes: 1) We don't touch the locking around hvm_dpci_eoi(). 2) Remove the HZ from the TIME_OUT_PERIOD macro, which may confuse others. 3) Add some
2012 Oct 11
14
alloc_heap_pages is inefficient with more CPUs
I am puzzled by a problem: I have a blade with 64 physical CPUs and 64G of physical RAM, and defined only one VM with 1 CPU and 40G RAM. The first time I started the VM, it took just 3s, but the second start took 30s. After studying it by printing logs, I located a place in the hypervisor that costs too much time, accounting for 98% of the whole start time. xen/common/page_alloc.c
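The arithmetic alone makes the report plausible: 40G is about 10.5 million 4K pages, so even a few microseconds of per-page work inside alloc_heap_pages adds up to tens of seconds. A quick standalone check (the per-page cost is an assumed figure, not a measurement):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long pages = 40ULL * 1024 * 1024 * 1024 / 4096; /* ~10.5M */
        double us_per_page = 2.8;   /* assumed per-page cost in alloc_heap_pages */
        printf("%llu pages -> %.1f s\n", pages, pages * us_per_page / 1e6);
        return 0;
    }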
2007 Jan 30
45
[PATCH] Fix softlockup issue after vcpu hotplug
Stamp the softlockup thread earlier, before do_timer, because the latter is the one that actually triggers the lockup warning after a long time offline. Otherwise, I observed softlockup warnings easily at manual vcpu hot-remove/plug, or when a cancelled suspend resumes into the old context. One point here is to cover both stolen and blocked time when comparing against the offline threshold. vcpu hotplug falls into 'stolen'
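A sketch of the ordering and the combined check (variable names illustrative; touch_softlockup_watchdog() and do_timer() are the stock Linux helpers):

    /* Count both stolen and blocked time toward the offline threshold, and
     * touch the watchdog *before* do_timer() runs, so a long offline period
     * (vcpu hot-remove/plug, cancelled suspend) cannot trip the warning. */
    unsigned long offline_ns = stolen_ns + blocked_ns;

    if (offline_ns > OFFLINE_THRESHOLD_NS)
        touch_softlockup_watchdog();

    do_timer(ticks);    /* would otherwise report a false lockup */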
2005 May 24
28
Xen & Transmeta (from xen-users)
All- With suggestions from Ian and previous posts on this list, I've been investigating why Xen causes a Transmeta-based system to reboot immediately. I've added instrumentation to xen/arch/x86/boot/x86_32.S (a collection of '.asciz "foo"' statements) hoping to locate a point of failure, and it dies sometime before this code is run. At what point
2007 Sep 30
6
[VTD][PATCH] a time out mechanism for the shared interrupt issue for vtd
Attached is a patch for interrupts shared between dom0 and an HVM domain for vt-d. Most of the problem is caused by the fact that we must inject the interrupt into both domains, and the physical interrupt deassertion may then be delayed by the device assigned to the HVM domain. The patch adds a timer, with a time-out value sufficiently large to tolerate the delay while waiting for the physical interrupt deassertion.
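A hedged sketch of the mechanism using Xen's timer interface (names and the time-out value are illustrative, not the patch's exact ones):

    #include <xen/timer.h>

    #define TIME_OUT_PERIOD MILLISECS(8)    /* assumed value */

    /* If the guest has not EOIed within TIME_OUT_PERIOD, deassert/EOI the
     * physical interrupt so dom0's device on the shared line is not starved. */
    static void pt_irq_time_out(void *data)
    {
        struct hvm_irq_dpci *dpci = data;
        /* ... clear the pending state and EOI the physical IRQ ... */
    }

    /* at injection time, after init_timer(&dpci->hvm_timer, pt_irq_time_out,
     * dpci, smp_processor_id()): */
    set_timer(&dpci->hvm_timer, NOW() + TIME_OUT_PERIOD);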
2007 Feb 13
7
Taken fault at bad CS c000...
Just saw warnings like: ... (XEN) printk: 387824 messages suppressed. (XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003aab (XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003ab2 (XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003aab (XEN) seg_fixup.c:282: Taken fault at bad CS c000, IP 00003ab2 ... It only showed up when switching to/from X-windows within dom0, and
2008 Feb 03
5
[PATCH] Simplify paging_invlpg when flush is not required.
Simplify paging_invlpg when a flush is not required. A new 'flush' parameter is added to paging_invlpg, allowing the caller to specify whether the flush check is required. It's wasteful to always validate the shadow linear mapping if the caller doesn't check the return value at all. Signed-off-by: Kevin Tian <kevin.tian@intel.com> Thanks, Kevin
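The interface change, as described (a sketch; the final upstream form may differ):

    /* Callers that never inspect the return value pass flush = 0, which
     * skips the shadow-linear-mapping validation entirely. */
    int paging_invlpg(struct vcpu *v, unsigned long va, int flush);

    /* e.g. a caller that flushes unconditionally anyway: */
    (void)paging_invlpg(current, va, /* flush */ 0);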
2013 Feb 21
2
[PATCH] xen: consolidate implementations of LOG() macro
arm64 is going to add another one shortly, so take control now. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: keir@xen.org Cc: jbeulich@suse.com Cc: tim@xen.org --- xen/arch/arm/arm32/asm-offsets.c | 8 +------- xen/arch/x86/x86_64/asm-offsets.c | 8 +------- xen/include/xen/bitops.h | 7 +++++++ 3 files changed, 9 insertions(+), 14 deletions(-) diff --git
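For reference, the macro being consolidated, as it appeared in the asm-offsets.c files (reconstructed here; the upstream definition may differ in detail): a preprocessor-only base-2 logarithm for 32-bit power-of-two constants.

    #define __L2(_x)  (((_x) & 0x00000002) ?  1 : 0)
    #define __L4(_x)  (((_x) & 0x0000000c) ? ( 2 + __L2((_x) >> 2)) : __L2(_x))
    #define __L8(_x)  (((_x) & 0x000000f0) ? ( 4 + __L4((_x) >> 4)) : __L4(_x))
    #define __L16(_x) (((_x) & 0x0000ff00) ? ( 8 + __L8((_x) >> 8)) : __L8(_x))
    #define LOG_2(_x) (((_x) & 0xffff0000) ? (16 + __L16((_x) >> 16)) : __L16(_x))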
2007 May 30
30
[VTD][patch 0/5] HVM device assignment using vt-d
The following 5 patches are re-submissions of the vt-d patch. This set of patches has been tested against cs# 15080 and is now much more mature and tested in more environments than the original patch. Specifically, we have successfully tested the patch with the following environments: - 32/64-bit Linux HVM guest - 32-bit Windows XP/Vista (64-bit should work but was not tested) -
2007 Jan 26
12
[Patch] the interface of invalidating qemu mapcache
An HVM balloon driver (or something similar that's under development) may decrease or increase the machine memory taken by an HVM guest. On an IA32/IA32e host, Qemu currently maps the physical memory of the HVM guest in small blocks (the block size is 64K on an IA32 host, 1M on an IA32E host). When the HVM balloon driver decreases the reserved machine memory of the HVM guest, Qemu should unmap the
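An illustrative sketch of the granularity involved (names hypothetical): because qemu caches guest memory in fixed-size blocks, an invalidation request only needs to name the block containing the ballooned-out page, not each 4K page.

    #define MAPCACHE_BLOCK_SHIFT 16                   /* 64K blocks on IA32 */
    #define MAPCACHE_BLOCK_SIZE  (1UL << MAPCACHE_BLOCK_SHIFT)

    static inline unsigned long mapcache_block(unsigned long gpa)
    {
        return gpa >> MAPCACHE_BLOCK_SHIFT;   /* index of the block to unmap */
    }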
2008 Nov 27
1
Re: RE: Re: Re: when timer goes back in dom0 save and restore or migrate, PV domain hangs
F.Y.I. >>> "Tian, Kevin" <kevin.tian@intel.com> 08.11.27. 11:50 >>> Sorry for a typo. I meant domU instead of dom0. :-) The point here is that time_resume will sync to the new system time and wall clock at restore, and thus the PV guest should be able to continue... Xen system time is not wall-clock time; it just counts up from power-on. As Keir points out, only its
2007 Jul 19
6
Anyone succeeds HVM on latest x86-64 xen
I tried the latest xen and linux-xen staging trees, but failed to run an HVM domain in an x86-64 environment. domU creation is OK. The weird thing, however, is not the HVM domain itself; instead, the system crashed in dom0 context. I once saw a stack dump showing that xen's page fault handler was executed on a dom0 stack, which then caused a nested page fault due to being unable to fetch the vcpu pointer.