similar to: map_domain_page

Displaying 20 results from an estimated 400 matches similar to: "map_domain_page"

2008 Jul 24
2
[RFC] i386 highmem assist hypercalls
While looking at the origin of very frequently executed hypercalls (as usual, kernel builds are what's being measured), I realized that the high page accessor functions in Linux would be good candidates to handle in the hypervisor - clearing or copying to/from a high page is a pretty frequent operation (provided there's enough memory). However, the measured results
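
As a speculative sketch of what such an assist could look like from the guest side, assuming an MMUEXT_CLEAR_PAGE operation and the Linux HYPERVISOR_mmuext_op() wrapper (both are assumptions for illustration, not quoted from the message above), a guest helper could try the hypercall and fall back to the usual kmap path:

    /*
     * Sketch only: clear a highmem page via an mmuext op instead of
     * kmap_atomic() + clear_page().  MMUEXT_CLEAR_PAGE and the fallback
     * details are assumptions, not part of the RFC text above.
     */
    #include <linux/mm.h>
    #include <linux/highmem.h>
    #include <asm/xen/hypercall.h>
    #include <asm/xen/page.h>
    #include <xen/interface/xen.h>

    static void xen_clear_highpage(struct page *page)
    {
        struct mmuext_op op = {
            .cmd      = MMUEXT_CLEAR_PAGE,
            .arg1.mfn = pfn_to_mfn(page_to_pfn(page)),
        };

        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) != 0) {
            /* Hypervisor without the assist: do it the usual way. */
            void *kaddr = kmap_atomic(page);

            clear_page(kaddr);
            kunmap_atomic(kaddr);
        }
    }
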
2011 Jan 21
11
[PATCH]x86:x2apic: Disable x2apic on x86-32 permanently
x86:x2apic: Disable x2apic on x86-32 permanently x2apic initialization on x86_32 uses the vcpu pointer before it is initialized. As x2apic is unlikely to be used on x86_32, this patch disables x2apic permanently on x86_32. It also asserts the sanity of the vcpu pointer before dereferencing it, to prevent further misuse. Signed-off-by: Fengzhe Zhang <fengzhe.zhang@intel.com> diff -r 02c0af2bf280
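
A minimal sketch of the guard pattern the commit message describes (names here are illustrative, not the actual Xen symbols): refuse x2apic on 32-bit builds and assert the vcpu pointer before it is ever dereferenced.

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct vcpu;                      /* opaque for this sketch */

    static bool x2apic_available(void)
    {
    #ifdef __i386__
        return false;                 /* permanently disabled on x86-32 */
    #else
        return true;
    #endif
    }

    static void program_x2apic_id(const struct vcpu *v)
    {
        assert(v != NULL);            /* catch use before initialisation */
        /* ... derive and program the APIC ID for this vcpu ... */
        (void)v;
    }
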
2008 Oct 17
6
[PATCH, RFC] i386: highmem access assistance hypercalls
While looking at the origin of very frequently executed hypercalls I realized that the high page accessor functions in Linux would be good candidates to handle in the hypervisor - clearing or copying to/from a high page is a pretty frequent operation (provided there's enough memory in the domain). While prior to the first submission I only measured kernel builds (where the results are not
2007 Sep 21
5
[NEO 1:1] Nativedom 1:1 Mapping
This patch applies to c/s #15522. Nativedom 1:1 memory enabling - Done by "stealing" memory from Xen's e820 at boot time. The pages are later allocated to NativeDom using a special allocator. x86-64 ====== The 512KB-1MB region is remapped (because of the ROMs) to an address above 16MB. As far as NativeDom can see: 1. The 0-512KB
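
For illustration, a self-contained sketch of the boot-time "stealing" idea: trim the wanted window out of an e820-style RAM entry so the normal allocator never hands it out. The structure and helper below are generic stand-ins, not the Xen code referenced in the patch.

    #include <stdint.h>

    #define E820_RAM 1

    struct e820entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;
    };

    /* Reserve [start, start+len) by trimming a RAM entry that borders it;
     * returns 0 on success, -1 if no suitable entry exists.  A full
     * implementation would also handle splitting an entry in two. */
    static int steal_from_e820(struct e820entry *map, unsigned int nr,
                               uint64_t start, uint64_t len)
    {
        for (unsigned int i = 0; i < nr; i++) {
            struct e820entry *e = &map[i];

            if (e->type != E820_RAM || len > e->size)
                continue;
            if (start == e->addr) {
                e->addr += len;            /* carve off the front */
                e->size -= len;
                return 0;
            }
            if (start + len == e->addr + e->size) {
                e->size -= len;            /* carve off the tail */
                return 0;
            }
        }
        return -1;
    }
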
2007 May 30
30
[VTD][patch 0/5] HVM device assignment using vt-d
The following 5 patches are re-submissions of the vt-d patch. This set of patches has been tested against cs# 15080 and is now much more mature and tested against more environments than the original patch. Specifically, we have successfully tested the patch with the following environments: - 32/64-bit Linux HVM guest - 32-bit Windows XP/Vista (64-bit should work but was not tested) -
2006 Feb 27
0
RE: Re: Will map_domain_page return NULL when fails on x86_32?
>>>> Hi Keir, just a curious question, will map_domain_page >>> return NULL when >>>> fails on x86_32? If not, why? >>>> thanks >>> >>> I don't expect that it should ever fail. It's used for temporary >>> mappings (e.g., scope of a function) and so even though the mapping >>> space is finite, we
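
The "scope of a function" usage pattern being discussed looks roughly like this inside Xen (a sketch only; the exact argument type of map_domain_page() has changed across Xen versions):

    #include <xen/domain_page.h>
    #include <xen/mm.h>
    #include <xen/string.h>

    static void zero_domain_page(unsigned long mfn)
    {
        void *p = map_domain_page(mfn);   /* not expected to return NULL */

        memset(p, 0, PAGE_SIZE);
        unmap_domain_page(p);             /* release the temporary mapping */
    }
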
2013 Jun 18
16
[PATCH] ARM: cache coherence problem in guestcopy.c
I've encountered a rather unusual bug while implementing live migration on an Arndale board. After resume, the domU kernel starts invoking hypercalls and at some point the hypercall parameters delivered to Xen are corrupted. After some debugging (with the help of a HW debugger), I found that cache pollution happens, and here is the detailed sequence. 1) The DomU kernel allocates a local
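
The general fix pattern for this kind of corruption is to clean the data cache after Xen writes guest memory through its own mapping. The sketch below uses flush_xen_dcache_va_range() as an assumed helper name; the real primitive and its name vary between Xen versions and are not given in the message above.

    #include <string.h>

    /* Assumed cache-maintenance helper; see the note above. */
    extern void flush_xen_dcache_va_range(const void *va, unsigned long size);

    static void copy_to_guest_page(void *xen_mapping, const void *src,
                                   unsigned long len)
    {
        memcpy(xen_mapping, src, len);
        /* Clean the written range so a guest mapping with different memory
         * attributes cannot later read stale cache lines. */
        flush_xen_dcache_va_range(xen_mapping, len);
    }
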
2013 Dec 04
5
[PATCH] coverity: Store the modelling file in the source tree.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> CC: Keir Fraser <keir@xen.org> CC: Jan Beulich <JBeulich@suse.com> CC: Tim Deegan <tim@xen.org> CC: Ian Campbell <Ian.Campbell@citrix.com> CC: Ian Jackson <Ian.Jackson@eu.citrix.com> --- misc/coverity_model.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+)
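
For context, a modelling file of this kind typically contains small stub definitions that teach Coverity facts it cannot infer from the real sources; the example below is a generic illustration using the __coverity_panic__() modelling primitive, not an excerpt from the actual misc/coverity_model.c.

    /* Tell Coverity that panic() never returns, so code following a panic
     * is not analysed as reachable.  Illustrative model only. */
    void panic(const char *fmt, ...)
    {
        __coverity_panic__();
    }
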
2013 Dec 02
3
Assertion 'l1e_get_pfn(MAPCACHE_L1ENT(hashent->idx)) == hashent->mfn' failed at domain_page.c:203
Today is my day! This is with Xen 4.4 (pulled today) when I build a kernel in dom0 and have two guests launching at the same time. This is what I get: (XEN) Assertion 'l1e_get_pfn(MAPCACHE_L1ENT(hashent->idx)) == hashent->mfn' failed at domain_page.c:203 and it blows up. Here is the full log:
2007 May 31
4
[RFC][PATCH 4/6] HVM PCI Passthrough (non-IOMMU)
int.patch: - Supports only level-triggered interrupts. Edge interrupt support will be added shortly (should be fairly simple) - Change-polarity trick: in order to reflect the external device's assertion state, the ioapic pin gets its polarity changed whenever an interrupt occurs. So an interrupt is generated when the _external_ line is asserted (then,
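
One way to read the trick described above, as a rough sketch (the structure and helper below are hypothetical, not taken from the patch): every time the pin fires, flip both the virtual line level and the programmed polarity, so that the next change of the external line in either direction is again seen as an assertion.

    #include <stdbool.h>

    struct vioapic_pin {
        bool active_low;       /* currently programmed polarity */
        bool line_asserted;    /* level currently presented to the guest */
    };

    static void on_physical_interrupt(struct vioapic_pin *pin)
    {
        /* The external line changed state: mirror it for the guest... */
        pin->line_asserted = !pin->line_asserted;
        /* ...and invert the polarity so the next transition of the external
         * line, whichever direction, is observed as an interrupt again. */
        pin->active_low = !pin->active_low;
    }
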
2011 Mar 25
2
[RFC PATCH 2/3] AMD IOMMU: Implement p2m sharing
2008 Aug 21
2
doubt on releasing domain pages
Hi, I am trying to release domU pages from page_list and xenpage_list after domU shutdown while retaining the rest of the domain information. To achieve this, in __domain_finalise_shutdown I call domain_relinquish_resources. This is failing to release pages from page_list for type PGT_l2_page_tables and is crashing dom0. To be specific, while testing on mini-os I saw that when
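
For reference, a greatly simplified sketch of the reference dropping that relinquishing a page involves, loosely modelled on Xen's relinquish_memory() (this is not the poster's code, and it omits the list walking and preemption handling): page-table pages carry a type reference that has to be dropped before the general allocation reference, which is part of what makes releasing them early delicate.

    #include <xen/mm.h>
    #include <asm/mm.h>

    static void relinquish_page(struct page_info *page)
    {
        /* Drop the pinned type reference, if the page still holds one
         * (page-table pages such as L2s normally do). */
        if ( test_and_clear_bit(_PGT_pinned, &page->u.inuse.type_info) )
            put_page_and_type(page);

        /* Drop the reference taken when the page was allocated to the
         * domain; the page is freed once its count reaches zero. */
        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
            put_page(page);
    }
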
2007 Jan 08
6
Xen 3.0.4 - Ballooning
Hi, Maybe it's a dumb question, but I'm actually trying to understand how memory allocation works within Xen. I try to give 128MB to a domU and see if it increases, for example when I "nano" a 500MB file, but the process just gets killed when it reaches the 128MB memory limit. How do I configure the guests so they can ask for more memory until a limit is reached
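
What the poster is after is usually done in the xm domain configuration: start the guest small but set a higher ceiling it may balloon up to, then grow it at run time. The values below are illustrative only:

    # In the domain's xm config file: boot with 128 MB, allow up to 512 MB.
    memory = 128
    maxmem = 512

With a balloon driver running in the guest, "xm mem-set <domain> 256" then raises the allocation at run time, up to the maxmem ceiling.
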
2013 Nov 22
2
[PATCH v2 02/15] xen: arm64: Add Basic Platform support for APM X-Gene Storm.
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> This patch adds initial platform stubs for APM X-Gene. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Drop earlyprintk (split into earlier patch). Only build on ARM64. Drop empty init and reset hooks and enable 1:1 workaround. Signed-off-by: Ian
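
As a rough sketch of what such a platform stub amounts to, assuming Xen/ARM's PLATFORM_START/PLATFORM_END registration macros and an "apm,xgene-storm" compatible string (both assumed here, not quoted from the patch):

    /* Illustrative platform stub; field names and the compatible string are
     * assumptions, not the actual xgene-storm.c from the series. */
    static const char * const xgene_storm_dt_compat[] __initconst =
    {
        "apm,xgene-storm",
        NULL
    };

    PLATFORM_START(xgene_storm, "APM X-GENE STORM")
        .compatible = xgene_storm_dt_compat,
    PLATFORM_END
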
2008 Jun 27
2
PCI device assignment to guests (userspace)
Userspace patches for the pci-passthrough functionality. The major updates since the last post are: - Loop to add passthrough devices in pc_init1 - Handle errors in read/write calls - Allow invocation without irq number for in-kernel irqchip Other than this, several small things were fixed according to review comments received last time.
2011 Sep 01
4
[xen-unstable test] 8803: regressions - FAIL
flight 8803 xen-unstable real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/8803/ Regressions :-( Tests which did not succeed and are blocking: test-amd64-i386-rhel6hvm-intel 4 xen-install fail REGR. vs. 8769 test-amd64-i386-xl 4 xen-install fail REGR. vs. 8769 test-amd64-i386-pair 6 xen-install/dst_host fail REGR. vs. 8769
2007 Oct 10
3
Multiple PCI bus support
Hi, I saw that Xen supports a translation between device/intx and GSI for a single PCI bus. I thought about adding multiple PCI bus support but disregarding the bus information, so the same device/intx on different buses would be OR-wired to the same GSI - sounds reasonable? What other things do I need to support in Xen in order to add multiple PCI buses, assuming that secondary buses hold only
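
A minimal sketch of the kind of bus-agnostic mapping being proposed: fold (device, intx) into a GSI and simply ignore the bus number. The spreading formula below is the conventional one used for HVM guests' PCI links and is offered only as an illustration.

    /* Map a PCI (device, INTx pin) pair to a GSI, ignoring the bus. */
    static unsigned int pci_intx_to_gsi(unsigned int device, unsigned int intx)
    {
        /* GSIs 0-15 are the legacy ISA IRQs; PCI interrupts start at 16.
         * The "barber-pole" spread distributes slots across the 32 pins. */
        return 16 + ((device * 4 + device / 8 + intx) & 31);
    }
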
2013 Sep 06
2
[PATCH] xen: arm: improve VMID allocation.
The VMID field is 8 bits. Rather than allowing only up to 256 VMs per host reboot before things start "acting strange", maintain a simple bitmap of used VMIDs and allocate them statically to guests upon creation. This limits us to 256 concurrent VMs, which is a reasonable improvement. Eventually we will want a proper scheme to allocate VMIDs on context switch. The existing code
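
For illustration, a self-contained sketch of such a static bitmap allocator (locking and any reserved-VMID handling are left out; the names are generic, not the patch's):

    #include <stdint.h>
    #include <limits.h>

    #define MAX_VMID 256
    #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

    static unsigned long vmid_mask[MAX_VMID / BITS_PER_WORD];

    static int alloc_vmid(void)
    {
        for (unsigned int id = 0; id < MAX_VMID; id++) {
            unsigned long *word = &vmid_mask[id / BITS_PER_WORD];
            unsigned long bit = 1UL << (id % BITS_PER_WORD);

            if (!(*word & bit)) {
                *word |= bit;
                return (int)id;      /* allocated */
            }
        }
        return -1;                   /* all 256 VMIDs in use */
    }

    static void free_vmid(unsigned int id)
    {
        vmid_mask[id / BITS_PER_WORD] &= ~(1UL << (id % BITS_PER_WORD));
    }
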
2011 Sep 23
2
Some problems about xenpaging
Hi Olaf, we have tested the xenpaging feature and found some problems. (1) The test case is like this: when we start a VM with PoD enabled, xenpaging is started at the same time. This case causes many problems; we eventually fixed the bug, and the patch is attached below. (2) There is a very serious problem: we have observed many VM crashes, and the error code is not always the same.