Displaying 20 results from an estimated 3000 matches similar to: "RE: [Xen-ia64-devel] RE: [BUNDLE] Testing a simpler inter-domain transport"
2008 May 30
2
relationship of the auto_translated_physmap feature and the shadow_mode_translate mode of domain
2005 Dec 29
1
RE: Guest-visible phys2mach part of Xen arch-neutral API? was: Uses of &frame_table[xfn]
> > Note that the current physical=machine in domain0 is not a
> > design requirement, just the current implementation. The question
> > at hand isn't whether Xen/ia64 domain0 should be mapped
> > physical=machine,
> > but -- if it is not -- whether the mapping should be guest-visible.
>
> The mapping will need to be guest-visible to allow correct
2011 Sep 23
2
Some problems about xenpaging
Hi, Olaf
We have tested the xenpaging feature and found some problems.
(1) The test case is as follows: we start a VM with PoD enabled, and xenpaging is started at the same time.
This case caused many problems; we eventually fixed the bug, and the patch is attached below.
(2) There is a very serious problem: we have observed many VM crashes, and the error code is not always the same.
2008 Jan 31
0
VMX status report. Xen: #16945 & Xen0: #401 -- no new issue
Hi all,
This is today's nightly testing report; no new issues.
Old Issues:
==============================================
1) Xen booting may hang at "iommu_enable_translation" on 64bit machine.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1151
2) Create hvm guest with base kernel will cause Xen0 crash
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1152
3)
2008 Jul 25
0
Weekly VMX status report. Xen: #18132 & Xen0: #616
Hi all,
Here is our weekly test report for the Xen-unstable tree: 3 new issues were
exposed, and 6 old issues were fixed. Due to bug #1304 (a qcow issue in
ioemu-remote), our testing is based on ioemu.
New issues:
==============================================
1. With ioemu-remote, a guest cannot be created with a qcow image.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1304.
2. Domain0 will
2005 Dec 30
0
RE: Guest-visible phys2mach part of Xen arch-neutral API? was: Uses of &frame_table[xfn]
>From: Keir Fraser
>Sent: 30 December 2005, 4:34
>
>On 29 Dec 2005, at 18:51, Magenheimer, Dan (HP Labs Fort Collins) wrote:
>
>> So then is p==m in dom0 (and driver domains) an unacceptable design
>> alternative for (non-x86) Xen architectures? If it is acceptable,
>> then the question remains:
>
>I think *that* is the critical question here. My feeling is that
2006 Oct 03
1
A VTx domain with the VNIF hangs.
Hi all, my name is Hirofumi Tsujimura.
We are porting and testing PV-on-HVM on IPF.
This is my first time sending mail to this list.
I believe I found a problem when I tested the VNIF.
My test procedure is as follows:
1. Create a VTx domain and attach the VNIF to it.
2. Create a domain U.
3. Send a packet to the VTx domain from the domain U with the ping
command.
Then, the domain VTx
2012 Jun 27
0
Re: [PATCH 2 of 4] xen, pod: Zero-check recently populated pages (checklast)
> # HG changeset patch
> # User George Dunlap <george.dunlap@eu.citrix.com>
> # Date 1340815812 -3600
> # Node ID ea827c449088a1017b6e5a9564eb33df70f8a9c6
> # Parent b4e1fec1c98f6cbad666a972f473854518c25500
> xen,pod: Zero-check recently populated pages (checklast)
>
> When demand-populating pages due to guest accesses, check recently
> populated
> pages to see
2007 May 30
0
[PATCH] Exceed maximum number of ioemu's NICs for VNIF.
Hi All,
We tested the PV driver on an HVM domain.
When ten vifs were defined in the configuration file for VNIF,
the HVM domain could not be created.
----------------------------------------------------------
# grep vif RHEL5GA_test.conf
vif = [ 'mac=02:17:42:2f:01:11, bridge=xenbr0',
'mac=02:17:42:2f:03:11, bridge=xenbr2',
2013 Sep 06
2
[PATCH] xen: arm: improve VMID allocation.
The VMID field is 8 bits. Rather than allowing only up to 256 VMs per host
reboot before things start "acting strange", maintain a simple bitmap
of used VMIDs and allocate them statically to guests upon creation.
This limits us to 256 concurrent VMs, which is a reasonable improvement.
Eventually we will want a proper scheme to allocate VMIDs on context switch.
The existing code
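A minimal sketch of the bitmap scheme described above, using hypothetical names (vmid_map, vmid_alloc, vmid_free) and omitting the locking the hypervisor would need; it is not the patch's actual code:

/* Illustrative sketch only.  An 8-bit VMID gives 256 possible values; track
 * which ones are in use with a bitmap, hand one out when a guest is created,
 * and return it when the guest is destroyed. */
#include <stdint.h>

#define MAX_VMID 256

static uint8_t vmid_map[MAX_VMID / 8];          /* one bit per VMID */

/* Return a free VMID, or -1 if all 256 are already in use. */
static int vmid_alloc(void)
{
    for (int vmid = 0; vmid < MAX_VMID; vmid++) {
        if (!(vmid_map[vmid / 8] & (1u << (vmid % 8)))) {
            vmid_map[vmid / 8] |= 1u << (vmid % 8);
            return vmid;
        }
    }
    return -1;
}

static void vmid_free(int vmid)
{
    vmid_map[vmid / 8] &= ~(1u << (vmid % 8));
}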
2013 Feb 28
1
[PATCH] x86/mm: fix invalid unlinking of nested p2m tables
Commit 90805dc (c/s 26387:4056e5a3d815) ("EPT: Make ept data structure or
operations neutral") causes nested p2m tables to be unlinked from the host
p2m table before their destruction (in p2m_teardown_nestedp2m).
However, by this time the host p2m table has already been torn down,
leading to a possible race condition where another allocation between
the two kinds of table being torn down can
2013 Nov 06
0
[PATCH v5 5/6] xen/arm: Implement hypercall for dirty page tracing
Add a hypercall (shadow op: enable/disable and clean/peek the dirtied-page
bitmap).
It consists of two parts: dirty-page detection and saving.
For detection, we set the guest p2m's leaf PTEs read-only, so whenever the
guest tries to write something, a permission fault occurs and traps into Xen.
The permission-faulted GPA should be saved for the toolstack (when it wants
to see which pages
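A rough sketch of the detection half described above, under the stated assumptions (leaf entries are write-protected; the permission-fault handler records the faulting GPA in a bitmap and restores write access). The names dirty_bitmap and p2m_make_writable are illustrative, not Xen's actual interfaces:

/* Illustrative sketch only.  With the guest p2m's leaf PTEs made read-only,
 * a guest write takes a permission fault into the hypervisor, which records
 * the page in a dirty bitmap and re-enables writes so the guest can retry. */
#include <stdint.h>

#define PAGE_SHIFT 12

extern uint8_t *dirty_bitmap;                   /* one bit per guest page */
extern void p2m_make_writable(uint64_t gpa);    /* hypothetical helper    */

static void handle_permission_fault(uint64_t faulting_gpa)
{
    uint64_t pfn = faulting_gpa >> PAGE_SHIFT;

    /* Remember that this page was written since the last clean/peek. */
    dirty_bitmap[pfn / 8] |= 1u << (pfn % 8);

    /* Restore write permission so the guest's store can complete. */
    p2m_make_writable(faulting_gpa);
}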
2008 Feb 01
0
[PATCH] linux/x86: make xen_change_pte_range() compatible with CONFIG_HIGHPTE
Cannot use virt_to_machine() on a kmap()-ed address.
As usual, written and tested on 2.6.24 and adjusted to apply to the 2.6.18
tree without further testing.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Index: head-2008-01-28/arch/i386/mm/hypervisor.c
===================================================================
--- head-2008-01-28.orig/arch/i386/mm/hypervisor.c 2007-10-19
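The constraint above, sketched under assumptions: virt_to_machine() relies on the direct kernel mapping, so it must not be fed an address obtained via kmap_atomic(); for a PTE page that may live in highmem (CONFIG_HIGHPTE) the machine address has to be derived from the page frame instead. pte_page_maddr is an illustrative helper, not the patch's code:

/* Illustrative sketch only.  virt_to_machine() goes through __pa(), which is
 * undefined for kmap()-ed highmem addresses.  Start from the struct page and
 * convert pfn -> mfn to build the machine address of an entry in the page. */
static maddr_t pte_page_maddr(struct page *pte_page, unsigned long offset)
{
    unsigned long mfn = pfn_to_mfn(page_to_pfn(pte_page));
    return ((maddr_t)mfn << PAGE_SHIFT) | (offset & ~PAGE_MASK);
}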
2013 Oct 17
42
[PATCH v8 0/19] enable swiotlb-xen on arm and arm64
Hi all,
this patch series enables xen-swiotlb on arm and arm64.
It has been heavily reworked compared to the previous versions in order
to achieve better performance and to address review comments.
We are not using dma_mark_clean to ensure coherency anymore. We call the
platform implementation of map_page and unmap_page.
We assume that dom0 has been mapped 1:1 (physical address ==
machine
2012 Sep 04
1
[PATCH] xen/p2m: Fix off-by-one error in checking the P2M tree directory.
We would index the full P2M top directory from 0->MAX_DOMAIN_PAGES (inclusive),
which meant that if the kernel was compiled with MAX_DOMAIN_PAGES=512
we would try to use the 512th entry. Fortunately for us the p2m_top_index
has a check for this:
BUG_ON(pfn >= MAX_P2M_PFN);
which we hit and saw this:
(XEN) domain_crash_sync called from entry.S
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN)
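A distilled illustration of the off-by-one, with simplified names (here p2m_top is just an array with MAX_DOMAIN_PAGES entries), not the kernel's actual data structures:

/* Illustrative sketch only.  With MAX_DOMAIN_PAGES entries, valid indices are
 * 0 .. MAX_DOMAIN_PAGES - 1; an inclusive upper bound walks one entry past
 * the end of the array, which is exactly the 512th-entry problem above. */
#define MAX_DOMAIN_PAGES 512

static unsigned long *p2m_top[MAX_DOMAIN_PAGES];

static void walk_p2m_top(void)
{
    /* Buggy: "<=" makes the last iteration touch p2m_top[512]. */
    /* for (unsigned int i = 0; i <= MAX_DOMAIN_PAGES; i++) ... */

    /* Fixed: exclusive upper bound stays inside the array. */
    for (unsigned int i = 0; i < MAX_DOMAIN_PAGES; i++)
        (void)p2m_top[i];
}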
2008 Mar 24
21
VMX status report. Xen: #17270 & Xen0: #488 -- no new issue
Hi all,
This is today's nightly testing report; no new issues today. Most of the case
failures are due to bugs #1183 and #1194, listed below.
For bug #1194, the issue `Linux booting hangs with "hda: dma..." errors`
got fixed in this c/s, but neither Windows nor Linux X can boot up with
sdl=1 and opengl=1 set if the guest's resolution is set to 800 * 600 or
higher.
Old
2012 Mar 15
3
[PATCH] arm: allocate top level p2m page for all non-idle VCPUs
Not just dom0.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
xen/arch/arm/domain.c | 3 +++
xen/arch/arm/domain_build.c | 3 ---
xen/arch/arm/p2m.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5702399..4b38790 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@
2008 Sep 26
1
Xen and DHCP problem
Hey folks!
I want to add a CentOS 5.2 domU to an existing server (Fedora 8) which was set up by another company. The Dom0 does not provide any services other than a BIND DHCP server (they deleted dnsmasq and installed BIND). Don't ask me why they did that.
My new DomU is connected to eth0, which is a bridge. BIND also listens on eth0 to serve other machines.
I start the install of CentOS
2008 May 23
6
VMX status report. Xen: #17702 & Xen0: #559 -- no new issue
Hi all,
This is today's nightly testing report; no new issues were found, and bug #1259
got fixed.
Some VT-d cases failed in the first round of testing, but passed in
retesting.
Fixed issue:
==============================================
1. booting windows guest causes Xen HV crash
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1259
Old issues:
2007 Mar 09
0
Full testing against xen unstable 14201
Hi all
The Intel QA team did full testing against xen unstable 14201 this week.
In the testing we ran all our tests for device model, control panel,
guest installation, and xen function on x86p, x86_64, and IPF.
Here is the testing report:
########################################################################
###
Summary:
In total, 12 new issues have been found (11 for x86, 1 for IPF):
x86:
1. Mouse