similar to: [PATCH RFC v13 00/20] Introduce PVH domU support

Displaying 20 results from an estimated 1000 matches similar to: "[PATCH RFC v13 00/20] Introduce PVH domU support"

2013 Nov 18
6
[PATCH RFC v2] pvh: clearly specify used parameters in vcpu_guest_context
The aim of this patch is to define a stable way for PVH to do AP bringup. Since we are running inside an HVM container, PVH should only need to set flags, cr3 and user_regs in order to bring up a vCPU; the rest can be set once the vCPU is started, using the bare-metal methods. Additionally, the guest can also set cr0 and cr4, and those values will be appended to the default values
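(A minimal sketch of the AP-bringup contract described above, assuming the struct vcpu_guest_context layout from Xen's public arch-x86 headers, where cr3 lives in ctrlreg[3]; the helper name and the entry/stack/page-table values are mine, purely for illustration.)

/* Sketch only: fill in just flags, cr3 and user_regs before starting an AP,
 * as the patch description says; everything else is set by the vCPU itself
 * once it is running.  Field names follow the public
 * xen/include/public/arch-x86/xen.h layout. */
#include <string.h>
#include <xen/xen.h>   /* struct vcpu_guest_context, VGCF_* (Xen public headers) */

static void pvh_prepare_ap_context(struct vcpu_guest_context *ctxt,
                                   unsigned long entry_rip,
                                   unsigned long stack_rsp,
                                   unsigned long cr3_base)
{
    memset(ctxt, 0, sizeof(*ctxt));

    ctxt->flags = VGCF_in_kernel;        /* start the vCPU in kernel mode */
    ctxt->user_regs.rip = entry_rip;     /* AP entry point */
    ctxt->user_regs.rsp = stack_rsp;     /* initial stack */
    ctxt->ctrlreg[3] = cr3_base;         /* cr3: initial page-table base */
    /* Optional cr0/cr4 bits set in ctrlreg[0]/ctrlreg[4] are merged with
     * the hypervisor's defaults, per the description above. */
}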
2013 Dec 06
36
[V6 PATCH 0/7]: PVH dom0....
Hi, V6: The only change from V5 is in patch #6: changed a comment to reflect autoxlate, removed a redundant ASSERT, and reworked the logic a bit so that get_page_from_gfn() is called with NULL for the p2m type as before (ARM has an ASSERT wanting it to be NULL). Tim: patch 4 needs your approval. Daniel: patch 5 needs your approval. These patches implement PVH dom0. Patches 1 and 2
2013 Jan 19
21
[PATCH]: PVH: specify xen features strings cleany for PVH
On Thu, 17 Jan 2013 22:22:47 -0500 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote: > Jan had some comments about that patch: > > https://patchwork.kernel.org/patch/1745041/ > > Please fix it up so I can put it in the Linux tree. Please see below. Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Thanks, Mukesh diff --git a/arch/x86/xen/xen-head.S
2012 Mar 20
5
[hybrid]: hang in update_wall_time
Hi Ian/Stefano: I changed over to the PV clock for hybrid like we talked about at the hackathon. I still have the hang in update_wall_time() after dom0 switches to xen as the clocksource. The source of the hang seems to be Xen's stime_local_stamp in cpu_time, which suddenly jumps to a large 64-bit value. I've been chasing down where that happens, and why it hits the hybrid guest and not PV. It appears the
2013 Jan 30
2
[PATCH] PVH: remove code to map iomem from guest
It was decided during Xen patch review that Xen should map the iomem transparently, so remove xen_set_clr_mmio_pvh_pte() and the PHYSDEVOP_map_iomem sub-hypercall. --- arch/x86/xen/mmu.c | 14 -------------- arch/x86/xen/setup.c | 16 ++++------------ include/xen/interface/physdev.h | 10 ---------- 3 files changed, 4 insertions(+), 36 deletions(-) diff --git
2013 Feb 28
1
[PATCH v2] arch/x86/xen: remove depends on CONFIG_EXPERIMENTAL
The CONFIG_EXPERIMENTAL config item has not carried much meaning for a while now and is almost always enabled by default. As agreed during the Linux kernel summit, remove it from any "depends on" lines in Kconfigs. Signed-off-by: Kees Cook <keescook at chromium.org> Cc: Stefano Stabellini <stefano.stabellini at eu.citrix.com> Cc: Mukesh Rathor <mukesh.rathor at
2013 Feb 23
1
[PATCH] arch/x86/xen: remove depends on CONFIG_EXPERIMENTAL
The CONFIG_EXPERIMENTAL config item has not carried much meaning for a while now and is almost always enabled by default. As agreed during the Linux kernel summit, remove it from any "depends on" lines in Kconfigs. Signed-off-by: Kees Cook <keescook at chromium.org> Cc: Stefano Stabellini <stefano.stabellini at eu.citrix.com> Cc: Mukesh Rathor <mukesh.rathor at
2013 May 16
5
xc_map_foreign_bulk() memory leak in ARM version?
Hi Xen folks! I've run into one strange thing in the ARM version of Xen: when I use xc_map_foreign_bulk() to map some memory from domU into dom0, the memory is not freed at all after unmap() of the previously returned address. Let's look at the call stack: xc_map_foreign() -> linux_privcmd_map_foreign_bulk() -> { addr = mmap(fd); ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2 );
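(A minimal sketch of the map/unmap cycle being described, assuming the usual libxc signature of xc_map_foreign_bulk() from xenctrl.h; the domid and gfn values are placeholders supplied by the caller, and the helper name is mine.)

/* Sketch of the map/unmap pattern discussed above.  xc_map_foreign_bulk()
 * boils down to mmap() on the privcmd device followed by
 * IOCTL_PRIVCMD_MMAPBATCH_V2; the question in the thread is why the pages
 * are not released after the mapping is torn down. */
#include <sys/mman.h>
#include <xenctrl.h>

int map_and_unmap_one_page(uint32_t domid, xen_pfn_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    int err = 0;

    if ( !xch )
        return -1;

    /* Map one domU frame into this (dom0) process. */
    void *addr = xc_map_foreign_bulk(xch, domid, PROT_READ | PROT_WRITE,
                                     &gfn, &err, 1);
    if ( addr && !err )
    {
        /* ... use the mapping ... */
        munmap(addr, XC_PAGE_SIZE);   /* expected to release the frame */
    }

    xc_interface_close(xch);
    return (addr && !err) ? 0 : -1;
}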
2013 Jul 23
73
Bug: Limitation of <=2GB RAM in domU persists with 4.3.0
I just built 4.3.0 in order to get > 2GB of RAM in domU with GPU passthrough without crashes. Unfortunately, the same crashes still happen: massive frame-buffer corruption in domU before it locks up solid. It seems the PCI memory stomp is still happening. I am using qemu-dm, as I did on Xen 4.2.x, so whatever fix for this went into 4.3.0 didn't fix it for me. Passing less than 2GB
2013 Jan 28
16
PVH questions
Hello, I've had a look at PVH support, and I have a few questions: - events are still dispatched the PV way through the callback, right? - I guess FPU errors don't trigger an INT13, so I don't need to handle that? - How about the console and store MFNs from the boot info? Are they still MFNs, or actually PFNs? - How about PV network in non-copy mode? It used to be
2013 Jun 04
13
[PATCH] x86/vtsc: update vcpu_time after hvm_set_guest_time
When using a vtsc, hvm_set_guest_time changes hvm_vcpu.stime_offset, which is used in the vcpu time structure to calculate the tsc_timestamp. So after updating stime_offset we need to propagate the change to vcpu_time in order for the guest to get the right time when using the PV clock. This was not done correctly, since in context_switch update_vcpu_system_time was called before vmx_do_resume,
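(For context, a small sketch of how a PV-clock guest turns the shared vcpu time structure into a timestamp, which is why a stale tsc_timestamp/system_time pair after changing stime_offset yields the wrong time. The struct reflects my reading of the shared layout and the helper is illustrative, not Xen code.)

/* Illustrative only: the guest-side PV clock calculation that consumes the
 * tsc_timestamp/system_time pair mentioned above.  A real reader also spins
 * on the version field to get a consistent snapshot; omitted here. */
#include <stdint.h>

struct vcpu_time_info_sketch {      /* assumed shared layout */
    uint32_t version;
    uint32_t pad0;
    uint64_t tsc_timestamp;         /* host TSC at the last hypervisor update */
    uint64_t system_time;           /* guest system time (ns) at that TSC */
    uint32_t tsc_to_system_mul;     /* fixed-point TSC->ns multiplier */
    int8_t   tsc_shift;
    int8_t   pad1[3];
};

static uint64_t pv_clock_now_ns(const struct vcpu_time_info_sketch *t,
                                uint64_t tsc_now)
{
    uint64_t delta = tsc_now - t->tsc_timestamp;

    /* Scale the TSC delta to nanoseconds: shift, then multiply and >>32. */
    if (t->tsc_shift >= 0)
        delta <<= t->tsc_shift;
    else
        delta >>= -t->tsc_shift;

    uint64_t delta_ns = (uint64_t)(((unsigned __int128)delta *
                                    t->tsc_to_system_mul) >> 32);

    return t->system_time + delta_ns;
}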
2012 Apr 26
3
[help]: VPID tagged TLBs question.
Hi, (Assume VPID is available and enabled.) I'm trying to figure out the TLB behaviour with VPIDs. I understand from the poorly written chapter in the Intel manual that if an HVM vcpu is running, then only the TLB entries tagged with the vcpu's VPID will be used. If Xen or a PV guest is running, then the VPID 0 TLB entries are what will be used. Now I understand that hvm_asid_flush_vcpu, upon a new guest cr3, will
2012 Oct 24
7
[PATCH 4/5] xen: arm: implement remap interfaces needed for privcmd mappings.
We use XENMEM_add_to_physmap_range which is the preferred interface for foreign mappings. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> --- arch/arm/include/asm/xen/interface.h | 1 + arch/arm/xen/enlighten.c | 100 +++++++++++++++++++++++++++++++++- arch/x86/include/asm/xen/interface.h | 1 + include/xen/interface/memory.h | 18 ++++++ 4 files changed,
2012 Aug 29
4
xen debugger (kdb/xdb/hdb) patch for c/s 25467
Hi Guys, Thanks for the interest in the Xen hypervisor debugger, previously known as kdb. By the way, I'm going to rename it to xdb (xen debugger) or hdb (hypervisor debugger); the name KDB confuses people with the Linux kdb debugger, and I often get emails from people who think they also need to apply the Linux kdb patch... Anyway, attached is a patch cleaned of the debug code that I accidentally left in
2013 Dec 13
4
Xen 4.4 development update: Is PVH a blocker?
This information will be mirrored on the Xen 4.4 Roadmap wiki page: http://wiki.xen.org/wiki/Xen_Roadmap/4.4 Our timeline had us start the code freeze last Friday. However, we have not released an RC0 because we have been waiting for PVH dom0 support. Adding bug fixes during RCs makes sense, but RC0 should contain all of the functionality we expect to be in the final release. PVH dom0 support
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64-bit PV guest with 64GB of RAM running a medium-to-heavy load on Xen 3.4.0, I am seeing a lot of soft lockups. The soft lockups cause the cluster FS to reboot dom0. The hardware has 256GB and 32 CPUs. Looking into the hypervisor through kdb, I see one CPU in sh_resync_all() while all the other 31 appear to be spinning on the shadow_lock. I vaguely remember seeing a thread on this while
2013 Jan 12
0
[RFC PATCH 4/16]: PVH xen: add params to read_segment_register
In this patch, we change read_segment_register to take vcpu and regs parameters for PVH (in upcoming patches). No functional change. Also, make emulate_privileged_op() public for later use. Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> diff -r 93d95f6dd693 -r 0339f85f6068 xen/arch/x86/domain.c --- a/xen/arch/x86/domain.c Fri Jan 11 16:22:57 2013 -0800 +++ b/xen/arch/x86/domain.c
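(A sketch of the interface change being described, not the actual Xen diff: the macro grows vcpu/regs parameters, unused for now, so a later PVH variant can pull the selector out of the saved register frame instead of the live segment register. The STR() helpers and "_old" name are mine.)

/* Sketch of the read_segment_register() signature change.  GNU statement
 * expressions and inline asm, as used in the Xen tree. */
#define STR_(x) #x
#define STR(x)  STR_(x)

/* Before: read the live selector directly from the named segment register. */
#define read_segment_register_old(name)                          \
({  unsigned short sel_;                                          \
    asm volatile ( "movw %%" STR(name) ",%0" : "=r" (sel_) );     \
    sel_;                                                         \
})

/* After: same behaviour for now, but vcpu/regs are available so a PVH
 * implementation can later return the selector saved in regs instead. */
#define read_segment_register(vcpu, regs, name)                   \
({  (void)(vcpu); (void)(regs);                                   \
    read_segment_register_old(name);                              \
})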
2008 Oct 07
6
A race condition introduced by changeset 15175: Re-init hypercall stubs page after HVM save/restore
For an SMP Linux HVM guest with PV drivers inserted, when we do a save/restore (or live migration) of the guest, it might panic after it's restored. The panic point is inside ap_suspend(): .... while (info->do_spin) { cpu_relax(); read_lock(&suspend_lock); HYPERVISOR_yield(); ----> guest might panic on the invocation of this function.