similar to: [PATCH 4/5] xen: arm: implement remap interfaces needed for privcmd mappings.

Displaying 19 results from an estimated 600 matches similar to: "[PATCH 4/5] xen: arm: implement remap interfaces needed for privcmd mappings."

2012 Oct 04
49
[RFC 00/14] arm: implement ballooning and privcmd foreign mappings based on x86 PVH
This series implements ballooning for Xen on ARM and builds on Mukesh's PVH privcmd stuff to implement foreign page mapping on ARM, replacing the old "HACK: initial (very hacky) XENMAPSPACE_gmfn_foreign" patch. The baseline is a bit complex: it is basically Stefano's xenarm-forlinus branch (commit bbd6eb29214e) merged with Konrad's linux-next-pvh branch
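For readers skimming the results: "foreign mapping" here means a privileged domain mapping another guest's pages through privcmd. A minimal consumer sketch using the libxc call of that era, xc_map_foreign_bulk(); the surrounding function, dump_foreign_page(), is illustrative and not part of the series:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Illustrative sketch: map one page of another domain read-only
     * through privcmd, print a couple of bytes, then unmap. */
    int dump_foreign_page(uint32_t domid, xen_pfn_t gpfn)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int err = 0;
        void *va;

        if (!xch)
            return -1;

        /* This call ends up in the arch remap interface the series adds. */
        va = xc_map_foreign_bulk(xch, domid, PROT_READ, &gpfn, &err, 1);
        if (!va || err) {
            xc_interface_close(xch);
            return -1;
        }

        printf("first bytes: %02x %02x\n",
               ((unsigned char *)va)[0], ((unsigned char *)va)[1]);

        munmap(va, 4096);   /* page size assumed 4K for the sketch */
        xc_interface_close(xch);
        return 0;
    }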
2013 Dec 06
36
[V6 PATCH 0/7]: PVH dom0....
Hi, V6: The only change from V5 is in patch #6: - changed comment to reflect autoxlate - removed a redundant ASSERT - reworked logic a bit so that get_page_from_gfn() is called with NULL for the p2m type, as before. ARM has an ASSERT wanting it to be NULL. Tim: patch 4 needs your approval. Daniel: patch 5 needs your approval. These patches implement PVH dom0. Patches 1 and 2
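For context, the get_page_from_gfn() convention being discussed looks roughly like this inside Xen; a hedged sketch against the hypervisor internals of the time (touch_gfn() is a made-up caller, not from the series):

    #include <xen/sched.h>
    #include <xen/mm.h>

    /* Sketch: a caller that does not need the p2m type passes NULL,
     * which is the form the ARM get_page_from_gfn() ASSERTs on. */
    static int touch_gfn(struct domain *d, unsigned long gfn)
    {
        struct page_info *pg = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);

        if ( !pg )
            return -EINVAL;
        /* ... operate on the page ... */
        put_page(pg);
        return 0;
    }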
2013 Jan 19
21
[PATCH]: PVH: specify xen features strings cleanly for PVH
On Thu, 17 Jan 2013 22:22:47 -0500 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote: > Jan had some comments about that patch: > > https://patchwork.kernel.org/patch/1745041/ > > Please fix it up so I can put it in the Linux tree. Please see below. Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com> Thanks, Mukesh diff --git a/arch/x86/xen/xen-head.S
2012 Apr 26
3
[help]: VPID tagged TLBs question.
Hi, (Assume VPID is available and enabled.) I'm trying to figure out the TLB stuff with VPIDs. I understand from the poorly written chapter in the intel manual that if an HVM vcpu is running then only the TLBs tagged with the vcpu.VPID will be used. If xen or a PV guest is running, then VPID 0 TLBs are what will be used. Now I understand the hvm_asid_flush_vcpu upon new guest cr3, will
2008 Jun 19
0
[PATCH] ia64/xen: introduce definitions necessary for ia64/xen hypercalls.
import include/asm-ia64/xen/interface.h to introduce definitions necessary for ia64/xen hypercalls. They are basic structures to communicate with the xen hypervisor and will be used later. Cc: Robin Holt <holt at sgi.com> Cc: Jeremy Fitzhardinge <jeremy at goop.org> Signed-off-by: Isaku Yamahata <yamahata at valinux.co.jp> Cc: "Luck, Tony" <tony.luck at
2012 Aug 29
4
xen debugger (kdb/xdb/hdb) patch for c/s 25467
Hi Guys, Thanks for the interest in the xen hypervisor debugger, prev known as kdb. Btw, I'm gonna rename it to xdb for xen-debugger or hdb for hypervisor debugger. KDB is confusing people with the linux kdb debugger and I often get emails where people think they need to apply the linux kdb patch also... Anyways, attaching a patch cleaned of the debug code that I accidentally left in
2012 Mar 20
5
[hybrid]: hang in update_wall_time
Hi Ian/Stefano: I changed over to the PV clock for hybrid like we talked at the hackathon. I still have the hang in update_wall_time() after dom0 switches to xen as clocksource. The source of the hang seems to be xen's stime_local_stamp in cpu_time, which suddenly jumps to a large 64bit value. I've been chasing to figure out where that happens, and why for the hybrid and not PV. It appears the
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64bit PV guest with 64GB running medium to heavy load on xen 3.4.0, it is showing a lot of soft lockups. The soft lockups are causing dom0 reboots by the cluster FS. The hardware has 256GB and 32 CPUs. Looking into the hypervisor thru kdb, I see one cpu in sh_resync_all() while all other 31 appear to be spinning on the shadow_lock. I vaguely remember seeing some thread on this while
2008 Oct 07
6
A race condition introduced by changeset 15175: Re-init hypercall stubs page after HVM save/restore
For an SMP Linux HVM guest with PV drivers inserted, when we do save/restore (or LiveMigration) for the guest, it might panic after it's restored. The panic point is inside ap_suspend():

    ....
    while (info->do_spin) {
        cpu_relax();
        read_lock(&suspend_lock);
        HYPERVISOR_yield();    ----> guest might panic on the invocation of this function.
2013 Nov 18
6
[PATCH RFC v2] pvh: clearly specify used parameters in vcpu_guest_context
The aim of this patch is to define a stable way in which PVH is going to do AP bringup. Since we are running inside an HVM container, PVH should only need to set flags, cr3 and user_regs in order to bring up a vCPU; the rest can be set once the vCPU is started, using the bare metal methods. Additionally, the guest can also set cr0 and cr4, and those values will be appended to the default values
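In code, the contract being nailed down is that a PVH guest fills only a few vcpu_guest_context fields before VCPUOP_initialise. A hedged sketch of the guest side (64-bit field names from the public headers; pvh_bring_up_vcpu() and its values are illustrative):

    #include <xen/interface/xen.h>
    #include <xen/interface/vcpu.h>

    /* Sketch: only flags, cr3 and user_regs need to be valid for PVH
     * AP bringup; the HVM container supplies defaults for the rest. */
    static int pvh_bring_up_vcpu(int cpu, unsigned long entry,
                                 unsigned long stack, unsigned long cr3_gpa)
    {
        struct vcpu_guest_context ctxt = { 0 };

        ctxt.flags = VGCF_in_kernel;
        ctxt.user_regs.rip = entry;    /* AP entry point */
        ctxt.user_regs.rsp = stack;    /* initial stack */
        ctxt.ctrlreg[3] = cr3_gpa;     /* guest-physical cr3 under PVH */

        return HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, &ctxt);
    }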
2012 May 04
9
[hybrid]: unable to boot hvm due to eflags.ID
Hi guys, At a loss trying to figure out why if (has_eflag(X86_EFLAGS_ID)) returns false in my HVM domU. Standard function has_eflag() in cpucheck.c running in real mode. Works fine on PV dom0, but fails when the guest is booting on my hybrid dom0. LMK if any ideas. I'll keep digging in the manuals, but nothing so far. thanks, Mukesh
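For anyone unfamiliar with the check: CPUID is present iff bit 21 of EFLAGS (the ID flag) can be toggled, and has_eflag() boils down to something like the sketch below (a 64-bit userspace rendition; the real cpucheck.c code runs in 16-bit real mode):

    #include <stdbool.h>

    #define X86_EFLAGS_ID (1UL << 21)

    /* Return true if the given EFLAGS bit can be toggled; used to
     * detect CPUID support via EFLAGS.ID.  Illustrative sketch. */
    static bool has_eflag(unsigned long mask)
    {
        unsigned long f0, f1;

        asm volatile("pushf    \n\t"   /* save original flags */
                     "pushf    \n\t"
                     "pop %0   \n\t"   /* f0 = current EFLAGS */
                     "mov %0,%1\n\t"
                     "xor %2,%1\n\t"   /* flip the bit        */
                     "push %1  \n\t"
                     "popf     \n\t"   /* try to write it back */
                     "pushf    \n\t"
                     "pop %1   \n\t"   /* f1 = what stuck     */
                     "popf"            /* restore original    */
                     : "=&r" (f0), "=&r" (f1)
                     : "ri" (mask));

        return !!((f0 ^ f1) & mask);
    }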
2012 Aug 10
18
[PATCH v2 0/5] ARM hypercall ABI: 64 bit ready
Hi all, this patch series makes the necessary changes to make sure that the current ARM hypercall ABI can be used as-is on 64 bit ARM platforms: - it defines xen_ulong_t as uint64_t on ARM; - it introduces a new macro to handle guest pointers, called XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on 32-bit ARM and is going to have size 8 bytes on aarch64); - it replaces all the occurrences of
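The first two items above amount to roughly the following in the public headers; a simplified sketch (the real DEFINE_XEN_GUEST_HANDLE machinery generates per-type handle typedefs rather than a bare union):

    #include <stdint.h>

    /* Fixed-width "long" so 32-bit and 64-bit ARM guests share one
     * hypercall ABI. */
    typedef uint64_t xen_ulong_t;

    /* Guest pointers passed in hypercall arguments keep their native
     * size (4 bytes on 32-bit ARM, 8 on aarch64), so they get their
     * own handle type distinct from the fixed-width fields.  Sketch. */
    #define XEN_GUEST_HANDLE_PARAM(type) union { type *p; unsigned long q; }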
2013 Sep 23
57
[PATCH RFC v13 00/20] Introduce PVH domU support
This patch series is a reworking of a series developed by Mukesh Rathor at Oracle. The entirety of the design and development was done by him; I have only reworked, reorganized, and simplified things in a way that I think makes more sense. The vast majority of the credit for this effort therefore goes to him. This version is labelled v13 because it is based on his most recent series, v11.
2010 May 21
10
increase evtchn limits
Hi, I'm trying to boot up with a lot more than 32 vcpus on this very large box. I overcame vcpu_info[MAX_VIRT_CPUS] by doing the vcpu placement hypercall in the guest, but now I'm running into the event channel limit (lots of devices): unsigned long evtchn_pending[sizeof(unsigned long) * 8]; which limits to 512 max for my 64bit dom0. The only recourse seems to be to create a new struct shared_info_v2{},
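The ceiling comes straight from the size of that fixed bitmap in struct shared_info; the arithmetic, as a standalone sketch (the exact per-guest limit also depends on how evtchn_pending_sel indexes the words):

    #include <stdio.h>

    /* shared_info declares:
     *   unsigned long evtchn_pending[sizeof(unsigned long) * 8];
     * i.e. 8*W words of 8*W bits each for word size W, one bit per
     * event channel.  Sketch of the resulting bound. */
    int main(void)
    {
        unsigned long w = sizeof(unsigned long);   /* 8 on a 64-bit build */
        unsigned long words = w * 8;               /* 64 array entries    */
        unsigned long bits  = words * w * 8;       /* 4096 channel bits   */

        printf("evtchn_pending: %lu words, %lu bits\n", words, bits);
        return 0;
    }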
2013 Dec 04
5
[PATCH] arm: xen: foreign mapping PTEs are special.
These mappings are in fact special and require special handling in privcmd, which already exists. Failure to mark the PTE as special on arm64 causes all sorts of bad PTE fun. x86 already gets this correct. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Cc: xen-devel@lists.xenproject.org --- arch/arm/xen/enlighten.c |
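The fix being described boils down to constructing the foreign-mapping PTE with pte_mkspecial() in the privcmd remap path; a hedged sketch of the kernel-side pattern (struct remap_data here is trimmed to what the sketch needs, not the tree's exact layout):

    #include <linux/mm.h>

    /* Minimal stand-in for the real remap bookkeeping. */
    struct remap_data {
        struct mm_struct *mm;   /* address space being populated */
        unsigned long pfn;      /* next local frame backing the mapping */
        pgprot_t prot;          /* vma protection bits */
    };

    /* Sketch: build the PTE as special so core mm code (GUP, fork COW
     * handling, rmap) never treats the foreign page as a normal one. */
    static int remap_pte_fn(pte_t *ptep, pgtable_t token,
                            unsigned long addr, void *data)
    {
        struct remap_data *info = data;
        pte_t pte = pte_mkspecial(pfn_pte(info->pfn++, info->prot));

        set_pte_at(info->mm, addr, ptep, pte);
        return 0;
    }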
2011 Sep 01
3
DOM0 Hang on a large box....
Hi, I'm looking at a system hang on a large box: 160 cpus, 2TB. Dom0 is booted with 160 vcpus (don't ask me why :)), and an HVM guest is started with over 1.5T RAM and 128 vcpus. The system hangs without much activity after a couple of hours. Xen 4.0.2 and 2.6.32 based 64bit dom0. During the hang I discovered: Most of dom0 vcpus are in double_lock_balance spinning on one of the locks:
2013 Jan 30
2
[PATCH] PVH: remove code to map iomem from guest
It was decided during xen patch review that xen map the iomem transparently, so remove xen_set_clr_mmio_pvh_pte() and the sub hypercall PHYSDEVOP_map_iomem. ---

    arch/x86/xen/mmu.c              | 14 --------------
    arch/x86/xen/setup.c            | 16 ++++------------
    include/xen/interface/physdev.h | 10 ----------
    3 files changed, 4 insertions(+), 36 deletions(-)

diff --git
2013 May 16
5
xc_map_foreign_bulk() memory leak in ARM version?
Hi Xen folks! I've run into one strange thing in the ARM version of Xen: when I use xc_map_foreign_bulk() to map some memory from domU to dom0, after unmap() of the previously returned address the memory is not freed at all. Let's look at the call stack:

    xc_map_foreign() ->
      linux_privcmd_map_foreign_bulk() -> {
        addr = mmap(fd);
        ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2 );
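Done by hand, the quoted path corresponds to roughly this userspace sequence (raw privcmd sketch; header path and device node as on Linux of that era, map_and_unmap() itself is hypothetical). The leak report is about the kernel side failing to release the backing pages at the munmap() step:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xen/sys/privcmd.h>   /* privcmd_mmapbatch_v2 et al. */

    /* Reserve VA against privcmd, ask the kernel to back it with one
     * foreign page, then unmap.  On ARM the report is that the final
     * munmap() does not free the page.  Illustrative sketch. */
    int map_and_unmap(uint16_t dom, uint64_t gpfn)
    {
        int fd = open("/dev/xen/privcmd", O_RDWR);
        xen_pfn_t pfn = gpfn;
        int err = 0;
        void *addr;
        struct privcmd_mmapbatch_v2 batch;

        if (fd < 0)
            return -1;

        addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) {
            close(fd);
            return -1;
        }

        batch.num  = 1;
        batch.dom  = dom;
        batch.addr = (uint64_t)(uintptr_t)addr;
        batch.arr  = &pfn;
        batch.err  = &err;
        ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2, &batch);

        munmap(addr, 4096);   /* the step that should free the page */
        close(fd);
        return 0;
    }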
2009 Jan 31
2
Re: Debugging Xen via serial console
Hi, kdb: to debug the xen hypervisor, could also debug guests. gdbsx: to debug PV/HVM linux guests. The tree is: http://xenbits.xensource.com/ext/debuggers.hg See README-dbg. You'll need to setup serial access for kdb. Thanks, Mukesh > > Hi Dan, > > I'm currently using your version of ssplitd as it is. I haven't tried > kdb. For some reason I