2010 Mar 11
1
[PATCH 1/2] xen: balloon: Fix checkpatch issues
drivers/xen/balloon.c:50: WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
drivers/xen/balloon.c:158: ERROR: else should follow close brace '}'
drivers/xen/balloon.c:277: ERROR: do not use assignment in if condition
drivers/xen/balloon.c:293: ERROR: code indent should use tabs where possible
drivers/xen/balloon.c:364: ERROR: that open brace { should be on the
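For reference, here is a minimal hypothetical fragment (not the actual drivers/xen/balloon.c code) illustrating the style these checkpatch complaints ask for: <linux/uaccess.h> instead of <asm/uaccess.h>, "else" following the closing brace, no assignment inside an if condition, and tab indentation.

#include <linux/mm.h>
#include <linux/uaccess.h>	/* preferred over <asm/uaccess.h> */

static struct page *grab_example_page(void);	/* hypothetical helper, for illustration only */

static int example_checkpatch_clean(struct page *page)
{
	struct page *p;

	if (page) {
		/* ... work with the page ... */
	} else {		/* "else" follows the closing brace */
		return -EINVAL;
	}

	p = grab_example_page();	/* no assignment inside the if condition */
	if (!p)
		return -ENOMEM;

	__free_page(p);
	return 0;
}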
2012 Apr 26
3
[help]: VPID tagged TLBs question.
Hi,
(Assume VPID is available and enabled.)
I'm trying to figure out the TLB behaviour with VPIDs. I understand from the
poorly written chapter in the Intel manual that if an HVM vcpu is running,
then only the TLB entries tagged with that vcpu's VPID will be used. If xen
or a PV guest is running, then the VPID 0 TLB entries are what will be used.
Now, I understand that hvm_asid_flush_vcpu upon a new guest cr3 will
2012 Aug 29
4
xen debugger (kdb/xdb/hdb) patch for c/s 25467
Hi Guys,
Thanks for the interest in the xen hypervisor debugger, previously known as
kdb. Btw, I'm going to rename it to xdb for xen-debugger or hdb for
hypervisor debugger. The name KDB confuses people with the Linux kdb debugger,
and I often get emails where people think they need to apply the Linux kdb
patch as well...
Anyway, attaching a patch that has been cleaned of the debug code I
accidentally left in
2013 Sep 23
57
[PATCH RFC v13 00/20] Introduce PVH domU support
This patch series is a reworking of a series developed by Mukesh
Rathor at Oracle. The entirety of the design and development was done
by him; I have only reworked, reorganized, and simplified things in a
way that I think makes more sense. The vast majority of the credit
for this effort therefore goes to him. This version is labelled v13
because it is based on his most recent series, v11.
2008 Oct 07
6
A race condition introduced by changeset 15175: Re-init hypercall stubs page after HVM save/restore
For an SMP Linux HVM guest with PV drivers inserted, when we do a save/restore (or live migration) of the guest, it might panic after it's restored.
The panic point is inside ap_suspend():
	....
	while (info->do_spin) {
		cpu_relax();
		read_lock(&suspend_lock);
		HYPERVISOR_yield();	/* <-- guest might panic on the invocation of this function */
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64bit PV guest with 64GB running a medium to heavy load
on xen 3.4.0, I am seeing a lot of soft lockups. The soft lockups are
causing the cluster FS to reboot dom0. The hardware has 256GB and 32
CPUs.
Looking into the hypervisor through kdb, I see one cpu in sh_resync_all()
while the other 31 appear to be spinning on the shadow_lock. I vaguely remember
seeing some thread on this while
2013 Nov 18
6
[PATCH RFC v2] pvh: clearly specify used parameters in vcpu_guest_context
The aim of this patch is to define a stable way in which PVH is
going to do AP bringup.
Since we are running inside an HVM container, PVH should only need
to set flags, cr3 and user_regs in order to bring up a vCPU; the rest
can be set once the vCPU is started, using the bare metal methods.
Additionally, the guest can also set cr0 and cr4, and those values
will be appended to the default values
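As a rough sketch of the bringup path described above, assuming the standard vcpu_guest_context fields from the public Xen headers (flags, user_regs, ctrlreg[], x86_64 register names) and the existing VCPUOP_initialise hypercall; the precise PVH semantics are what this patch defines, so treat this only as an illustration:

#include <linux/slab.h>
#include <xen/interface/xen.h>		/* struct vcpu_guest_context */
#include <xen/interface/vcpu.h>		/* VCPUOP_initialise */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_vcpu_op() */

/* Sketch: only flags, cr3 and user_regs are filled in; everything else is
 * set up by the guest itself once the vCPU is running, using the bare
 * metal methods. */
static int pvh_bring_up_ap(int cpu, unsigned long entry,
			   unsigned long stack, unsigned long cr3_gpa)
{
	struct vcpu_guest_context *ctxt;
	int rc;

	ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);	/* too large for the stack */
	if (!ctxt)
		return -ENOMEM;

	ctxt->flags = VGCF_in_kernel;
	ctxt->user_regs.rip = entry;	/* where the AP starts executing */
	ctxt->user_regs.rsp = stack;
	ctxt->ctrlreg[3] = cr3_gpa;	/* guest-physical address of the top-level page table */
	/* cr0/cr4 (ctrlreg[0]/ctrlreg[4]) may optionally be set as well. */

	rc = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt);
	kfree(ctxt);
	return rc;
}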
2013 Jan 19
21
[PATCH]: PVH: specify xen features strings cleanly for PVH
On Thu, 17 Jan 2013 22:22:47 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> Jan had some comments about that patch:
>
> https://patchwork.kernel.org/patch/1745041/
>
> Please fix it up so I can put it in the Linux tree.
Please see below.
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Thanks,
Mukesh
diff --git a/arch/x86/xen/xen-head.S
2012 Oct 24
7
[PATCH 4/5] xen: arm: implement remap interfaces needed for privcmd mappings.
We use XENMEM_add_to_physmap_range, which is the preferred interface
for foreign mappings.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
arch/arm/include/asm/xen/interface.h | 1 +
arch/arm/xen/enlighten.c | 100 +++++++++++++++++++++++++++++++++-
arch/x86/include/asm/xen/interface.h | 1 +
include/xen/interface/memory.h | 18 ++++++
4 files changed,
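As a rough, hedged sketch of how a caller uses this interface (the field and constant names here follow the later upstream include/xen/interface/memory.h as best I recall them, and may differ slightly from the version added by this patch):

#include <xen/interface/memory.h>	/* XENMEM_add_to_physmap_range */
#include <asm/xen/hypercall.h>		/* HYPERVISOR_memory_op() */

/* Map a batch of frames owned by a foreign domain into our own physmap:
 * idxs are frame numbers in the foreign domain, gpfns are the guest pfns
 * in the current domain where they should appear. */
static int map_foreign_frames(domid_t foreign_domid, xen_ulong_t *idxs,
			      xen_pfn_t *gpfns, unsigned int nr)
{
	struct xen_add_to_physmap_range xatpr = {
		.domid		= DOMID_SELF,
		.space		= XENMAPSPACE_gmfn_foreign,
		.size		= nr,
		.foreign_domid	= foreign_domid,
	};

	set_xen_guest_handle(xatpr.idxs, idxs);
	set_xen_guest_handle(xatpr.gpfns, gpfns);

	return HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatpr);
}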
2006 Feb 24
2
[PATCH][discuss] evtchn race condition
Keir,
The below/attached patch is necessary to allow SVM partitions to boot
unmodified guests with xen-unstable.hg c/s 8961. c/s 8822 and 8828
(some necessary evtchn modifications) cause SVM partitions to fail with
"lost interrupt" hda error during boot.
We currently do not understand why these modifications are necessary, and
in fact a race occurs with one part of the patch (added
2012 Mar 20
5
[hybrid]: hang in update_wall_time
Hi Ian/Stefano:
I changed over to the PV clock for hybrid like we talked about at the
hackathon. I still have the hang in update_wall_time() after dom0
switches to xen as its clocksource.
The source of the hang seems to be xen's stime_local_stamp in cpu_time, which
suddenly jumps to a large 64bit value. I've been chasing to figure out
where that happens, and why for the hybrid and not PV. It appears the
2013 Dec 06
36
[V6 PATCH 0/7]: PVH dom0....
Hi,
V6: The only change from V5 is in patch #6:
- changed comment to reflect autoxlate
- removed a redundant ASSERT
- reworked logic a bit so that get_page_from_gfn() is called with NULL
  for the p2m type as before; ARM has an ASSERT wanting it to be NULL.
Tim: patch 4 needs your approval.
Daniel: patch 5 needs your approval.
These patches implement PVH dom0.
Patches 1 and 2
2009 Jul 17
1
c/s 18634 causing save/restore problem
This is on xen 3.4.0. We discovered that the following would cause a PV guest (5u3)
to panic:
- xm save guest
- xm restore guest
- xm save guest again <-- panic in _FIRST_CPU+0X0/0X1A
We've narrowed it down to c/s 18634. Now we're trying to find out why that
change was made. Was it fixing some other bug?
Thanks,
Mukesh
2012 May 04
9
[hybrid]: unable to boot hvm due to eflags.ID
Hi guys,
At a loss trying to figure out why
if (has_eflag(X86_EFLAGS_ID))
returns false in my HVM domU. It is the standard has_eflag() function from
cpucheck.c, running in real mode. It works fine on PV dom0, but fails when
the guest is booting on my hybrid dom0.
LMK if you have any ideas. I'll keep digging in the manuals, but nothing so far.
thanks,
Mukesh
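For background, here is a minimal sketch of what the EFLAGS.ID probe boils down to (the real has_eflag() lives in the x86 boot code and runs in real mode, but the logic is the same): toggle bit 21 of EFLAGS and check whether the change sticks. If it does not stick, CPUID is reported as unsupported, which appears to be what the guest is seeing here.

#define X86_EFLAGS_ID	(1UL << 21)	/* EFLAGS bit 21: CPUID is supported */

static int cpu_has_eflag(unsigned long mask)
{
	unsigned long f0, f1;

	asm volatile("pushf\n\t"	/* save the original flags (restored by the final popf) */
		     "pushf\n\t"
		     "pop %0\n\t"	/* f0 = current flags */
		     "mov %0,%1\n\t"
		     "xor %2,%1\n\t"	/* f1 = flags with the probed bit toggled */
		     "push %1\n\t"
		     "popf\n\t"		/* try to write the toggled value */
		     "pushf\n\t"
		     "pop %1\n\t"	/* f1 = what actually stuck */
		     "popf"		/* restore the original flags */
		     : "=&r" (f0), "=&r" (f1)
		     : "ri" (mask));

	return !!((f0 ^ f1) & mask);	/* non-zero if the bit could be toggled */
}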
2011 Sep 01
3
DOM0 Hang on a large box....
Hi,
I'm looking at a system hang on a large box: 160 cpus, 2TB. Dom0 is
booted with 160 vcpus (don't ask me why :)), and an HVM guest is started
with over 1.5T RAM and 128 vcpus. The system hangs without much activity
after a couple of hours. Xen 4.0.2 and a 2.6.32 based 64bit dom0.
During the hang I discovered:
Most of dom0's vcpus are in double_lock_balance, spinning on one of the locks:
2009 Jan 31
2
Re: Debugging Xen via serial console
Hi,
kdb: to debug the xen hypervisor; it can also debug guests
gdbsx: to debug PV/HVM linux guests
The tree is: http://xenbits.xensource.com/ext/debuggers.hg
See README-dbg. You'll need to set up serial access for kdb.
Thanks,
Mukesh
>
> Hi Dan,
>
> I'm currently using your version of ssplitd as it is. I haven't tried
> kdb. For some reason I
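Regarding the serial access mentioned above: README-dbg in that tree has the authoritative instructions, but a typical generic setup (example values only, adjust the port and speed to your hardware) puts the serial console on the Xen command line and routes the dom0 console through Xen:

	# Xen command line (grub): serial console on COM1, 115200 8n1
	com1=115200,8n1 console=com1,vga

	# dom0 Linux command line: send the kernel console to the Xen console
	console=hvc0 earlyprintk=xen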
2012 Oct 11
14
alloc_heap_pages is low efficient with more CPUs
I am confused by a problem:
I have a blade with 64 physical CPUs and 64G of physical RAM, and defined only one VM with 1 CPU and 40G RAM.
The first time I started the VM it took just 3s, but the second start took 30s.
By studying it with log prints, I have located a place in the hypervisor that costs too much time,
accounting for 98% of the whole start time.
xen/common/page_alloc.c
2013 Jan 30
2
[PATCH] PVH: remove code to map iomem from guest
It was decided during xen patch review that xen should map the iomem
transparently, so remove xen_set_clr_mmio_pvh_pte() and the
PHYSDEVOP_map_iomem sub-hypercall.
---
arch/x86/xen/mmu.c | 14 --------------
arch/x86/xen/setup.c | 16 ++++------------
include/xen/interface/physdev.h | 10 ----------
3 files changed, 4 insertions(+), 36 deletions(-)
diff --git