similar to: [PATCH 4/4] HVM save/restore clean up: enable 64 guest on 64 HV

Displaying 11 results from an estimated 11 matches similar to: "[PATCH 4/4] HVM save/restore clean up: enable 64 guest on 64 HV"

2012 Apr 26
3
[help]: VPID tagged TLBs question.
Hi, (Assume VPID is available and enabled.) I'm trying to figure out the TLB behaviour with VPIDs. I understand from the (poorly written) chapter in the Intel manual that if an HVM vcpu is running, only the TLB entries tagged with that vcpu's VPID will be used. If Xen or a PV guest is running, then the VPID 0 entries are what will be used. Now I understand that hvm_asid_flush_vcpu, upon a new guest CR3, will
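To make the flushing model above concrete, here is a minimal C sketch of the generation-counter scheme a hypervisor can use for VPIDs/ASIDs: an hvm_asid_flush_vcpu-style flush only invalidates the vCPU's tag, so a fresh VPID is handed out at the next VM entry and the stale TLB entries simply never match again. All names, types, and the VPID space size below are illustrative assumptions, not Xen's actual code.

/* Illustrative sketch only -- not Xen's asid.c.  Instead of flushing the TLB
 * on every guest CR3 change, the vCPU's tag is invalidated so a fresh VPID is
 * assigned at the next VM entry; hardware then never matches entries tagged
 * with the old VPID. */

#include <stdint.h>
#include <stdbool.h>

#define MAX_VPID 4096          /* hypothetical per-core VPID space */

struct vcpu_asid {
    uint64_t generation;       /* which allocation round this VPID came from */
    uint32_t vpid;             /* tag programmed into the VMCS */
};

struct core_asid {
    uint64_t generation;       /* bumped whenever the VPID space wraps */
    uint32_t next_vpid;        /* next free tag; 0 is reserved for Xen/PV */
};

static struct core_asid core = { .generation = 1, .next_vpid = 1 };

/* Analogue of hvm_asid_flush_vcpu(): mark the tag stale, don't touch the TLB. */
static void asid_flush_vcpu(struct vcpu_asid *a)
{
    a->generation = 0;         /* 0 never matches core.generation */
}

/* Called on the VM-entry path: returns true if a full flush of tagged
 * entries is needed because the whole VPID space was recycled. */
static bool asid_handle_vmenter(struct vcpu_asid *a)
{
    if (a->generation == core.generation)
        return false;                    /* tag still valid */

    if (core.next_vpid == MAX_VPID) {    /* ran out: recycle the space */
        core.generation++;
        core.next_vpid = 1;
        a->vpid = core.next_vpid++;
        a->generation = core.generation;
        return true;                     /* caller must flush tagged entries */
    }

    a->vpid = core.next_vpid++;
    a->generation = core.generation;
    return false;
}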
2008 Apr 21
1
[PATCH] x86-64: emulation support for cmpxchg16b
With the x86 instruction emulator now pretty much complete, I'd like to re-submit this patch to support cmpxchg16b on x86-64 and at the same time rename the underlying emulator callback function pointer (making clear that, if implemented, it is to operate on two longs rather than two 32-bit values). It also fixes an apparently wrong emulator context initialization in the shadow code.
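For reference, the instruction being emulated is a 16-byte compare-and-exchange. Below is a self-contained GCC/Clang inline-asm wrapper for it on x86-64 that illustrates the two-longs-wide operand the patch refers to; it is only a sketch, not the emulator callback from the patch, and the wrapper name and signature are mine.

/* Atomically compare the 16 bytes at *ptr with old_hi:old_lo and, if equal,
 * store new_hi:new_lo.  Returns true on success; on failure the current
 * memory value is written back into old_lo/old_hi (that is what cmpxchg16b
 * leaves in rdx:rax).  Requires a 16-byte aligned pointer and GCC 6+/Clang
 * for the "=@ccz" flag-output constraint. */

#include <stdbool.h>
#include <stdint.h>

static inline bool cmpxchg16b(volatile void *ptr,
                              uint64_t *old_lo, uint64_t *old_hi,
                              uint64_t new_lo, uint64_t new_hi)
{
    bool ok;
    __asm__ __volatile__ (
        "lock; cmpxchg16b %[mem]"
        : "=@ccz" (ok),
          [mem] "+m" (*(volatile unsigned __int128 *)ptr),
          "+a" (*old_lo), "+d" (*old_hi)
        : "b" (new_lo), "c" (new_hi)
        : "memory");
    return ok;
}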
2012 Sep 11
0
[PATCH 1/3] x86/hvm: don't use indirect calls without need
Direct calls perform better, so we should prefer them and use indirect ones only when there is a genuine need for indirection. Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -1373,7 +1373,7 @@ void error_interrupt(struct cpu_user_reg
 void pmu_apic_interrupt(struct cpu_user_regs *regs)
 {
     ack_APIC_irq();
-
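The (truncated) hunk above replaces a call through a hook pointer with a direct call. A self-contained before/after sketch of that pattern follows; the handler name, the stubbed struct, and the hook variable are stand-ins for illustration, not the actual Xen code from the patch.

#include <stdio.h>

struct cpu_user_regs { int dummy; };     /* stand-in for the real struct */

/* The single real handler (stubbed out here). */
static void vpmu_do_interrupt(struct cpu_user_regs *regs)
{
    (void)regs;
    puts("pmu interrupt handled");
}

/* Before: dispatch through a function pointer even though only one
 * implementation exists -- an indirect call the CPU must predict. */
static void (*pmu_hook)(struct cpu_user_regs *) = vpmu_do_interrupt;

static void pmu_apic_interrupt_indirect(struct cpu_user_regs *regs)
{
    pmu_hook(regs);
}

/* After: call the handler directly; cheaper, and the compiler may inline it. */
static void pmu_apic_interrupt_direct(struct cpu_user_regs *regs)
{
    vpmu_do_interrupt(regs);
}

int main(void)
{
    struct cpu_user_regs regs = { 0 };
    pmu_apic_interrupt_indirect(&regs);
    pmu_apic_interrupt_direct(&regs);
    return 0;
}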
2011 May 06
14
[PATCH 0 of 4] Use superpages on restore/migrate
This patch series restores the use of superpages when restoring or migrating a VM, while retaining efficient batching of 4k pages when superpages are not appropriate or available. Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
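A hedged sketch of the restore-side decision this description implies: gather incoming pfns into a batch and back it with one superpage only when the batch is a fully aligned, contiguous 2MB run; otherwise fall back to the ordinary 4k batching path. The helper names and allocation stubs are hypothetical, not the libxc code.

/* Illustrative sketch only -- not the actual restore/migrate code. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define SUPERPAGE_NR_PFNS 512                 /* 2MB / 4KB */

/* Hypothetical allocation hooks standing in for the real populate calls. */
static int alloc_superpage(uint64_t base_pfn) { (void)base_pfn; return 0; }
static int alloc_4k_batch(const uint64_t *pfns, size_t count)
{ (void)pfns; (void)count; return 0; }

/* True if pfns[0..count) is an aligned run of 512 consecutive frames. */
static bool batch_is_superpage(const uint64_t *pfns, size_t count)
{
    if (count != SUPERPAGE_NR_PFNS || (pfns[0] % SUPERPAGE_NR_PFNS) != 0)
        return false;
    for (size_t i = 1; i < count; i++)
        if (pfns[i] != pfns[0] + i)
            return false;
    return true;
}

/* Use a superpage when the whole batch allows it, else batch 4k pages. */
static int populate_batch(const uint64_t *pfns, size_t count)
{
    return batch_is_superpage(pfns, count) ? alloc_superpage(pfns[0])
                                           : alloc_4k_batch(pfns, count);
}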
2009 Oct 23
11
soft lockups during live migrate..
Trying to migrate a 64-bit PV guest with 64GB of memory, running medium to heavy load on Xen 3.4.0, shows a lot of soft lockups. The soft lockups are causing dom0 to be rebooted by the cluster FS. The hardware has 256GB and 32 CPUs. Looking into the hypervisor through kdb, I see one CPU in sh_resync_all() while the other 31 appear to be spinning on the shadow_lock. I vaguely remember seeing some thread on this while
2007 Apr 13
18
A different problem with save/restore on C/S 14823.
I'm not seeing the problem that Fan Zhao is reporting; instead I get this one. Not sure if it's the same one or a different problem... This happens with my simple guest [i.e. not using hvmloader, as I described before]. This worked fine yesterday.
(XEN) event_channel.c:178:d0 EVTCHNOP failure: domain 0, error -22, line 178
(XEN) bad shared page: 0
(XEN) domain_crash_sync called
2012 Mar 01
14
[PATCH 0 of 3] RFC Paging support for AMD NPT V2
There has been some progress, but still no joy. Definitely not intended for inclusion at this point. Tim, Wei, I added a Xen command line toggle to disable IOMMU and P2M table sharing. Tim, I verified that changes to p2m-pt.c don't break shadow mode (64-bit hypervisor and Win 7 guest). Hongkaixing, I incorporated your suggestion in patch 2, so I should add your Signed-off-by eventually.
2007 Mar 20
62
RFC: [0/2] Remove netloop by lazy copying in netback
Hi Keir: These two patches remove the need for netloop by performing the copy in netback, and only when it is necessary. The rationale is that most packets will be processed without delay, allowing them to be freed without any copying at all. So instead of copying every packet destined for dom0, we'll only copy those that linger longer than a specified amount of time (currently 0.5s). As it
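The "copy only if it lingers" idea can be sketched as follows: packets handed to dom0 keep referencing the guest's pages, and a periodic timer copies anything still outstanding after a threshold (0.5s, mirroring the value quoted above) so the foreign pages can be released. This is a simplified user-space illustration with invented names, not netback's real code.

#include <stdlib.h>
#include <string.h>
#include <time.h>

#define COPY_TIMEOUT_MS 500            /* the 0.5s threshold quoted above */

struct pending_pkt {
    struct pending_pkt *next;
    void *data;                        /* points into the guest's page... */
    size_t len;
    struct timespec queued;            /* ...since this time */
    int copied;                        /* 1 once we own a private copy */
};

static long elapsed_ms(const struct timespec *then, const struct timespec *now)
{
    return (now->tv_sec - then->tv_sec) * 1000 +
           (now->tv_nsec - then->tv_nsec) / 1000000;
}

/* Periodic timer: copy any packet that has lingered too long, so the
 * foreign page it references can be returned to the guest. */
static void copy_timer(struct pending_pkt *list)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    for (struct pending_pkt *p = list; p; p = p->next) {
        if (p->copied || elapsed_ms(&p->queued, &now) < COPY_TIMEOUT_MS)
            continue;
        void *priv = malloc(p->len);
        if (!priv)
            continue;                  /* try again on the next tick */
        memcpy(priv, p->data, p->len);
        p->data = priv;                /* original guest page is now free */
        p->copied = 1;
    }
}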
2013 Dec 06
36
[V6 PATCH 0/7]: PVH dom0....
Hi, V6: The only change from V5 is in patch #6:
 - changed comment to reflect autoxlate
 - removed a redundant ASSERT
 - reworked logic a bit so that get_page_from_gfn() is called with NULL for p2m type as before; ARM has an ASSERT wanting it to be NULL.
Tim: patch 4 needs your approval. Daniel: patch 5 needs your approval. These patches implement PVH dom0. Patches 1 and 2
2010 Nov 19
5
[PATCH 1/1] Ocfs2: Teach 'coherency=full' O_DIRECT writes to correctly up_read i_alloc_sem.
The former logic of ocfs2_file_aio_write() was a bit tricky about unlocking rw_lock and i_alloc_sem: it used some private bits in struct 'iocb' to communicate with ocfs2_dio_end_io(). That worked before we introduced the patch supporting the 'coherency=full,buffered' option, since rw_lock and i_alloc_sem were never both acquired at the same time, no matter whether we were doing buffered or direct IO or
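The general pattern being described can be sketched as: the write path takes the locks, records which ones it holds in flags attached to the request, and the asynchronous completion handler (the analogue of ocfs2_dio_end_io()) releases exactly those, which is where i_alloc_sem was missing its up_read(). The sketch below only illustrates the bookkeeping; the flag names and struct are mine, pthread rwlocks stand in for the kernel's rw_semaphores (which, unlike pthread rwlocks, permit release from the completion context), and none of this is the actual OCFS2 code.

#include <pthread.h>

#define DIO_HOLDS_RW_LOCK     0x1      /* hypothetical flag bits */
#define DIO_HOLDS_ALLOC_SEM   0x2

struct dio_request {
    unsigned int lock_flags;           /* plays the role of the iocb's private bits */
    pthread_rwlock_t *rw_lock;
    pthread_rwlock_t *i_alloc_sem;     /* stand-in for the inode's i_alloc_sem */
};

/* Submission path: take both locks for an O_DIRECT write under
 * coherency=full and remember that fact in the request. */
static void dio_submit(struct dio_request *req)
{
    pthread_rwlock_rdlock(req->rw_lock);
    req->lock_flags |= DIO_HOLDS_RW_LOCK;

    pthread_rwlock_rdlock(req->i_alloc_sem);
    req->lock_flags |= DIO_HOLDS_ALLOC_SEM;

    /* ... queue the direct I/O; completion fires later ... */
}

/* Completion path: drop whatever the submitter left held -- the bug being
 * fixed was i_alloc_sem not getting its release here. */
static void dio_end_io(struct dio_request *req)
{
    if (req->lock_flags & DIO_HOLDS_ALLOC_SEM) {
        pthread_rwlock_unlock(req->i_alloc_sem);
        req->lock_flags &= ~DIO_HOLDS_ALLOC_SEM;
    }
    if (req->lock_flags & DIO_HOLDS_RW_LOCK) {
        pthread_rwlock_unlock(req->rw_lock);
        req->lock_flags &= ~DIO_HOLDS_RW_LOCK;
    }
}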
2013 Sep 23
57
[PATCH RFC v13 00/20] Introduce PVH domU support
This patch series is a reworking of a series developed by Mukesh Rathor at Oracle. The entirety of the design and development was done by him; I have only reworked, reorganized, and simplified things in a way that I think makes more sense. The vast majority of the credit for this effort therefore goes to him. This version is labelled v13 because it is based on his most recent series, v11.