
Displaying 20 results from an estimated 800 matches similar to: "[PATCH] Make XEN_DOMCTL_destroydomain hypercall continuable."

2008 Nov 25
7
when timer go back in dom0 save and restore or migrate, PV domain hung
Hi, I find the PV domain hangs when we take these steps: 1, save the PV domain; 2, change the system time of the PV domain back; 3, restore the PV domain. Or: 1, migrate a PV domain from Machine A to Machine B; 2, the system time of Machine B is slower than that of Machine A. The problem is that wc_sec will be changed when the system time is changed in dom0 or restore in a
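
A minimal sketch (not the actual guest or hypervisor code) of how a PV guest derives wall-clock time: the boot-time wall clock Xen exports in shared_info (wc_sec/wc_nsec) plus the per-VCPU system time in nanoseconds since domain start. If dom0's wall clock moves backwards across a save/restore, the sum computed by the guest jumps backwards too; all names below are illustrative.

    #include <stdint.h>

    struct wallclock { uint64_t sec; uint32_t nsec; };

    /* Guest wall-clock time = boot-time wall clock exported by Xen
     * (wc_sec/wc_nsec) + nanoseconds of system time since domain start. */
    static struct wallclock pv_wallclock(uint32_t wc_sec, uint32_t wc_nsec,
                                         uint64_t system_time_ns)
    {
        struct wallclock now;
        uint64_t ns = (uint64_t)wc_nsec + system_time_ns;

        now.sec  = (uint64_t)wc_sec + ns / 1000000000ULL;
        now.nsec = (uint32_t)(ns % 1000000000ULL);
        return now;
    }
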
2012 Dec 12
2
[PATCH v7 1/2] xen: unify domain locking in domctl code
These two patches were originally part of the XSM series that I have posted, and remain prerequisites for that series. However, they are independent of the XSM changes and are a useful simplification regardless of the use of XSM. The Acked-bys on these patches were provided before rebasing them over the copyback changes in 26268:1b72138bddda, which had minor conflicts that I resolved. [PATCH
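
As context for what "unifying domain locking" means here, the sketch below shows the general pattern of doing the target-domain lookup and locking once in a shared helper instead of in every domctl sub-op; the types and helper names are placeholders, not Xen's real API.

    #include <stddef.h>
    #include <errno.h>

    struct domain { int domid; int locked; };

    static struct domain domain_table[4];   /* toy stand-in for Xen's list */

    /* Placeholder for the real lookup-and-lock helper. */
    static struct domain *get_locked_domain(int domid)
    {
        if ( domid < 0 || domid >= 4 )
            return NULL;
        domain_table[domid].locked = 1;
        return &domain_table[domid];
    }

    static void put_domain(struct domain *d) { d->locked = 0; }

    /* One shared prologue/epilogue; each domctl sub-op only supplies 'op'. */
    static int do_one_domctl(int domid, int (*op)(struct domain *))
    {
        struct domain *d = get_locked_domain(domid);
        int rc;

        if ( d == NULL )
            return -ESRCH;
        rc = op(d);
        put_domain(d);
        return rc;
    }
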
2011 Sep 08
5
[PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable
The following two patches allow the parallel use of memsharing, xenpaging and xen-access by using an independent ring buffer for each feature. Please review. v2: - update mem_event_check_ring arguments, check domain rather than domain_id - check ring_full first because its value was just evaluated - check if ring buffer is initialized before calling mem_access_domctl/mem_paging_domctl
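
A small illustrative sketch of the guard mentioned above (checking that a feature's ring buffer is initialized before dispatching its domctl); the struct and names are stand-ins, not the real mem_event code.

    #include <errno.h>
    #include <stddef.h>

    /* Illustrative per-feature ring state; ring_page stays NULL until the
     * toolstack has set the ring up. */
    struct mem_event_ring_sketch {
        void *ring_page;
        unsigned int full;
    };

    static int mem_access_domctl_guarded(struct mem_event_ring_sketch *ring)
    {
        if ( ring->ring_page == NULL )
            return -ENODEV;      /* ring not initialised: refuse the op */
        /* ... dispatch the real mem_access operation here ... */
        return 0;
    }
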
2009 Jan 14
5
[PATCH] Support cross-bitness guest when core-dumping
This patch allows core-dumping to work on a cross-bit host/guest configuration, whereas previously that was not supported. It supports both PV and FV guests. The core file format generated by the host needs to match that of the guest, so an alignment issue is addressed, and the p2m frame list handling is done according to the guest size. Signed-off-by: Bruce Rogers
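
An illustrative fragment of the kind of guest-size dependence such a patch has to handle: the dumper must size p2m frame-list entries by the guest's word width rather than the host's. The parameter below is a stand-in, not the patch's actual interface.

    #include <stdint.h>
    #include <stddef.h>

    /* A 32-bit guest stores 4-byte MFNs in its p2m frame list, a 64-bit
     * guest 8-byte ones; the dumper must use the guest's width, never its
     * own build width. */
    static size_t p2m_frame_entry_size(unsigned int guest_width_bytes)
    {
        return (guest_width_bytes == 4) ? sizeof(uint32_t) : sizeof(uint64_t);
    }
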
2010 Jan 09
3
101th domU fails to start with "SETVCPUCONTEXT failed"
Hello there, We (a small hosting community) are running a steadily growing number of Xen domUs on a quad dual-core Xeon server with 64GB RAM. We've got 100 running domUs at the moment. Trying to create a new one results in this error: Error: (1, 'Internal error', 'launch_vm: SETVCPUCONTEXT failed (rc=-1)\n') If I shut down another domain, I can
2013 Feb 21
4
help please - running a guest from an iSCSI disk ? getting more diagnostics than "cannot make domain: -3" ? how to make domain0 "privileged" ?
Good day - This is my first post to this list, and I'm new to Xen - any help on this issue would be much appreciated. I downloaded, built and installed xen-4.2.1 (hypervisor and tools) on an x86_64 ArchLinux box updated to the latest software as of today. I am trying to bring up a Linux guest from a remote iSCSI disk. The iSCSI initiator (open-iscsi) logs in to the remote target OK and
2008 Nov 27
1
Re: RE: Re: Re: when timer go back in dom0 save and restore ormigrate, PV domain hung
F.Y.I. >>> "Tian, Kevin" <kevin.tian@intel.com> 08.11.27. 11:50 >>> Sorry for a typo. I did mean domU instead of dom0. :-) The point here is that time_resume will sync to the new system time and wall clock at restore, and thus the PV guest should be able to continue... Xen system time is not wall-clock time; it just counts up from power-up. As Keir points out, only its
2006 Apr 14
8
[rfc] [patch] 32/64-bit hypercall interface revisited
Last year we had a discussion[1] about how the hypercall ABI unfortunately contains fields that change width between 32- and 64-bit builds. This is a huge problem as we come up on the python management stack for ppc64, since the distributions ship 32-bit python. A 32-bit python/libxc cannot currently manage a 64-bit hypervisor. I had a patch but was unable to test it, and some other things were
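
A hedged sketch of the problem and the usual remedy: fields such as unsigned long or raw pointers change width between 32- and 64-bit builds, so interface structures use explicit fixed-width types instead. The structs below are invented examples, not real Xen ABI definitions.

    #include <stdint.h>

    /* Problematic: 'unsigned long' and raw pointers are 4 bytes in a
     * 32-bit libxc build but 8 bytes in a 64-bit hypervisor, so the two
     * sides disagree about the layout. */
    struct example_op_bad {
        unsigned long max_pfns;
        void *buffer;
    };

    /* Width-stable: explicit 64-bit fields give one layout for both sides;
     * guest addresses travel as fixed-width values rather than pointers. */
    struct example_op_fixed {
        uint64_t max_pfns;
        uint64_t buffer_gaddr;
    };
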
2007 Jan 18
13
[PATCH 0/5] dump-core take 2:
The following dump-core patches change its format to ELF, add a PFN-GMFN table and HVM support, and add experimental IA64 support. - ELF format: program header and note sections are adopted. - HVM domain support: to know the memory area to dump, XENMEM_set_memory_map is added. The XENMEM_memory_map hypercall is for the current domain, so a new one is created, and the HVM domain builder tells Xen its
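
To illustrate what such a hypercall would carry, here is a simplified, hypothetical e820-style map for a foreign domain; it is not the real xen_foreign_memory_map layout, just the shape of the data the dumper needs later.

    #include <stdint.h>

    struct mem_range { uint64_t addr, size; uint32_t type; };

    /* Hypothetical shape of a foreign-domain memory map handed to Xen so
     * the dumper later knows which areas exist. */
    struct foreign_memory_map_sketch {
        uint16_t domid;              /* target (foreign) domain            */
        uint32_t nr_entries;         /* number of valid entries[] slots    */
        struct mem_range entries[8]; /* fixed-size purely for illustration */
    };
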
2007 May 04
0
[PATCH] 3/4 "nemesis" scheduling domains for Xen
Implements tool interfaces for scheduling domains in libxenctrl, xm, and xend. Signed-off-by: Mike D. Day <ncmike@us.ibm.com> -- libxc/xc_domain.c | 85 +++++++++++++++++++++++++++++++++--- libxc/xenctrl.h | 43 ++++++++++++++++-- python/xen/xend/XendDomain.py | 78 +++++++++++++++++++++++++++++++++ python/xen/xend/server/SrvDomain.py |
2009 Jan 09
5
[PATCH] Enable PCI passthrough with stub domain.
This patch enables PCI passthrough with a stub domain. PCI passthrough with a stub domain has failed in the past. The primary reason is that hypercalls from qemu in the stub domain are rejected. This patch allows qemu in a stub domain to call the hypercalls which are needed for PCI passthrough. For security, if the target domain of a hypercall is different from that of the stub domain, the hypercall is rejected. To use
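
A rough sketch of the security check described above: a hypercall issued by a stub domain is only accepted when its target matches the domain the stub domain serves. The struct and field names are illustrative, not Xen's real struct domain.

    #include <errno.h>
    #include <stdbool.h>

    struct dom { int domid; struct dom *target; bool is_stubdom; };

    /* Reject a hypercall from a stub domain unless it is aimed at the one
     * domain that stub domain serves. */
    static int check_stubdom_hypercall(const struct dom *caller,
                                       const struct dom *hcall_target)
    {
        if ( caller->is_stubdom && caller->target != hcall_target )
            return -EPERM;
        return 0;
    }
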
2012 Jan 25
26
[PATCH v4 00/23] Xenstore stub domain
Changes from v3: - mini-os configuration files moved into stubdom/ - mini-os extra console support now a config option - Fewer #ifdefs - grant table setup uses hypercall bounce - Xenstore stub domain syslog support re-enabled Changes from v2: - configuration support added to mini-os build system - add mini-os support for conditionally compiling frontends, xenbus -
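
For the "grant table setup uses hypercall bounce" item, the sketch below shows the bounce-buffer idea in isolation: copy the argument into memory that is safe to pass to the hypervisor, make the call, copy the result back. The helper functions are placeholders (plain malloc stands in), not the real libxc primitives.

    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>

    /* Placeholders: in a real toolstack the buffer would come from memory
     * that is safe to hand to the hypervisor. */
    static void *alloc_hypercall_buffer(size_t len) { return malloc(len); }
    static void  free_hypercall_buffer(void *p)     { free(p); }
    static int   issue_hypercall(void *arg, size_t len)
    { (void)arg; (void)len; return 0; }

    /* Copy in, call, copy out: the "bounce" pattern referred to above. */
    static int bounced_call(void *user_buf, size_t len)
    {
        void *bounce = alloc_hypercall_buffer(len);
        int rc;

        if ( bounce == NULL )
            return -ENOMEM;
        memcpy(bounce, user_buf, len);
        rc = issue_hypercall(bounce, len);
        memcpy(user_buf, bounce, len);
        free_hypercall_buffer(bounce);
        return rc;
    }
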
2013 Oct 16
4
[PATCH 1/7] xen: vNUMA support for PV guests
Defines a XENMEM subop hypercall for PV vNUMA-enabled guests, and the data structures that provide vNUMA topology information from the per-domain vNUMA topology build info. Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com> --- Changes since RFC v2: - fixed code style; - the memory copying in the hypercall happens in one go for arrays; - fixed error-code logic; --- xen/common/domain.c | 10
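
A rough sketch of the kind of per-domain vNUMA topology such an interface conveys (node distances, vcpu-to-node placement, per-node memory ranges); the struct below is illustrative, not the final ABI.

    #include <stdint.h>

    #define MAX_VNODES 8
    #define MAX_VCPUS  32

    /* Illustrative container for the topology a guest would receive. */
    struct vnuma_topology_sketch {
        uint32_t nr_vnodes;
        uint32_t nr_vcpus;
        uint32_t vdistance[MAX_VNODES][MAX_VNODES]; /* node-to-node distance */
        uint32_t vcpu_to_vnode[MAX_VCPUS];          /* vcpu placement        */
        uint64_t vmem_start[MAX_VNODES];            /* per-node memory range */
        uint64_t vmem_end[MAX_VNODES];
    };
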
2012 Nov 15
1
[RFC/PATCH v4] XENMEM_claim_pages (subop of existing) hypercall
This is a fourth cut of the hypervisor patch of the proposed XENMEM_claim_pages hypercall/subop, taking into account feedback from Jan and Keir and IanC, plus some fixes found via runtime debugging (using privcmd only) and some added comments/cleanup. [Logistical note: I will be out tomorrow (Friday) plus US holidays next week so my responsiveness to comments may be slower for awhile. --djm]
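
The core of the claim idea, as a hedged sketch: a claim succeeds only if enough unallocated memory remains once other outstanding claims are counted, and success records the claim without allocating any pages yet. Simplified, not the actual hypervisor code.

    #include <stdint.h>
    #include <errno.h>

    static int try_claim_pages(uint64_t free_pages,
                               uint64_t outstanding_claims,
                               uint64_t requested)
    {
        /* Refuse a claim that would over-commit the memory not yet spoken
         * for by earlier claims. */
        if ( outstanding_claims > free_pages ||
             requested > free_pages - outstanding_claims )
            return -ENOMEM;
        /* On success the claim is merely recorded; no pages are allocated. */
        return 0;
    }
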
2011 Jan 26
9
[PATCH]vtd: Fix for irq bind failure after PCI attaching 32 times
vtd: Fix for irq bind failure after PCI attaching 32 times Originally, when detaching a PCI device, pirq_to_emuirq and pirq_to_irq are freed via the hypercall do_physdev_op. Now, in the function pt_irq_destroy_bind_vtd, duplicated logic was added to free pirq_to_emuirq, but not pirq_to_irq. This causes do_physdev_op to fail to free both the emuirq and the irq. After attaching a PCI device 32 times, irq resources
2012 Jan 31
26
[PATCH 00/10] FLASK updates: MSI interrupts, cleanups
This patch set adds XSM security labels to useful debugging output locations, and fixes some assumptions that all interrupts behaved like GSI interrupts (which had useful non-dynamic IDs). It also cleans up the policy build process and adds an example of how to use the user field in the security context. Debug output: [PATCH 01/10] xsm: Add security labels to event-channel dump [PATCH 02/10] xsm:
2012 Aug 10
18
[PATCH v2 0/5] ARM hypercall ABI: 64 bit ready
Hi all, this patch series makes the necessary changes to make sure that the current ARM hypercall ABI can be used as-is on 64 bit ARM platforms: - it defines xen_ulong_t as uint64_t on ARM; - it introduces a new macro to handle guest pointers, called XEN_GUEST_HANDLE_PARAM (that has size 4 bytes on aarch and is going to have size 8 bytes on aarch64); - it replaces all the occurrences of
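
A simplified illustration of the two points above: a fixed 64-bit xen_ulong_t on ARM, and a separate handle type whose size follows the caller's pointer size (4 bytes on 32-bit ARM, 8 bytes on arm64). The definitions are stand-ins, not the real Xen headers.

    #include <stdint.h>

    typedef uint64_t xen_ulong_t;      /* same width on 32- and 64-bit ARM */

    /* Stand-in for XEN_GUEST_HANDLE_PARAM: pointer-sized in the caller,
     * so 4 bytes on 32-bit ARM and 8 bytes on arm64. */
    #define GUEST_HANDLE_PARAM(type) struct { type *p; }

    struct example_hypercall_arg {
        xen_ulong_t nr_entries;            /* fixed width in the ABI      */
        GUEST_HANDLE_PARAM(uint64_t) buf;  /* pointer-sized in the caller */
    };
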
2013 Apr 25
17
[PATCH V3] libxl: write IO ABI for disk frontends
This is a patch to forward-port a Xend behaviour. Xend writes the IO ABI used for all frontends. Blkfront before 2.6.26 relies on this behaviour; otherwise the guest cannot boot when running in 32-on-64 mode. Blkfront after 2.6.26 writes that node itself, in which case it's just an overwrite of an existing node, which should be OK. In fact Xend writes the ABI for all frontends including console
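
As a standalone illustration (not libxl's internal code), the snippet below writes such a protocol node with libxenstore's xs_write; the frontend path, the domain/device IDs, and the "x86_32-abi" value are example assumptions for the 32-on-64 case, and error handling is minimal.

    #include <string.h>
    #include <stdio.h>
    #include <xenstore.h>

    int main(void)
    {
        struct xs_handle *xsh = xs_open(0);
        const char *path = "/local/domain/1/device/vbd/51712/protocol";
        const char *abi  = "x86_32-abi";  /* ABI a 32-bit blkfront expects */

        if ( !xsh )
            return 1;
        if ( !xs_write(xsh, XBT_NULL, path, abi, strlen(abi)) )
            fprintf(stderr, "xs_write failed\n");
        xs_close(xsh);
        return 0;
    }
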
2005 Oct 10
13
[PATCH] 0/2 VCPU creation and allocation
I've put together two patches. The first introduces a new dom0_op, set_max_vcpus, which with an associated variable and a check in the VCPUOP handler fixes [1]bug 288. Also included is a new VCPUOP, VCPUOP_create, which handles all of the vcpu creation tasks and leaves initialization and unpausing to VCPUOP_initialize. The separation allows for build-time allocation of vcpus which
2006 Aug 01
18
[Patch] Enable "sysrq c" handler for domU coredump
Hi, In the case of Linux, crash_kexec() is triggered by "sysrq c". In the case of a DomainU on Xen, "sysrq c" currently only triggers Help, so there is no way to dump a DomainU's memory manually. I fix this issue in the following way: 1. A panic is triggered by "sysrq c" on both Domain0 and DomainU. 2. For a DomainU, a coredump is generated in /var/xen/dump (on Domain0).