similar to: [PATCH] Support cross-bitness guest when core-dumping

Displaying 19 results from an estimated 19 matches similar to: "[PATCH] Support cross-bitness guest when core-dumping"

2007 Jan 18
13
[PATCH 0/5] dump-core take 2:
The following dump-core patches change its format to ELF, add a PFN-GMFN table, HVM support, and experimental IA64 support. - ELF format: program headers and a note section are adopted. - HVM domain support: to know which memory areas to dump, XENMEM_set_memory_map is added. The XENMEM_memory_map hypercall is for the current domain, so a new one is created, and the HVM domain builder tells Xen its
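For orientation, an ELF-based core file of this kind is essentially an ELF header followed by program headers and note sections describing the dumped memory. The sketch below is a minimal illustration using only the standard <elf.h> types; the note/segment layout is a placeholder, not the actual xc_core format.

    /* Minimal sketch of writing an ELF core header plus a PT_NOTE program
     * header, using standard <elf.h> types. The segment layout is
     * illustrative only, not the real xc_core definitions. */
    #include <elf.h>
    #include <string.h>
    #include <unistd.h>

    static int write_elf_skeleton(int fd)
    {
        Elf64_Ehdr ehdr;
        memset(&ehdr, 0, sizeof(ehdr));
        memcpy(ehdr.e_ident, ELFMAG, SELFMAG);
        ehdr.e_ident[EI_CLASS]   = ELFCLASS64;
        ehdr.e_ident[EI_DATA]    = ELFDATA2LSB;
        ehdr.e_ident[EI_VERSION] = EV_CURRENT;
        ehdr.e_type      = ET_CORE;            /* core file */
        ehdr.e_machine   = EM_X86_64;
        ehdr.e_version   = EV_CURRENT;
        ehdr.e_phoff     = sizeof(ehdr);       /* program headers follow */
        ehdr.e_ehsize    = sizeof(ehdr);
        ehdr.e_phentsize = sizeof(Elf64_Phdr);
        ehdr.e_phnum     = 1;                  /* e.g. one PT_NOTE segment */

        Elf64_Phdr note_phdr;
        memset(&note_phdr, 0, sizeof(note_phdr));
        note_phdr.p_type   = PT_NOTE;
        note_phdr.p_offset = sizeof(ehdr) + sizeof(note_phdr);
        /* p_filesz would be filled in once the notes (PFN-GMFN table, etc.)
         * are laid out. */

        if (write(fd, &ehdr, sizeof(ehdr)) != (ssize_t)sizeof(ehdr))
            return -1;
        if (write(fd, &note_phdr, sizeof(note_phdr)) != (ssize_t)sizeof(note_phdr))
            return -1;
        return 0;
    }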
2013 Nov 04
17
Fwd: NetBSD xl core-dump not working... Memory fault (core dumped)
On 31.10.13 04:34, Miguel Clara wrote: > I was trying to get a core-dump for a domU with xl and got this error: > > # xl dump-core 20 test.core > Memory fault > > GDB shows this: > > a# gdb xl xl.core > GNU gdb (GDB) 7.3.1 > Copyright (C) 2011 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> >
2013 Apr 25
17
[PATCH V3] libxl: write IO ABI for disk frontends
This is a patch to forward-port a Xend behaviour. Xend writes the IO ABI used for all frontends. Blkfront before 2.6.26 relies on this behaviour; otherwise the guest cannot boot when running in 32-on-64 mode. Blkfront after 2.6.26 writes that node itself, in which case it's just an overwrite of an existing node, which should be OK. In fact Xend writes the ABI for all frontends including console
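The node in question is a plain xenstore key. A minimal sketch of writing such a "protocol" node via libxenstore is shown below; the xenstore path and the ABI string argument are illustrative assumptions, while the real patch builds the path from the domain and device IDs inside libxl.

    /* Sketch: write a frontend "protocol" node with libxenstore. The path
     * here is illustrative; "x86_32-abi" is the 32-bit IO ABI string. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    static int write_disk_abi(int domid, int devid, const char *abi)
    {
        char path[128];
        struct xs_handle *xsh = xs_open(0);
        if (!xsh)
            return -1;

        snprintf(path, sizeof(path),
                 "/local/domain/%d/device/vbd/%d/protocol", domid, devid);
        bool ok = xs_write(xsh, XBT_NULL, path, abi, strlen(abi));
        xs_close(xsh);
        return ok ? 0 : -1;
    }

    /* e.g. write_disk_abi(5, 51712, "x86_32-abi"); for a 32-on-64 guest */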
2006 Sep 18
1
Re: dumpcore changes -- [Xen-changelog] [xen-unstable] In this patch, the xc_domain_dumpcore_via_callback() in xc_core.c of
This change has the effect of adding some complexity to the callback routines. The original callback passed an opaque argument, which was a private item for the use of the controlling mechanism and its callback function. This change removes that and specifies only an fd. While it's possible for the controlling mechanism to use the fd as an index to find internal data structures, this is
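The design point being debated is the classic opaque-context-pointer idiom versus passing a bare fd. A generic sketch of the two styles follows; the type and member names are made up for illustration and are not the xc_core ones.

    /* Style 1: callback receives an opaque pointer chosen by the caller,
     * so the controller can carry arbitrary private state. */
    typedef int (*dump_cb_t)(void *opaque, const char *buf, unsigned int len);

    struct my_ctx { int fd; unsigned long bytes_written; };

    /* Style 2: callback receives only an fd; any extra state must be
     * recovered some other way, e.g. a table indexed by fd. */
    typedef int (*dump_fd_cb_t)(int fd, const char *buf, unsigned int len);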
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
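GFP_KERNEL_ACCOUNT simply makes an allocation chargeable to the allocating task's memory cgroup. A kernel-style sketch of the kind of change involved is below; the structure and function names are hypothetical, while the real series targets the IOPTE (IOMMU page-table) allocations.

    /* Sketch: charge an allocation to the caller's memory cgroup by using
     * GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT). Names hypothetical. */
    #include <linux/slab.h>

    struct iopte_page { void *table; };

    static struct iopte_page *alloc_iopte_page(void)
    {
            return kzalloc(sizeof(struct iopte_page), GFP_KERNEL_ACCOUNT);
    }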
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver, I exposed a bug in the intel i915 driver which causes a huge number of artifacts on my laptop's screen. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver, I exposed a bug in the intel i915 driver which causes a huge number of artifacts on my laptop's screen. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2008 Mar 24
0
i386 VM on x86_64 host in Xen
> On Mon, 2007-12-10 at 22:34 +0000, Karanbir Singh wrote: >> Just wondering if people had started using i386 Xen DomUs on an x86_64 >> dom0 machine with 5.1 as yet? And just wondering what their experiences >> have been. I thought I would chime in here with my experience. I am running a CentOS 5.1 x86_64 Dom0 with a CentOS 5.1 i386 domU (both fully updated).
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2019 Dec 21
0
[PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
Remove all IOVA handling code from the non-dma_ops path in the intel iommu driver. There's no need for the non-dma_ops path to keep track of IOVAs. The whole point of the non-dma_ops path is that it allows the IOVAs to be handled separately. The IOVA handling code removed in this patch is pointless. Signed-off-by: Tom Murphy <murphyt7 at tcd.ie> --- drivers/iommu/intel-iommu.c | 89
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2007 Jan 11
0
[PATCH 6/8] HVM save restore: guest memory handling
[PATCH 6/8] HVM save restore: guest memory handling Signed-off-by: Zhai Edwin <edwin.zhai@intel.com> add support for save/restore HVM guest memory diff -r bb1c450b2739 tools/libxc/xc_hvm_restore.c --- a/tools/libxc/xc_hvm_restore.c Thu Jan 11 21:03:11 2007 +0800 +++ b/tools/libxc/xc_hvm_restore.c Thu Jan 11 21:05:45 2007 +0800 @@ -31,6 +31,40 @@ #include <xen/hvm/ioreq.h>
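The core of restoring HVM guest memory is reading saved page contents and copying them into pages mapped from the guest. The sketch below is a simplified illustration against the libxc interface of that era (xc_map_foreign_range); the real xc_hvm_restore.c batches pages and parses the saved-image format, which is omitted here, and mapping by raw pfn is an assumption for brevity.

    /* Simplified sketch: restore one guest page from the saved-image fd. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xenctrl.h>

    #define GUEST_PAGE_SIZE 4096

    static int restore_one_page(int xc_handle, uint32_t dom,
                                unsigned long pfn, int io_fd)
    {
        char buf[GUEST_PAGE_SIZE];
        if (read(io_fd, buf, GUEST_PAGE_SIZE) != GUEST_PAGE_SIZE)
            return -1;

        void *page = xc_map_foreign_range(xc_handle, dom, GUEST_PAGE_SIZE,
                                          PROT_READ | PROT_WRITE, pfn);
        if (!page)
            return -1;

        memcpy(page, buf, GUEST_PAGE_SIZE);
        munmap(page, GUEST_PAGE_SIZE);
        return 0;
    }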
2011 Apr 11
0
read extended-info signature failed
Hi, I am new to the list although not really new to Xen - I would like to shoot you a quick one regarding the following messages logged in xend.log: DEBUG (XendCheckpoint:200) restore:shadow=0x0, _static_max=0x201, _static_min=0x201, DEBUG (balloon:145) Balloon: 29722616 KiB free; need 525312; done. DEBUG (XendCheckpoint:217) [xc_restore]: /usr/lib64/xen/bin/xc_restore 22 2 1 2 0 0 0 INFO
2006 Oct 06
4
"xm save" works on Windows guest?
Hi, I've tried to do "xm save" on my Windows XP guest, which runs via ioemu. When I did this, # xm save 6 /xenimages/snapshot1 Error: /usr/lib/xen/bin/xc_save 10 19 6 0 0 0 failed xend.log: [2006-10-06 16:34:22 xend] DEBUG (XendCheckpoint:80) [xc_save]: /usr/lib/xen/bin/xc_save 10 19 6 0 0 0 [2006-10-06 16:34:23 xend] ERROR (XendCheckpoint:227) Couldn't map
2007 Aug 13
0
[BUG] migration problem
First of all, thank you for a great project I really enjoy! OK, now here comes the bug report... I'm trying to test Xen migration capabilities with xen-3.1, hand-compiled, fetched from the mercurial repository. My setup looks like this - x86_64 dom0, x86_32p dom0, x86_32p domU. When trying to migrate from the 64-bit to the 32-bit dom0 I'm
2013 Mar 15
22
[PATCH 00/09] arm: tools: build for arm64 and enable cross-compiling for both arm32 and arm64
The following patches shave some rough edges off the tools build system to allow cross-compiling for at least arm32 and arm64, based on the Debian/Ubuntu multiarch infrastructure. They also add the necessary fixes to build for arm64 (which I have only tried cross, not native). I have posted some instructions on how to compile with these patches on the wiki: