
Displaying 20 results from an estimated 300 matches similar to: "[PATCH 0/5] dump-core take 2:"

2009 Jan 14
5
[PATCH] Support cross-bitness guest when core-dumping
This patch allows core-dumping to work in a cross-bitness host/guest configuration, which previously was not supported. It supports both PV and FV guests. The core file format generated by the host needs to match that of the guest, so an alignment issue is addressed, and the p2m frame list is handled according to the guest's bitness. Signed-off-by: Bruce Rogers
2013 Nov 04
17
Fwd: NetBSD xl core-dump not working... Memory fault (core dumped)
On 31.10.13 04:34, Miguel Clara wrote: > I was trying to get a core-dump for a domU with xl and got this error: > > # xl dump-core 20 test.core > Memory fault > > GDB shows this: > > a# gdb xl xl.core > GNU gdb (GDB) 7.3.1 > Copyright (C) 2011 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> >
2006 Sep 18
1
Re: dumpcore changes -- [Xen-changelog] [xen-unstable] In this patch, the xc_domain_dumpcore_via_callback() in xc_core.c of
This change has the effect of adding some complexity to the callback routines. The original callback passed an opaque argument which was a private item for the use of the controlling mechanism and its callback function. This change removes this and specifies only an fd. While it's possible for the controlling mechanism to use the fd as an index to find internal data structures, this is
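An illustrative contrast of the two callback shapes being discussed (these typedefs are invented for illustration, not the actual xc_core.c signatures): with an opaque pointer the controlling mechanism can thread private state straight through to its callback, whereas with only an fd it has to recover that state some other way, for example via a lookup keyed on the fd.

    /* Illustrative only -- not the actual xc_core.c callback signatures. */
    typedef int (*dump_cb_opaque_t)(void *priv, const char *buf, unsigned int len);
    typedef int (*dump_cb_fd_t)(int fd, const char *buf, unsigned int len);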
2012 Sep 14
1
[PATCH] xenpm: make argument parsing and error handling more consistent
Specifically, what values are or aren't accepted as a CPU identifier, and how those values get interpreted, should be consistent across sub-commands (intended behavior now: non-negative values are okay, and along with omitting the argument, specifying "all" will also be accepted). For error handling, error messages should get consistently issued to stderr, and the tool should now
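A minimal sketch of the argument convention described above; this is not xenpm's actual code, and parse_cpuid/CPUID_ALL are made-up names used only for illustration.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CPUID_ALL -1   /* illustrative sentinel meaning "all CPUs" */

    /* Omitted argument or "all" -> every CPU; otherwise only a
     * non-negative integer is accepted, and errors go to stderr. */
    static int parse_cpuid(const char *arg, int *cpuid)
    {
        char *end;
        long val;

        if (arg == NULL || strcmp(arg, "all") == 0) {
            *cpuid = CPUID_ALL;
            return 0;
        }

        errno = 0;
        val = strtol(arg, &end, 10);
        if (errno || end == arg || *end != '\0' || val < 0) {
            fprintf(stderr, "invalid cpuid '%s'\n", arg);
            return -EINVAL;
        }

        *cpuid = (int)val;
        return 0;
    }

    int main(int argc, char **argv)
    {
        int cpuid;

        if (parse_cpuid(argc > 1 ? argv[1] : NULL, &cpuid))
            return 1;
        printf("cpuid = %d\n", cpuid);
        return 0;
    }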
2006 Apr 14
8
[rfc] [patch] 32/64-bit hypercall interface revisited
Last year we had a discussion[1] about how the hypercall ABI unfortunately contains fields that change width between 32- and 64-bit builds. This is a huge problem as we come up on the python management stack for ppc64, since the distributions ship 32-bit python. A 32-bit python/libxc cannot currently manage a 64-bit hypervisor. I had a patch but was unable to test it, and some other things were
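A self-contained illustration of the width problem the thread describes (the struct and field names are invented, not taken from the hypercall interface): fields declared as unsigned long change size between 32- and 64-bit builds, so a 32-bit libxc and a 64-bit hypervisor would disagree about the layout, while explicitly sized types keep it identical on both sides.

    #include <stdio.h>
    #include <stdint.h>

    struct dom_info_unstable {      /* layout depends on the build's word size */
        unsigned long max_pages;
        unsigned long shared_info_frame;
    };

    struct dom_info_fixed {         /* identical layout for 32- and 64-bit callers */
        uint64_t max_pages;
        uint64_t shared_info_frame;
    };

    int main(void)
    {
        /* On a 32-bit build the first size is 8, on a 64-bit build it is 16;
         * the second is 16 either way. */
        printf("unstable: %zu bytes, fixed: %zu bytes\n",
               sizeof(struct dom_info_unstable), sizeof(struct dom_info_fixed));
        return 0;
    }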
2007 Aug 28
6
[PATCH] Make XEN_DOMCTL_destroydomain hypercall continuable.
# HG changeset patch
# User yamahata@valinux.co.jp
# Date 1188274001 -32400
# Node ID 2c9db26f1d0e0fdd4757d76a67f4b37ba0e40351
# Parent 58d131f1fb35977ff2d8682f553391c8a866d52c
Make XEN_DOMCTL_destroydomain hypercall continuable. The XEN_DOMCTL_destroydomain hypercall frees domain resources; in particular, it frees all pages of the domain. When domain memory is very large, this takes too long, resulting in
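A hypothetical, self-contained sketch of the "continuable" pattern the patch describes; every name here (fake_domain, preemption_pending, ERESTART_HCALL, ...) is an illustrative stand-in rather than Xen's actual internals. The operation frees pages in bounded batches and returns a "restart me" status when it should yield, so the hypercall is simply re-issued and resumes where it stopped.

    #include <stdbool.h>

    #define BATCH_PAGES   256
    #define ERESTART_HCALL (-256)     /* illustrative "re-issue this call" status */

    struct fake_domain { unsigned long pages_left; };

    static bool preemption_pending(void) { return true; }   /* pretend someone is waiting */

    static void relinquish_pages(struct fake_domain *d, unsigned long n)
    {
        d->pages_left -= (n > d->pages_left) ? d->pages_left : n;
    }

    int destroy_domain_step(struct fake_domain *d)
    {
        while (d->pages_left) {
            relinquish_pages(d, BATCH_PAGES);   /* bounded chunk of work */
            if (preemption_pending())
                return ERESTART_HCALL;          /* caller re-issues the hypercall */
        }
        return 0;                               /* all resources freed */
    }

    int main(void)
    {
        struct fake_domain d = { .pages_left = 1000 };

        while (destroy_domain_step(&d) == ERESTART_HCALL)
            ;   /* the guest/toolstack would simply re-issue the hypercall */
        return 0;
    }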
2016 Jul 13
6
[PATCH 0/5] Fix SELinux
We can use the setfiles(8) command to relabel the guest filesystem, even though we don't have a policy loaded nor SELinux enabled in the appliance kernel. This also deprecates or removes the old and broken SELinux support. This patch isn't quite complete - I would like to add some tests to the new API. I'm posting here to garner early feedback. Rich.
2010 Dec 02
1
Making a hypercall in DomU
Hi, I have *implemented a new hypercall* and it is working fine when tested from the Dom0 user-space. I want to invoke this hypercall from DomU user-space. I copied all of /usr/lib/libxen* and /usr/include/xen* (recursively) to the DomU. Here's the code I wrote to invoke the hypercall: #include <stdio.h> #include <xenctrl.h> int main(void){ int xc_handle, rc;
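Since the excerpt's code is cut off, here is a hedged, self-contained sketch of the same mechanism: opening libxenctrl and making a call into the hypervisor from user space. xc_version() is used only as a stand-in for the poster's new (unnamed) hypercall, and the older libxc API is assumed, where xc_interface_open() takes no arguments and returns an int handle; newer trees return an xc_interface pointer instead.

    /* Sketch only: xc_version() stands in for the poster's new hypercall. */
    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        int xc_handle = xc_interface_open();    /* old-style libxc API assumed */
        if (xc_handle < 0) {
            /* In a DomU this needs the privcmd device (/proc/xen/privcmd) to be
             * present and accessible, not just the copied libraries. */
            perror("xc_interface_open");
            return 1;
        }

        int ver = xc_version(xc_handle, XENVER_version, NULL);
        printf("Xen version: %d.%d\n", ver >> 16, ver & 0xffff);

        xc_interface_close(xc_handle);
        return 0;
    }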
2008 Mar 07
6
where is the location of definition of "do_xen_version"?
Hi, my friends: Currently, I am studying how hypercalls are implemented. I already know the flow of hypercall execution, and I decided to add a new hypercall to Xen. First, I want to understand the details of one hypercall function, for example "do_xen_version", but I cannot find the location of the definition of "do_xen_version". Who can help me? I have
2011 Dec 21
2
[PATCH] xenpm: assorted adjustments
- use consistent error values (stop mixing of [positive] errno values with literal -E... ones) - properly format output - don't use leading zeros in decimal output - move printing of average frequency into P-state conditional (rather than a C-state one) - don't print some C-state related info when CPU idle management is disabled in the hypervisor - use calloc() for array
2016 Jul 14
10
[PATCH v2 0/7] Fix SELinux
v1 -> v2: - Add simple test of the setfiles API. - Use SELinux_relabel module in virt-v2v (instead of touch /.autorelabel). - Small fixes. Rich.
2006 Sep 25
1
nlme with a factor in R 2.4.0beta
Hi, the following R lines work fine in R 2.4.0 alpha (and older R versions), but not in R 2.4.0 beta (details below): library(drc) # to load the dataset 'PestSci' library(nlme) ## Starting values sv <- c(0.328919, 1.956121, 0.097547, 1.642436, 0.208924) ## No error m1 <- nlme(SLOPE ~ c + (d-c)/(1+exp(b*(log(DOSE)-log(e)))), fixed =
2007 Feb 26
2
[PATCH 0 of 2] Parse image elfnotes, write them to xenstore, save and load via image sxpr
Here are two patches that let xm create/save/restore extract and preserve the elfnotes read by the domain builder. This is handy for a few things. In particular, I'd like it so that xm can decide whether or not guest domains support fast resume (if save fails, or for checkpointing).
2004 Oct 18
3
potential bug in "xm atropos" implementation
From tools/libxc/xc_atropos.c:
    int xc_atropos_domain_set(int xc_handle, u32 domid, u64 period, u64 slice, u64 latency, int xtratime)
which takes 6 arguments. From tools/python/xen/xm/main.py:
    class ProgAtropos(Prog):
        <snip>
        def main(self, args):
            if len(args) != 5:
                self.err("%s: Invalid argument(s)" % args[0])
2012 Jan 31
26
[PATCH 00/10] FLASK updates: MSI interrupts, cleanups
This patch set adds XSM security labels to useful debugging output locations, and fixes some assumptions that all interrupts behaved like GSI interrupts (which had useful non-dynamic IDs). It also cleans up the policy build process and adds an example of how to use the user field in the security context. Debug output: [PATCH 01/10] xsm: Add security labels to event-channel dump [PATCH 02/10] xsm:
2011 Nov 27
5
[PATCH] qemu-xen: Intel GPU passthrough, fix OpRegion mapping.
The OpRegion shouldn't be mapped 1:1 because the address in the host can't be used in the guest directly. This patch traps read and write access to the OpRegion of the Intel GPU config space (offset 0xfc). To work correctly this patch needs a change in hvmloader. hvmloader will allocate 2 pages for the OpRegion and write this address into the config space of the Intel GPU. Qemu
2011 Jul 21
51
Linux Stubdom Problem
2011/7/19 Stefano Stabellini <stefano.stabellini@eu.citrix.com>: > CC'ing Tim and xen-devel > > On Mon, 18 Jul 2011, Jiageng Yu wrote: >> 2011/7/16 Stefano Stabellini <stefano.stabellini@eu.citrix.com>: >> > On Fri, 15 Jul 2011, Jiageng Yu wrote: >> >> 2011/7/15 Jiageng Yu <yujiageng734@gmail.com>: >> >> > 2011/7/15
2008 May 24
2
Use of XEN_GUEST_HANDLE
Hi all, I have recently started to go through the Xen source. I want to know the usage of XEN_GUEST_HANDLE. I see that it's just a macro which prefixes each data type with __guest_handle_, and DEFINE_XEN_GUEST_HANDLE(name) just typedefs '__guest_handle_name' to be a pointer to a data type 'name'. What is the reason for such an abstraction? And how
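A sketch, from memory, of roughly what those macros expand to (the exact definitions vary by Xen release and between the plain-pointer and struct-wrapped variants): the point of the abstraction is that a guest pointer is wrapped in a distinct handle type, so hypercall argument structures cannot dereference it directly and the 32-/64-bit compat layers remain free to change its representation.

    /* Paraphrased from memory of Xen's public headers -- not the exact text. */
    #define ___DEFINE_XEN_GUEST_HANDLE(name, type) \
        typedef struct { type *p; } __guest_handle_ ## name
    #define DEFINE_XEN_GUEST_HANDLE(name)  ___DEFINE_XEN_GUEST_HANDLE(name, name)
    #define XEN_GUEST_HANDLE(name)         __guest_handle_ ## name

    typedef unsigned long xen_pfn;          /* illustrative payload type */
    DEFINE_XEN_GUEST_HANDLE(xen_pfn);

    /* A hypercall argument struct then carries the handle, not a raw pointer: */
    struct example_op {
        unsigned int nr_pfns;
        XEN_GUEST_HANDLE(xen_pfn) pfn_list;
    };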
2006 Sep 29
4
[PATCH 4/6] xen: export NUMA topology in physinfo hcall
This patch modifies the physinfo hcall to export NUMA CPU and Memory topology information. The new physinfo hcall is integrated into libxc and xend (xm info specifically). Included in this patch is a minor tweak to xm-test's xm info testcase. The new fields in xm info are: nr_nodes : 4 mem_chunks : node0:0x0000000000000000-0x0000000190000000
2012 Jul 23
2
[PATCH V2] qemu-xen-traditional, Fix dirty logging during migration.
This moves the xen_modified_memory call from cpu_physical_memory_map to cpu_physical_memory_unmap because the memory could be migrated before the device model has written to it. But because we need to know the guest address, and to avoid writing a new function, the call is moved to qemu_invalidate_entry. So the latter gains two new parameters: the length of the mapping and whether it was a write.