search for: launch_t

Displaying 20 results from an estimated 20 matches for "launch_t".

2016 Apr 14
0
[PATCH] Add safe wrapper around waitpid which deals with EINTR correctly.
...ta->recoverypid > 0) waitpid (data->recoverypid, NULL, 0);
+  if (data->pid > 0) guestfs_int_waitpid_noerror (data->pid);
+  if (data->recoverypid > 0) guestfs_int_waitpid_noerror (data->recoverypid);
   data->pid = 0;
   data->recoverypid = 0;
   memset (&g->launch_t, 0, sizeof g->launch_t);
@@ -1484,16 +1483,14 @@ shutdown_direct (guestfs_h *g, void *datav, int check_for_errors)
   /* Wait for subprocess(es) to exit. */
   if (g->recovery_proc /* RHBZ#998482 */ && data->pid > 0) {
-    if (waitpid (data->pid, &status, 0) == -1) {
-...
2013 Mar 07
4
[PATCH 0/4] Small refactorings of the protocol layer.
As the start of work to add remote support, I'm taking a close look at the protocol layer in the library. These are some small cleanups. Rich.
2016 Apr 14
2
[PATCH] Add safe wrapper around waitpid which deals with EINTR correctly.
As Eric Blake noted in: https://www.redhat.com/archives/libguestfs/2016-April/msg00154.html libguestfs doesn't correctly handle the case where waitpid is interrupted by a SIGCHLD signal while the main program has registered a non-restartable signal handler. In that case waitpid fails with EINTR and we would print an error, when actually we should retry the call. This adds two new internal functions,
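A minimal sketch of the retry-on-EINTR pattern this patch describes, assuming a wrapper shape like the one named in the diff (the exact libguestfs signatures may differ):

  #include <errno.h>
  #include <sys/wait.h>

  /* Sketch: retry waitpid when it is interrupted by a signal.
   * Pattern only; not the exact libguestfs implementation. */
  static pid_t
  safe_waitpid (pid_t pid, int *status)
  {
    pid_t r;

    do {
      r = waitpid (pid, status, 0);
    } while (r == -1 && errno == EINTR);

    return r;
  }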
2017 Apr 28
0
[PATCH] launch: Error if you try to launch with too many drives.
...*/
+  r = guestfs_max_disks (g);
+  if (r == -1)
+    return -1;
+  if (g->nr_drives > (size_t) r) {
+    error (g, _("too many drives have been added, the current backend only supports %d drives"), r);
+    return -1;
+  }
+
   /* Start the clock ... */
   gettimeofday (&g->launch_t, NULL);
   TRACE0 (launch_start);
--
2.9.3
2017 May 02
2
[PATCH v2] launch: Error if you try to launch with too many drives.
v1 was here: https://www.redhat.com/archives/libguestfs/2017-April/msg00268.html v1 broke some tests because the guestfs_max_disks API isn't supported by some backends, specifically ‘unix:’. This makes failure of guestfs_max_disks non-fatal. Rich.
2017 May 02
0
[PATCH v2] launch: Error if you try to launch with too many drives.
...disks (g);
+  guestfs_pop_error_handler (g);
+  if (r >= 0 && g->nr_drives > (size_t) r) {
+    error (g, _("too many drives have been added, the current backend only supports %d drives"), r);
+    return -1;
+  }
+
   /* Start the clock ... */
   gettimeofday (&g->launch_t, NULL);
   TRACE0 (launch_start);
--
2.9.3
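The push that pairs with the guestfs_pop_error_handler call is truncated out of the snippet above; the usual shape of the pattern, as a sketch using the documented guestfs_push_error_handler/guestfs_pop_error_handler API, is:

  /* Sketch: silence errors around a call that some backends don't
   * support, so its failure is non-fatal.  Variable names are
   * illustrative. */
  int r;

  guestfs_push_error_handler (g, NULL, NULL);  /* suppress errors */
  r = guestfs_max_disks (g);
  guestfs_pop_error_handler (g);               /* restore handler */

  if (r >= 0 && g->nr_drives > (size_t) r) {
    /* too many drives: report the error and fail the launch */
  }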
2013 Feb 18
4
[PATCH for discussion only 0/3] Implement mutexes to limit number of concurrent instances of libguestfs.
These three patches (for discussion only, NOT to be applied) implement a mutex system that lets the user limit the number of libguestfs instances that can be launched per host. There are two uses I have identified for this: firstly, so we can enable parallel-tests (the default in automake >= 1.13) without blowing up the host; secondly, oVirt has raised concerns about how to limit the
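The patches themselves are not shown in this result; purely as an illustration of the general idea (a hypothetical scheme, not the proposed implementation), a per-host instance limit can be approximated with n lock files taken via flock:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  /* Sketch: claim one of n host-wide slots by locking the first
   * free lock file.  Holding the fd open holds the slot. */
  static int
  acquire_slot (const char *dir, int n)
  {
    char path[256];
    int i, fd;

    for (i = 0; i < n; ++i) {
      snprintf (path, sizeof path, "%s/slot%d.lock", dir, i);
      fd = open (path, O_RDWR | O_CREAT, 0644);
      if (fd == -1)
        continue;
      if (flock (fd, LOCK_EX | LOCK_NB) == 0)
        return fd;                  /* slot acquired */
      close (fd);
    }
    return -1;                      /* all slots busy */
  }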
2013 Aug 09
4
[PATCH v2 0/4] Experimental User-Mode Linux backend.
v1 was here: https://www.redhat.com/archives/libguestfs/2013-August/msg00005.html This now works, to some extent. The main problem now is that devices are named /dev/ubd[a-z], which of course confuses everything. I'm thinking it may be easier to add a udev rule to rename them. Rich.
2013 Aug 09
5
[PATCH 0/4] Not quite working User-Mode Linux backend.
This is a User-Mode Linux backend for libguestfs. You can select it by doing:
  export LIBGUESTFS_BACKEND=uml
  export LIBGUESTFS_QEMU=/path/to/vmlinux
Note we're reusing the 'qemu' variable in the handle for convenience. QEmu is not involved when using the UML backend. This almost works. UML itself crashes when the daemon tries to connect to the serial port. I suspect it's
2010 Jul 05
5
[PATCH 0/3] RFC: Allow use of external QEMU process with libguestfs
This attempts to implement the idea proposed in https://www.redhat.com/archives/libguestfs/2010-April/msg00087.html The idea is that an externally managed QEMU (manual, or via libvirt) can boot the appliance kernel/initrd; libguestfs then only needs to be told the UNIX domain socket associated with the guest daemon. An example based on guestfish. 1. Step one, find the appliance kernel/initrd
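For flavour, being told the UNIX domain socket reduces to the standard connect sequence below (a sketch only; the function name and path handling are illustrative, not part of the proposed API):

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  /* Sketch: connect to the daemon socket of an externally managed
   * appliance.  Illustrative; not the patches' code. */
  static int
  connect_daemon_socket (const char *path)
  {
    struct sockaddr_un addr;
    int fd;

    fd = socket (AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1)
      return -1;

    memset (&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy (addr.sun_path, path, sizeof addr.sun_path - 1);

    if (connect (fd, (struct sockaddr *) &addr, sizeof addr) == -1) {
      close (fd);
      return -1;
    }
    return fd;
  }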
2016 Mar 22
0
[PATCH v3 09/11] launch: Remove guestfs_int_print_timestamped_message function.
...char *msg;
-  int err;
-  struct timeval tv;
-
-  va_start (args, fs);
-  err = vasprintf (&msg, fs, args);
-  va_end (args);
-
-  if (err < 0) return;
-
-  gettimeofday (&tv, NULL);
-
-  debug (g, "[%05" PRIi64 "ms] %s",
-         guestfs_int_timeval_diff (&g->launch_t, &tv), msg);
-
-  free (msg);
-}
-
 /* Compute Y - X and return the result in milliseconds.
  * Approximately the same as this code:
  * http://www.mpp.mpg.de/~huber/util/timevaldiff.c
--
2.7.4
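The guestfs_int_timeval_diff helper referenced above computes Y - X in milliseconds, per the comment retained in the diff; a sketch of that computation (the real body may differ in detail):

  #include <stdint.h>
  #include <sys/time.h>

  /* Sketch: Y - X in milliseconds, as the diff's comment describes. */
  static int64_t
  timeval_diff_ms (const struct timeval *x, const struct timeval *y)
  {
    int64_t msec;

    msec = (int64_t) (y->tv_sec - x->tv_sec) * 1000;
    msec += (y->tv_usec - x->tv_usec) / 1000;
    return msec;
  }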
2013 Mar 07
3
[PATCH 0/3] protocol: Abstract out socket operations.
I've been taking a long hard look at the protocol layer. It has evolved over a long time without any particular direction, and the result is, to say the least, not very organized. These patches take a first step at cleaning up the mess by abstracting out socket operations from the rest of the code. The purpose of this is to allow us to slot in a different connection layer under the
2011 Mar 10
1
[PATCH for discussion only] New event API (RHBZ#664558).
...rogress messages it sends (see C<daemon/proto.c:notify_progress>). Not all calls generate
diff --git a/src/proto.c b/src/proto.c
index 549734b..6a0fbbf 100644
--- a/src/proto.c
+++ b/src/proto.c
@@ -193,8 +193,7 @@ child_cleanup (guestfs_h *g)
   g->recoverypid = 0;
   memset (&g->launch_t, 0, sizeof g->launch_t);
   g->state = CONFIG;
-  if (g->subprocess_quit_cb)
-    g->subprocess_quit_cb (g, g->subprocess_quit_cb_data);
+  guestfs___call_callbacks_void (g, GUESTFS_EVENT_SUBPROCESS_QUIT);
 }

 static int
@@ -237,13 +236,8 @@ read_log_message_or_eof (guestfs_h *g, i...
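The replacement call in the diff fans a single event out to every registered callback. A minimal sketch of that dispatch shape (the struct layout and names are assumptions, not the library's internals):

  #include <stddef.h>
  #include <stdint.h>

  typedef void (*event_cb) (void *opaque, uint64_t event);

  /* Hypothetical registration record: which events a callback wants. */
  struct event_registration {
    uint64_t event_bitmask;
    event_cb cb;
    void *opaque;
  };

  /* Sketch: invoke every callback whose mask matches the event. */
  static void
  call_callbacks_void (const struct event_registration *regs, size_t n,
                       uint64_t event)
  {
    size_t i;

    for (i = 0; i < n; ++i)
      if (regs[i].event_bitmask & event)
        regs[i].cb (regs[i].opaque, event);
  }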
2016 May 18
2
[PATCH v2 0/2] lib: qemu: Memoize qemu feature detection.
v1 -> v2: - Rebase on top of Pino's version work. Two patches went upstream; these are the two remaining patches. Note the generation number is still inside the qemu.stat file. We could put it in the filename; I have no particular preference. Rich.
2015 Oct 16
2
[PATCH v6 0/2] RFE: journal reader in guestfish
Output is configurable; it's the same format as virt-log has, since both use the same code. The first patch moves get_journal_field around and renames it to journal_view, and the next one reimplements it a bit and brings it to guestfish.
Maros Zatko (2):
  cat: move get_journal_field to fish/journal.c
  fish: add journal-view command (RHBZ#988100)
 .gnulib | 2 +-
2015 Feb 14
2
[PATCH 0/2] Change guestfs__*
libguestfs has used double and triple underscores in identifiers. These aren't valid for global names in C++. (http://stackoverflow.com/a/228797) These large but completely mechanical patches change the illegal identifiers to legal ones. Rich.
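For context: C++ reserves any identifier containing two consecutive underscores for the implementation, which is why such names are illegal as globals there. Illustrative names, not the actual renames:

  int guestfs__old_name (void);      /* contains "__": reserved in C++ */
  int guestfs_int_new_name (void);   /* single underscores: fine */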
2016 May 12
7
[PATCH 0/4] lib: qemu: Memoize qemu feature detection.
Doing qemu feature detection in the direct backend takes ~100ms because we need to run `qemu -help' and `qemu -devices ?', and each of those interacts with glibc's very slow link loader. Fixing the link loader is really hard. Instead memoize the output of those two commands. This patch series first separates all the code dealing with qemu into a separate module (src/qemu.c) and
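A hedged sketch of the memoization idea: store the output of the slow commands in a cache and invalidate it when the qemu binary changes. The stat-based key below is illustrative; the series keeps a generation number in a qemu.stat file:

  #include <stdio.h>
  #include <sys/stat.h>

  /* Sketch: is the cached `qemu -help' output still valid?  Compare
   * the binary's size/mtime against values saved beside the cache.
   * Illustrative only; not the series' exact scheme. */
  static int
  cache_is_valid (const char *qemu_binary, const char *statfile)
  {
    struct stat st;
    long long saved_size, saved_mtime;
    FILE *fp;
    int ok;

    if (stat (qemu_binary, &st) == -1)
      return 0;

    fp = fopen (statfile, "r");
    if (fp == NULL)
      return 0;
    ok = fscanf (fp, "%lld %lld", &saved_size, &saved_mtime) == 2;
    fclose (fp);

    return ok
        && saved_size == (long long) st.st_size
        && saved_mtime == (long long) st.st_mtime;
  }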
2017 Apr 27
4
[PATCH 0/4] common: Add a simple mini-library for handling qemu command and config files.
Currently we have an OCaml library for generating the qemu command line (used only by ‘virt-v2v -o qemu’). However we also generate a qemu command line in ‘lib/launch-direct.c’, and we might in future need to generate a ‘-readconfig’-compatible configuration file if we want to go beyond 10,000 drives for scalability testing. Therefore this patch series reimplements the qemu command line code as
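A sketch of the append-style argv builder such a mini-library might expose (the names are illustrative; the actual API is in the patches):

  #include <stdlib.h>
  #include <string.h>

  /* Sketch: grow-on-demand, NULL-terminated argv builder for a qemu
   * command line.  Hypothetical names; caller zero-initializes the
   * struct. */
  struct qemu_cmdline {
    char **argv;
    size_t len, alloc;
  };

  static void
  qemu_cmdline_append (struct qemu_cmdline *cmd, const char *arg)
  {
    if (cmd->len + 2 > cmd->alloc) {   /* +2: new arg plus NULL */
      cmd->alloc = cmd->alloc ? cmd->alloc * 2 : 16;
      cmd->argv = realloc (cmd->argv, cmd->alloc * sizeof (char *));
      if (cmd->argv == NULL)
        abort ();
    }
    cmd->argv[cmd->len++] = strdup (arg);
    cmd->argv[cmd->len] = NULL;
  }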
2016 Mar 22
19
[PATCH v3 0/11] tests/qemu: Add program for tracing and analyzing boot times.
Lots of changes since v2, too many to remember or summarize. Please ignore patch 11/11; it's just for my testing. Rich.
2016 Mar 20
14
[PATCH v2 0/7] tests/qemu: Add program for tracing and analyzing boot times.
v1 was here: https://www.redhat.com/archives/libguestfs/2016-March/thread.html#00157 Not running the 'hwclock' command reduces boot times considerably; however, I'm not sure if it is safe. See the question I posted on qemu-devel: http://thread.gmane.org/gmane.comp.emulators.qemu/402194 At the moment, about 50% of the time is consumed by SeaBIOS. Of this, about ⅓ is SGABIOS