Displaying 9 results from an estimated 9 matches for "xc_evtchn_fd".
2013 Jan 03
20
[PATCH] Switch to poll in xenconsoled's io loop.
...truct timeval timeout;
+ int poll_timeout; /* timeout in milliseconds */
struct timespec ts;
long long now, next_timeout = 0;
- FD_ZERO(&readfds);
- FD_ZERO(&writefds);
-
- FD_SET(xs_fileno(xs), &readfds);
- max_fd = MAX(xs_fileno(xs), max_fd);
-
- if (log_hv) {
- FD_SET(xc_evtchn_fd(xce_handle), &readfds);
- max_fd = MAX(xc_evtchn_fd(xce_handle), max_fd);
- }
+#define MAX_POLL_FDS 8192
+ static struct pollfd fds[MAX_POLL_FDS];
+ static struct pollfd *fd_to_pollfd[MAX_POLL_FDS];
+ int nr_fds;
+#define SET_FDS(_fd, _events) do { \
+ if (_fd >= MAX_POLL_FDS) \...
2008 Jan 18
0
[PATCH] nicely terminate the device model script
...log.exception(exn)
try:
only in patch2:
unchanged:
--- a/tools/ioemu/target-i386-dm/helper2.c Thu Jan 17 16:22:30 2008 +0000
+++ b/tools/ioemu/target-i386-dm/helper2.c Fri Jan 18 12:42:10 2008 +0000
@@ -637,6 +637,7 @@ int main_loop(void)
int evtchn_fd = xce_handle == -1 ? -1 : xc_evtchn_fd(xce_handle);
char qemu_file[PATH_MAX];
fd_set fds;
+ int ret = 0;
buffered_io_timer = qemu_new_timer(rt_clock, handle_buffered_io,
cpu_single_env);
@@ -647,9 +648,14 @@ int main_loop(void)
xenstore_record_dm_state("running");
while (1) {
-...
2008 Sep 05
0
[PATCH] Janitorial work on xc_save.c
...int domid)
{
- int xcefd;
int rc;
rc = xc_evtchn_notify(si.xce, si.suspend_evtchn);
if (rc < 0) {
- errx(1, "failed to notify suspend request channel: %d", rc);
+ warnx("failed to notify suspend request channel: %d", rc);
return 0;
}
- xcefd = xc_evtchn_fd(si.xce);
do {
rc = xc_evtchn_pending(si.xce);
if (rc < 0) {
- errx(1, "error polling suspend notification channel: %d", rc);
+ warnx("error polling suspend notification channel: %d", rc);
return 0;
}
} while (rc != si.suspend_evtchn);
/...
2007 Oct 24
16
PATCH 0/10: Merge PV framebuffer & console into QEMU
The following series of 10 patches is a merge of the xenfb and xenconsoled
functionality into the qemu-dm code. The general approach taken is to have
qemu-dm provide two machine types - one for xen paravirt, the other for
fullyvirt. For compatibility the latter is the default. The goals overall
are to kill LibVNCServer, remove a lot of code duplication and/or parallel
implementations of the same concepts, and
2012 Dec 06
23
1000 Domains: Not able to access Domu via xm console from Dom0
Hi all,
I am running Xen 4.1.2 with ubuntu Dom0.
I have essentially got 1000 Modified Mini-OS DomUs running at the same
time. When I try to access the 1000th domain's console:
xm console DOM1000
xenconsole: could not read tty from store: No such file or directory
The domain is alive and running according to xentop, and has been for some
time.
I can successfully access the first 338
2012 Jan 25
26
[PATCH v4 00/23] Xenstore stub domain
Changes from v3:
- mini-os configuration files moved into stubdom/
- mini-os extra console support now a config option
- Fewer #ifdefs
- grant table setup uses hypercall bounce
- Xenstore stub domain syslog support re-enabled
Changes from v2:
- configuration support added to mini-os build system
- add mini-os support for conditionally compiling frontends, xenbus
-
2010 Aug 12
59
[PATCH 00/15] RFC xen device model support
Hi all,
this is the long awaited patch series to add xen device model support in
qemu; the main author is Anthony Perard.
Developing this series we tried to come up with the cleanest possible
solution from the qemu point of view, limiting the amount of changes to
common code as much as possible. The end result still requires a couple
of hooks in piix_pci but overall the impact should be very
2013 Jul 15
6
[PATCH 0 of 6 RESEND v2] blktap3/sring: shared ring between tapdisk and the front-end
This patch series introduces the shared ring used by the front-end to pass
request descriptors to tapdisk, as well as responses from tapdisk to the
front-end. Requests from this ring end up in tapdisk's standard request queue.
When the tapback daemon detects that the front-end tries to connect to the
back-end, it spawns a tapdisk and tells it to connect to the shared ring. The
shared