Olaf Hering
2011-Oct-05 14:10 UTC
[Xen-devel] [PATCH 0 of 3] linux-2.6.18: merge fixes from mainline
The following three backports from Konrad's stable/drivers-3.2 branch implement kexec for PVonHVM guests.

 drivers/xen/xenbus/xenbus_probe.c |  112 +++++++++++++++++++++++---------------
 drivers/xen/xenbus/xenbus_xs.c    |   19 +++++-
 2 files changed, 87 insertions(+), 44 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Olaf Hering
2011-Oct-05 14:10 UTC
[Xen-devel] [PATCH 1 of 3] linux-2.6.18: xen/pv-on-hvm kexec: add xs_reset_watches to shutdown watches from old kernel
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1316092705 -7200
# Node ID 86e85596d64b49a9a9c07e8817c33d046b078fd4
# Parent  5bda145fc6dcda60cd125699d7ece4bbade0660f
linux-2.6.18: xen/pv-on-hvm kexec: add xs_reset_watches to shutdown watches from old kernel

commit ddacf5ef684a655abe2bb50c4b2a5b72ae0d5e05

Add a new xs_reset_watches function to shut down watches from the old
kernel after a kexec boot. The old kernel does not unregister all
watches in its shutdown path. They remain active, and the double
registration cannot be detected by the new kernel. When the watches
fire, unexpected events arrive and the xenwatch thread crashes (jumps
to NULL).

An orderly reboot of an HVM guest destroys the entire guest with all
its resources (including the watches) before it is rebuilt from
scratch, so the missing unregister is not an issue in that case.

With this change xenstored is instructed to wipe all active watches
for the guest. However, a patch for xenstored is required so that it
accepts the XS_RESET_WATCHES request from a client (see changeset
23839:42a45baf037d in xen-unstable.hg). Without the patch for
xenstored the registration of watches will fail and some features of a
PVonHVM guest will not be available. The guest is still able to boot,
but repeated kexec boots will fail.
v5:
 - use xs_single instead of passing a dummy string to xs_talkv
v4:
 - ignore -EEXIST in xs_reset_watches
v3:
 - use XS_RESET_WATCHES instead of XS_INTRODUCE
v2:
 - move all code which deals with XS_INTRODUCE into xs_introduce()
   (based on feedback from Ian Campbell)
 - remove casts from kvec assignment

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 5bda145fc6dc -r 86e85596d64b drivers/xen/xenbus/xenbus_xs.c
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -627,6 +627,17 @@ static struct xenbus_watch *find_watch(c
 	return NULL;
 }
 
+#ifndef CONFIG_XEN
+static void xs_reset_watches(void)
+{
+	int err;
+
+	err = xs_error(xs_single(XBT_NIL, XS_RESET_WATCHES, "", NULL));
+	if (err && err != -EEXIST)
+		printk(KERN_WARNING "xs_reset_watches failed: %d\n", err);
+}
+#endif
+
 /* Register callback to watch this node. */
 int register_xenbus_watch(struct xenbus_watch *watch)
 {
@@ -928,5 +939,10 @@ int xs_init(void)
 	if (IS_ERR(task))
 		return PTR_ERR(task);
 
+	/* shutdown watches for kexec boot */
+#ifndef CONFIG_XEN
+	xs_reset_watches();
+#endif
+
 	return 0;
 }
Olaf Hering
2011-Oct-05 14:10 UTC
[Xen-devel] [PATCH 2 of 3] linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1314289811 -7200
# Node ID 94943cf143035aa7adbec2c39c45c16a5174e3c5
# Parent  86e85596d64b49a9a9c07e8817c33d046b078fd4
linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive

commit c4c303c7c5679b4b368e12f41124aee29c325b76

During repeated kexec boots xenwatch_thread() can crash because
xenbus_watch->callback is cleared by xenbus_watch_path() if a
node/token combination for a new watch happens to match an already
registered watch from an old kernel. In this case xs_watch returns
-EEXIST, and register_xenbus_watch() does not remove the
to-be-registered watch from the list of active watches but returns the
-EEXIST to the caller anyway.

Because the watch is still active in xenstored it will cause an event
which will arrive in the new kernel. process_msg() will find the
encapsulated struct xenbus_watch in its list of registered watches and
put the "empty" watch handle in the queue for xenwatch_thread().
xenwatch_thread() then calls ->callback, which was cleared earlier by
xenbus_watch_path().

To prevent that crash in a guest running on an old Xen toolstack,
remove the special -EEXIST handling.

v2:
 - remove the -EEXIST handling in register_xenbus_watch() instead of
   checking for ->callback in process_msg()

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 86e85596d64b -r 94943cf14303 drivers/xen/xenbus/xenbus_xs.c
--- a/drivers/xen/xenbus/xenbus_xs.c
+++ b/drivers/xen/xenbus/xenbus_xs.c
@@ -656,8 +656,7 @@ int register_xenbus_watch(struct xenbus_
 
 	err = xs_watch(watch->node, token);
 
-	/* Ignore errors due to multiple registration. */
-	if ((err != 0) && (err != -EEXIST)) {
+	if (err) {
 		spin_lock(&watches_lock);
 		list_del(&watch->list);
 		spin_unlock(&watches_lock);
Olaf Hering
2011-Oct-05 14:10 UTC
[Xen-devel] [PATCH 3 of 3] linux-2.6.18: xen/pv-on-hvm kexec+kdump: reset PV devices in kexec or crash kernel
# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1314290085 -7200
# Node ID 6cbbaae4d5bed6834b5606a3b9d1a5a919d2db89
# Parent  94943cf143035aa7adbec2c39c45c16a5174e3c5
linux-2.6.18: xen/pv-on-hvm kexec+kdump: reset PV devices in kexec or crash kernel

commit 116df6f004af81925dcaa90d4a3b76da6b009427

After triggering a crash dump in an HVM guest, the PV backend drivers
will remain in Connected state. When the kdump kernel starts, the PV
drivers will skip such devices. As a result, no root device is found
and the vmcore can't be saved.

A similar situation happens after a kexec boot; here the devices will
be in the Closed state.

With this change all frontend devices with state XenbusStateConnected
or XenbusStateClosed will be reset by changing the state file to
Closing -> Closed -> Initialising. This will trigger a disconnect in
the backend drivers. Now the frontend drivers will find the backend
drivers in state InitWait and can connect.

v2:
 - add timeout when waiting for backend state change
   (based on feedback from Ian Campbell)
 - extend printk message to include backend string
 - add comment to fall-through case in xenbus_reset_frontend

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 94943cf14303 -r 6cbbaae4d5be drivers/xen/xenbus/xenbus_probe.c
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -859,72 +859,100 @@ void unregister_xenstore_notifier(struct
 EXPORT_SYMBOL_GPL(unregister_xenstore_notifier);
 #endif
 
-#ifdef CONFIG_CRASH_DUMP
-static DECLARE_WAIT_QUEUE_HEAD(be_state_wq);
-static int be_state;
+#ifndef CONFIG_XEN
+static DECLARE_WAIT_QUEUE_HEAD(backend_state_wq);
+static int backend_state;
 
-static void xenbus_reset_state_changed(struct xenbus_watch *w, const char **v, unsigned int l)
+static void xenbus_reset_backend_state_changed(struct xenbus_watch *w,
+					const char **v, unsigned int l)
 {
-	if (xenbus_scanf(XBT_NIL, v[XS_WATCH_PATH], "", "%i", &be_state) != 1)
-		be_state = XenbusStateUnknown;
-	printk(KERN_INFO
-	       "XENBUS: %s %s\n", v[XS_WATCH_PATH], xenbus_strstate(be_state));
-	wake_up(&be_state_wq);
+	if (xenbus_scanf(XBT_NIL, v[XS_WATCH_PATH], "", "%i", &backend_state) != 1)
+		backend_state = XenbusStateUnknown;
+	printk(KERN_DEBUG "XENBUS: backend %s %s\n",
+			v[XS_WATCH_PATH], xenbus_strstate(backend_state));
+	wake_up(&backend_state_wq);
 }
 
-static int xenbus_reset_check_final(int *st)
+static void xenbus_reset_wait_for_backend(char *be, int expected)
 {
-	return *st == XenbusStateInitialising || *st == XenbusStateInitWait;
+	long timeout;
+	timeout = wait_event_interruptible_timeout(backend_state_wq,
+			backend_state == expected, 5 * HZ);
+	if (timeout <= 0)
+		printk(KERN_INFO "XENBUS: backend %s timed out.\n", be);
 }
 
-static void xenbus_reset_frontend_state(char *backend, char *frontend)
+/*
+ * Reset frontend if it is in Connected or Closed state.
+ * Wait for backend to catch up.
+ * State Connected happens during kdump, Closed after kexec.
+ */
+static void xenbus_reset_frontend(char *fe, char *be, int be_state)
 {
-	struct xenbus_watch watch;
+	struct xenbus_watch be_watch;
 
-	memset(&watch, 0, sizeof(watch));
-	watch.node = kasprintf(GFP_NOIO | __GFP_HIGH, "%s/state", backend);
-	if (!watch.node)
+	printk(KERN_DEBUG "XENBUS: backend %s %s\n",
+			be, xenbus_strstate(be_state));
+
+	memset(&be_watch, 0, sizeof(be_watch));
+	be_watch.node = kasprintf(GFP_NOIO | __GFP_HIGH, "%s/state", be);
+	if (!be_watch.node)
 		return;
 
-	watch.callback = xenbus_reset_state_changed;
-	be_state = XenbusStateUnknown;
+	be_watch.callback = xenbus_reset_backend_state_changed;
+	backend_state = XenbusStateUnknown;
 
-	printk(KERN_INFO "XENBUS: triggering reconnect on %s\n", backend);
-	register_xenbus_watch(&watch);
+	printk(KERN_INFO "XENBUS: triggering reconnect on %s\n", be);
+	register_xenbus_watch(&be_watch);
 
-	xenbus_printf(XBT_NIL, frontend, "state", "%d", XenbusStateClosing);
-	wait_event_interruptible(be_state_wq, be_state == XenbusStateClosing);
+	/* fall through to forward backend to state XenbusStateInitialising */
+	switch (be_state) {
+	case XenbusStateConnected:
+		xenbus_printf(XBT_NIL, fe, "state", "%d", XenbusStateClosing);
+		xenbus_reset_wait_for_backend(be, XenbusStateClosing);
 
-	xenbus_printf(XBT_NIL, frontend, "state", "%d", XenbusStateClosed);
-	wait_event_interruptible(be_state_wq, be_state == XenbusStateClosed);
+	case XenbusStateClosing:
+		xenbus_printf(XBT_NIL, fe, "state", "%d", XenbusStateClosed);
+		xenbus_reset_wait_for_backend(be, XenbusStateClosed);
 
-	xenbus_printf(XBT_NIL, frontend, "state", "%d", XenbusStateInitialising);
-	wait_event_interruptible(be_state_wq, xenbus_reset_check_final(&be_state));
+	case XenbusStateClosed:
+		xenbus_printf(XBT_NIL, fe, "state", "%d", XenbusStateInitialising);
+		xenbus_reset_wait_for_backend(be, XenbusStateInitWait);
+	}
 
-	unregister_xenbus_watch(&watch);
-	printk(KERN_INFO "XENBUS: reconnect done on %s\n", backend);
-	kfree(watch.node);
+	unregister_xenbus_watch(&be_watch);
+	printk(KERN_INFO "XENBUS: reconnect done on %s\n", be);
+	kfree(be_watch.node);
 }
 
-static void xenbus_reset_check_state(char *class, char *dev)
+static void xenbus_check_frontend(char *class, char *dev)
 {
-	int state, err;
+	int be_state, fe_state, err;
 	char *backend, *frontend;
 
 	frontend = kasprintf(GFP_NOIO | __GFP_HIGH, "device/%s/%s", class, dev);
 	if (!frontend)
 		return;
 
-	err = xenbus_scanf(XBT_NIL, frontend, "state", "%i", &state);
-	/* frontend connected? */
-	if (err == 1 && state == XenbusStateConnected) {
+	err = xenbus_scanf(XBT_NIL, frontend, "state", "%i", &fe_state);
+	if (err != 1)
+		goto out;
+
+	switch (fe_state) {
+	case XenbusStateConnected:
+	case XenbusStateClosed:
+		printk(KERN_DEBUG "XENBUS: frontend %s %s\n",
+				frontend, xenbus_strstate(fe_state));
 		backend = xenbus_read(XBT_NIL, frontend, "backend", NULL);
 		if (!backend || IS_ERR(backend))
 			goto out;
-		err = xenbus_scanf(XBT_NIL, backend, "state", "%i", &state);
-		/* backend connected? */
-		if (err == 1 && state == XenbusStateConnected)
-			xenbus_reset_frontend_state(backend, frontend);
+		err = xenbus_scanf(XBT_NIL, backend, "state", "%i", &be_state);
+		if (err == 1)
+			xenbus_reset_frontend(frontend, backend, be_state);
 		kfree(backend);
+		break;
+	default:
+		break;
 	}
 out:
 	kfree(frontend);
@@ -945,7 +973,7 @@ static void xenbus_reset_state(void)
 		if (IS_ERR(dev))
 			continue;
 		for (j = 0; j < dev_n; j++)
-			xenbus_reset_check_state(devclass[i], dev[j]);
+			xenbus_check_frontend(devclass[i], dev[j]);
 		kfree(dev);
 	}
 	kfree(devclass);
@@ -956,11 +984,11 @@ void xenbus_probe(void *unused)
 {
 	BUG_ON(!is_xenstored_ready());
 
-#ifdef CONFIG_CRASH_DUMP
-	/* reset devices in XenbusStateConnected state */
-	if (!is_initial_xendomain() && reset_devices)
-		xenbus_reset_state();
+#ifndef CONFIG_XEN
+	/* reset devices in Connected or Closed state */
+	xenbus_reset_state();
 #endif
+	/* Enumerate devices in xenstore and watch for changes. */
 	xenbus_probe_devices(&xenbus_frontend);
 	register_xenbus_watch(&fe_watch);
Jan Beulich
2011-Oct-06 08:54 UTC
Re: [Xen-devel] [PATCH 2 of 3] linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive
>>> On 05.10.11 at 16:10, Olaf Hering <olaf@aepfle.de> wrote:
> linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when
> stale watch events arrive
>
> commit c4c303c7c5679b4b368e12f41124aee29c325b76
> [...]
> --- a/drivers/xen/xenbus/xenbus_xs.c
> +++ b/drivers/xen/xenbus/xenbus_xs.c
> @@ -656,8 +656,7 @@ int register_xenbus_watch(struct xenbus_
>
>  	err = xs_watch(watch->node, token);
>
> -	/* Ignore errors due to multiple registration. */
> -	if ((err != 0) && (err != -EEXIST)) {
> +	if (err) {

While I committed the other two patches in this series, this one seems
to have the potential for regressions (the comment and the checking for
-EEXIST can be assumed to have been there for a reason - whether they
became stale by now is not obvious), so I'd like to double check that
you verified that there's no code path where register_xenbus_watch()
could be called twice for the same watch.

One group of cases of concern are the watches registered from xenstore
notifiers - these appear to be safe, but the fact that they get called
just once is only implicitly derivable by walking through the code. And
that may break the moment xenstore becomes a restartable entity.

The other possibly problematic case is that of watches user mode can
register through writing the xenbus device: here the patch definitely
changes behavior observable by user mode (a re-registration does not
cancel an existing watch without this change).

Jan

> 		spin_lock(&watches_lock);
> 		list_del(&watch->list);
> 		spin_unlock(&watches_lock);
Ian Campbell
2011-Oct-06 09:01 UTC
Re: [Xen-devel] [PATCH 2 of 3] linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive
On Thu, 2011-10-06 at 09:54 +0100, Jan Beulich wrote:
> >>> On 05.10.11 at 16:10, Olaf Hering <olaf@aepfle.de> wrote:
> > [...]
>
> While I committed the other two patches in this series, this one seems
> to have the potential for regressions (the comment and the checking for
> -EEXIST can be assumed to have been there for a reason - whether
> they became stale by now is not obvious),

Keir said earlier it wasn't correct:
http://marc.info/?l=xen-devel&m=131358786516831&w=2

Ian.

> so I'd like to double check
> that you verified that there's no code path where
> register_xenbus_watch() could be called twice for the same watch.
> [...]
Jan Beulich
2011-Oct-06 09:17 UTC
Re: [Xen-devel] [PATCH 2 of 3] linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive
>>> On 06.10.11 at 11:01, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2011-10-06 at 09:54 +0100, Jan Beulich wrote:
>> [...]
>> While I committed the other two patches in this series, this one seems
>> to have the potential for regressions (the comment and the checking for
>> -EEXIST can be assumed to have been there for a reason - whether
>> they became stale by now is not obvious),
>
> Keir said earlier it wasn't correct:
> http://marc.info/?l=xen-devel&m=131358786516831&w=2

Quoting him: "Either remove the EEXIST check, or convert EEXIST to
return code 0 in register_xenbus_watch(). You could do either, since
I'm sure I added the EEXIST check only as an attempt to theoretically
robustify that function, and looks like I got it wrong."

Removing the check (as Olaf did) doesn't deal with the xenbus device
case mentioned above. Converting it to zero would make sense (as the
watch asked for is actually registered after the function returns), but
would get Olaf's problem addressed afaict.

But wait - the xenbus device case is different because the watch
structure gets allocated, so if duplicate detection is really based on
the address of the watch structure (i.e. the token produced from the
address), this wouldn't be an issue then.

Further, I only now notice that there's a list_add() of the watch
structure prior to the call to xs_watch() - if a watch was registered
twice, this would lead to a corrupted list. So with that it looks like
it's indeed pointless to handle the -EEXIST case separately.

Thanks, Jan
Olaf Hering
2011-Oct-06 09:23 UTC
Re: [Xen-devel] [PATCH 2 of 3] linux-2.6.18: xen/pv-on-hvm kexec: prevent crash in xenwatch_thread() when stale watch events arrive
On Thu, Oct 06, Jan Beulich wrote:
> While I committed the other two patches in this series, this one seems
> to have the potential for regressions (the comment and the checking for
> -EEXIST can be assumed to have been there for a reason - whether
> they became stale by now is not obvious), so I'd like to double check
> that you verified that there's no code path where
> register_xenbus_watch() could be called twice for the same watch.

Would there be any benefit from allowing such a second watch where path
and token match? In the end only one watcher will receive the event.
With this change such a code bug will be found.

> The other possibly problematic case is that of watches user mode
> can register through writing the xenbus device: Here the patch
> definitely changes behavior observable by user mode (a
> re-registration does not cancel an existing watch without this
> change).

I did not know watches can be registered by domU userspace, and I
therefore did not consider this case. Looking through xenbus_dev_write
I think the patch does not change behaviour, because the requested
watch path may be the same, but the generated token passed to xenstored
will change each time a watch is requested, since the watch struct is
allocated with kmalloc.

Olaf