James Harper
2009-Oct-08 11:01 UTC
[Xen-devel] xenstore ring overflow when too many watches are fired
A bug has been discovered in GPLPV that causes duplicate watches to be added when Windows resumes from hibernate. I'm not completely sure at this point, but it appears that the firing of that many watches causes dom0 to overwrite data on the ring.

Are there any protections in xenstored (which does the writing I think) against xenstore ring overflow caused by a large number (>23 I think) of watches firing in unison? I can't see any...

Obviously I'll fix the GPLPV bug too, but it would be nice to know that too many watches wouldn't break xenstore.

Thanks

James
Keir Fraser
2009-Oct-08 11:08 UTC
Re: [Xen-devel] xenstore ring overflow when too many watches are fired
On 08/10/2009 12:01, "James Harper" <james.harper@bendigoit.com.au> wrote:

> A bug has been discovered in GPLPV that causes duplicate watches to be
> added when Windows resumes from hibernate. I'm not completely sure at
> this point, but it appears that the firing of that many watches causes
> dom0 to overwrite data on the ring.
>
> Are there any protections in xenstored (which does the writing I think)
> against xenstore ring overflow caused by a large number (>23 I think) of
> watches firing in unison? I can't see any...
>
> Obviously I'll fix the GPLPV bug too, but it would be nice to know that
> too many watches wouldn't break xenstore.

Messages (whether replies or watch notifications) get stored on a per-connection linked list and trickled onto the shared ring as space becomes available. It shouldn't be possible for the ring to overflow and eat its own tail.

 -- Keir
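For readers following along without the source handy, the buffering Keir describes is the familiar per-connection output queue: replies and watch events are appended to a list, and each time the frontend frees ring space only as many bytes as will fit are copied across. Below is a minimal sketch of that pattern; the names (queued_msg, out_connection, flush_output, ring_write) are illustrative stand-ins, not the real xenstored structures.

#include <stdlib.h>

/* Illustrative sketch only -- not the actual xenstored data structures. */
struct queued_msg {
        struct queued_msg *next;
        unsigned int len;        /* total bytes in this message */
        unsigned int written;    /* bytes already copied onto the ring */
        char data[];
};

struct out_connection {
        struct queued_msg *out_list;   /* per-connection pending output */
};

/* Called whenever the frontend consumes data and kicks the event channel.
 * ring_write() copies at most the requested number of bytes and returns
 * how many actually fit in the free space on the shared ring. */
static void flush_output(struct out_connection *conn,
                         unsigned int (*ring_write)(const void *, unsigned int))
{
        while (conn->out_list) {
                struct queued_msg *m = conn->out_list;
                unsigned int done = ring_write(m->data + m->written,
                                               m->len - m->written);
                m->written += done;
                if (m->written < m->len)
                        return;                 /* ring full: retry on next kick */
                conn->out_list = m->next;       /* message fully sent */
                free(m);
        }
}

Because the queue lives in dom0's memory rather than on the shared ring, a burst of watch events lengthens the queue instead of overrunning the ring.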
James Harper
2009-Oct-08 11:22 UTC
RE: [Xen-devel] xenstore ring overflow when too many watches are fired
> > Are there any protections in xenstored (which does the writing I think)
> > against xenstore ring overflow caused by a large number (>23 I think) of
> > watches firing in unison? I can't see any...
>
> Messages (whether replies or watch notifications) get stored on a
> per-connection linked list and trickled onto the shared ring as space
> becomes available. It shouldn't be possible for the ring to overflow and eat
> its own tail.

Is it this function that prevents this tail-eating?

bool domain_can_write(struct connection *conn)
{
        struct xenstore_domain_interface *intf = conn->domain->interface;
        return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
}

I hope I'm not just too tired to be thinking about this, but wouldn't that only return FALSE when the ring was full? It doesn't guarantee that there is enough space to write a message, and doesn't stop messages continuing to be written once the ring has overflowed. I can't see any other relevant reference to rsp_prod or rsp_cons in xenstored.

James
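A side note on the index arithmetic here: the xenstore ring indices are free-running 32-bit counters that are only reduced modulo the ring size when used to address the buffer, so rsp_prod - rsp_cons gives the number of unconsumed bytes even after the counters wrap. The snippet below is a standalone illustration of that arithmetic (ring_used is a made-up helper, not xenstored code); it also shows why the check above only says the ring is not completely full, not that a whole message will fit.

#include <stdint.h>
#include <stdio.h>

#define XENSTORE_RING_SIZE 1024u   /* value from xen/io/xs_wire.h */

/* Bytes currently queued on the response ring. Valid across index
 * wrap-around because unsigned subtraction is modulo 2^32. */
static uint32_t ring_used(uint32_t rsp_prod, uint32_t rsp_cons)
{
        return rsp_prod - rsp_cons;
}

int main(void)
{
        /* Producer index has wrapped past 2^32; consumer has not yet. */
        uint32_t cons = 0xfffffff0u, prod = 0x00000010u;
        uint32_t used = ring_used(prod, cons);

        printf("used = %u bytes\n", (unsigned)used);                      /* 32 */
        printf("free = %u bytes\n", (unsigned)(XENSTORE_RING_SIZE - used)); /* 992 */

        /* domain_can_write()-style test: true as long as at least one
         * byte is free, regardless of how large the next message is. */
        printf("can write? %s\n", used != XENSTORE_RING_SIZE ? "yes" : "no");
        return 0;
}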
Keir Fraser
2009-Oct-08 12:58 UTC
Re: [Xen-devel] xenstore ring overflow when too many watches are fired
On 08/10/2009 12:22, "James Harper" <james.harper@bendigoit.com.au> wrote:

> Is it this function that prevents this tail-eating?

No.

> I hope I'm not just too tired to be thinking about this, but wouldn't
> that only return FALSE when the ring was full? It doesn't guarantee that
> there is enough space to write a message, and doesn't stop messages
> continuing to be written once the ring has overflowed. I can't see any
> other relevant reference to rsp_prod or rsp_cons in xenstored.

Try grep. See xenstored_domain.c:writechn(). Takes a bunch of bytes to write; returns how many were successfully written on this attempt. Uses rsp_cons (via get_output_chunk()). If we didn't get this right, we'd be screwed for reading any xenstore node containing a large amount of data (we do sometimes have those).

 -- Keir
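The key point is that each write is bounded by the space the consumer has already freed, and any remainder stays on the per-connection queue. A rough sketch of that chunking logic, paraphrased from the behaviour Keir describes rather than copied from xenstored_domain.c (output_chunk and write_chunk stand in for the real get_output_chunk() and writechn()):

#include <stdint.h>
#include <string.h>

#define XENSTORE_RING_SIZE 1024u
#define MASK_XENSTORE_IDX(idx) ((idx) & (XENSTORE_RING_SIZE - 1))

/* Return how many contiguous bytes may be written at the producer index
 * without overtaking the consumer, and where that chunk starts. */
static uint32_t output_chunk(uint32_t cons, uint32_t prod,
                             char *buf, char **start)
{
        uint32_t to_end = XENSTORE_RING_SIZE - MASK_XENSTORE_IDX(prod);
        uint32_t total_free = XENSTORE_RING_SIZE - (prod - cons);

        *start = buf + MASK_XENSTORE_IDX(prod);
        return to_end < total_free ? to_end : total_free;
}

/* Copy at most the available space and report how much was written;
 * the caller keeps the unwritten tail queued for the next attempt. */
static unsigned int write_chunk(uint32_t cons, uint32_t *prod, char *ring,
                                const void *data, unsigned int len)
{
        char *dest;
        uint32_t avail = output_chunk(cons, *prod, ring, &dest);

        if (len > avail)
                len = avail;            /* never overwrite unconsumed data */
        memcpy(dest, data, len);
        *prod += len;                   /* free-running index, masked on use */
        return len;
}

Since the return value feeds back into the per-connection queue, large responses and bursts of watch events are simply written out in pieces as the guest consumes the ring.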