Justin T. Gibbs
2010-Jan-18 22:24 UTC
[Xen-devel] XenStore management with driver domains.
I've been experimenting with serving block storage between DomUs. I can
dynamically attach storage and transfer data to my heart's content, but
dynamic detach is proving troublesome. Both the frontend and backend
drivers detach cleanly, but the XenStore data for the attachment
persists, preventing the same storage object from being attached again.

After tracing through Xend and the hotplug scripts, it seems that the
current framework assumes backend teardown will occur in Dom0. For
example, xen-hotplug-cleanup, which is invoked when the backend device
instance is removed, removes the following paths from the xenstore:

    /local/domain/<front domid>/device/<type>/<devid>
    /local/domain/<back domid>/backend/<type>/<front domid>/<devid>
    /local/domain/<back domid>/error/backend/<type>/<front domid>/<devid>
    /vm/<front uuid>/device/<type>/<devid>

Only Dom0 and the frontend have permission to remove the frontend's
device tree. Only Dom0 and the backend have permission to remove the
backend's device and error trees. Only Dom0 has permission to remove
the vm device tree. So this script must be run from Dom0 to be fully
successful.

Confronted with this situation, I modified the frontend and backend
drivers to clean up their respective /local/domain entries. I then
modified Xend to give the backend domain permission to remove the vm
device tree. However, the backend would need the frontend's vm path in
order to find the vm device tree, and /local/domain/<dom id>/vm is not
visible to all guests. The more I went down this path, the less I liked
it.

My current thinking is to make the XenStore management symmetrical.
Xend creates all of these paths, so it should be responsible for
removing them once both sides of a split driver transition to the
Closed state. There is a race condition in the case of quickly
destroying and recreating the same device attachment, but this type of
race already exists for frontends and backends in guest domains. Only
backends within Dom0 are protected by having their xenstore entries
removed after udev has ensured the driver instance has terminated. I
don't think protecting against this case will be difficult.

Are there other options for fixing this problem I should consider?

Thanks,
Justin
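For concreteness, a minimal sketch of the cleanup those four removals
amount to, assuming illustrative IDs (frontend domid 5, backend domid 1,
type vbd, devid 51712) and the standard xenstore-read/xenstore-rm
clients; the real script derives these paths from the XENBUS_PATH that
udev hands it:

    #!/bin/sh
    # Sketch of xen-hotplug-cleanup's effect; all IDs are illustrative.
    FRONT_DOMID=5
    BACK_DOMID=1
    TYPE=vbd
    DEVID=51712
    # /local/domain/<domid>/vm holds that domain's /vm/<uuid> path.
    FRONT_VM=$(xenstore-read "/local/domain/${FRONT_DOMID}/vm")

    xenstore-rm "/local/domain/${FRONT_DOMID}/device/${TYPE}/${DEVID}"
    xenstore-rm "/local/domain/${BACK_DOMID}/backend/${TYPE}/${FRONT_DOMID}/${DEVID}"
    xenstore-rm "/local/domain/${BACK_DOMID}/error/backend/${TYPE}/${FRONT_DOMID}/${DEVID}"
    xenstore-rm "${FRONT_VM}/device/${TYPE}/${DEVID}"

As the permissions above imply, every one of these removals succeeds
only when run from Dom0.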
Daniel Stodden
2010-Jan-18 23:34 UTC
Re: [Xen-devel] XenStore management with driver domains.
On Mon, 2010-01-18 at 17:24 -0500, Justin T. Gibbs wrote:

> I've been experimenting with serving block storage between DomUs. I
> can dynamically attach storage and transfer data to my heart's
> content, but dynamic detach is proving troublesome. Both the frontend
> and backend drivers detach cleanly, but the XenStore data for the
> attachment persists, preventing the same storage object from being
> attached again.
>
> After tracing through Xend and the hotplug scripts, it seems that the
> current framework assumes backend teardown will occur in Dom0. For
> example, xen-hotplug-cleanup, which is invoked when the backend
> device instance is removed, removes the following paths from the
> xenstore:
>
>     /local/domain/<front domid>/device/<type>/<devid>
>     /local/domain/<back domid>/backend/<type>/<front domid>/<devid>
>     /local/domain/<back domid>/error/backend/<type>/<front domid>/<devid>
>     /vm/<front uuid>/device/<type>/<devid>
>
> Only Dom0 and the frontend have permission to remove the frontend's
> device tree. Only Dom0 and the backend have permission to remove the
> backend's device and error trees. Only Dom0 has permission to remove
> the vm device tree. So this script must be run from Dom0 to be fully
> successful.
>
> Confronted with this situation, I modified the frontend and backend
> drivers to clean up their respective /local/domain entries. I then
> modified Xend to give the backend domain permission to remove the vm
> device tree. However, the backend would need the frontend's vm path
> in order to find the vm device tree, and /local/domain/<dom id>/vm is
> not visible to all guests. The more I went down this path, the less I
> liked it.

It's indeed not a very good idea to do so. E.g. there are error
conditions etc. meant to be gathered before the device is actually
removed, especially for backends. Usually the philosophy is to let the
drivers control most connection state, but creation and removal are up
to userspace. I would expect this to remain in Dom0 even when I/O moves
into driver domains.

Overall architecture question: moving the data plane into backend
domains is great. But why move control over device creation/removal
into those domains as well? My understanding is that this is what you
are doing.

> My current thinking is to make the XenStore management symmetrical.
> Xend creates all of these paths, so it should be responsible for
> removing them once both sides of a split driver transition to the
> Closed state.

Not so good. E.g. in XCP, willingness to share a connection depends on
both the frontend and the backend. Frontends may connect and reconnect
as they see fit. A frontend disconnecting in no way means the backend
is disposable. Clean backend removal depends on connection state, but
not exclusively.

> There is a race condition in the case of quickly destroying and
> recreating the same device attachment, but this type of race already
> exists for frontends and backends in guest domains. Only backends
> within Dom0 are protected by having their xenstore entries removed
> after udev has ensured the driver instance has terminated.

To check my understanding: so udev does the node removal by testing
device/<type>/<devid>/state == Closed? But there's presently no
serialization protecting against device recreation before that
happened?

Well, this just won't work reliably, for a whole bunch of reasons. One
is the recreation race you point out. The more general one is that the
Closed state just reflects foreign politics to the backend, not backend
state. There may be queues to be flushed, block devices to be closed,
memory to be freed, statistics to be gathered, userspace code to be
triggered, etc. All that makes the worst case of a premature recreation
even worse.

Whoever creates the device (in XS) would better be responsible for
removing it. Regarding the recreation race, this also gives
create/remove serialization a place to live -- typically in code living
in Dom0.

Cheers,
Daniel

> I don't think protecting against this case will be difficult.
>
> Are there other options for fixing this problem I should consider?
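A minimal sketch of what that Dom0-side serialization could look like,
assuming an illustrative backend path (backend domid 1, frontend domid
5, type vbd, devid 51712) and the standard xenstore clients; 6 is the
numeric value of XenbusStateClosed in the xenbus protocol:

    #!/bin/sh
    # Sketch: wait for the backend to report Closed before removing
    # its nodes, so recreation cannot race with a live backend.
    BACKEND=/local/domain/1/backend/vbd/5/51712
    while [ "$(xenstore-read "$BACKEND/state" 2>/dev/null)" != "6" ]; do
        sleep 1   # a real implementation would use a xenstore watch
    done
    xenstore-rm "$BACKEND"

Serializing create against remove then reduces to running this (and the
matching creation) under a single lock in Dom0.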
Daniel Stodden
2010-Jan-18 23:39 UTC
Re: [Xen-devel] XenStore management with driver domains.
On Mon, 2010-01-18 at 17:24 -0500, Justin T. Gibbs wrote:

> I've been experimenting with serving block storage between DomUs. I
> can dynamically attach storage and transfer data to my heart's
> content, but dynamic detach is proving troublesome. Both the frontend
> and backend drivers detach cleanly, but the XenStore data for the
> attachment persists, preventing the same storage object from being
> attached again.
>
> After tracing through Xend and the hotplug scripts, it seems that the
> current framework assumes backend teardown will occur in Dom0. For
> example, xen-hotplug-cleanup, which is invoked when the backend
> device instance is removed, removes the following paths from the
> xenstore:
>
>     /local/domain/<front domid>/device/<type>/<devid>
>     /local/domain/<back domid>/backend/<type>/<front domid>/<devid>
>     /local/domain/<back domid>/error/backend/<type>/<front domid>/<devid>
>     /vm/<front uuid>/device/<type>/<devid>

While you're cleaning up: do you consider relative paths? I think a
fully qualified name such as /local/domain/%d/device/vbd/%d/%d is
always wrong there. Try using "device/vbd/%d/%d" instead. I also think
I've seen this make an unwelcome difference in permission checks for
updates issued by domUs in the past, but this information may be
outdated.

Daniel
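To illustrate the distinction (with an illustrative vbd devid of 51712
in domain 5): xenstored resolves a relative path against the calling
domain's own home directory, /local/domain/<domid>, so from inside
domain 5 both of these name the same node, while the relative form
avoids hard-coding the domid:

    # Run from inside domU 5; 51712 is an illustrative devid.
    xenstore-read /local/domain/5/device/vbd/51712/state   # fully qualified
    xenstore-read device/vbd/51712/state                   # relative form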
Justin T. Gibbs
2010-Jan-19 00:32 UTC
Re: [Xen-devel] XenStore management with driver domains.
On 1/18/2010 4:34 PM, Daniel Stodden wrote:

> On Mon, 2010-01-18 at 17:24 -0500, Justin T. Gibbs wrote:
>> I've been experimenting with serving block storage between DomUs. I
>> can dynamically attach storage and transfer data to my heart's
>> content, but dynamic detach is proving troublesome. Both the
>> frontend and backend drivers detach cleanly, but the XenStore data
>> for the attachment persists, preventing the same storage object from
>> being attached again.

...

>> Confronted with this situation, I modified the frontend and backend
>> drivers to clean up their respective /local/domain entries. I then
>> modified Xend to give the backend domain permission to remove the vm
>> device tree. However, the backend would need the frontend's vm path
>> in order to find the vm device tree, and /local/domain/<dom id>/vm
>> is not visible to all guests. The more I went down this path, the
>> less I liked it.
>
> It's indeed not a very good idea to do so. E.g. there are error
> conditions etc. meant to be gathered before the device is actually
> removed, especially for backends. Usually the philosophy is to let
> the drivers control most connection state, but creation and removal
> are up to userspace. I would expect this to remain in Dom0 even when
> I/O moves into driver domains.

Yes. My preference is to just update state in the domain-local trees
for the front and back ends and have the management domain (Dom0) clean
up the rest.

> Overall architecture question: moving the data plane into backend
> domains is great. But why move control over device creation/removal
> into those domains as well? My understanding is that this is what you
> are doing.

I'm not proposing to change the current management model. My scenario
is just "xm block-attach" followed by "xm block-detach" with the
backend in a guest domain. However, the current model does not leave
device connection management solely in the hands of Dom0. If either the
front or back end encounters an error, it can start a chain of events
that leads to disconnection and ultimately deletion of the front and
back end device instances.

>> My current thinking is to make the XenStore management symmetrical.
>> Xend creates all of these paths, so it should be responsible for
>> removing them once both sides of a split driver transition to the
>> Closed state.
>
> Not so good. E.g. in XCP, willingness to share a connection depends
> on both the frontend and the backend. Frontends may connect and
> reconnect as they see fit. A frontend disconnecting in no way means
> the backend is disposable.

In this situation, isn't the backend prevented from transitioning to
the Closed state because "online" is still 1? Granted, my understanding
of this is based on reading the code, the wiki, and a few mailing list
hits from Google. The exact semantics of the XenBus xenstore entries
don't seem to be rigorously documented anywhere.

> Clean backend removal depends on connection state, but not
> exclusively.

So what criteria should Dom0 use to determine that the backend device
has been cleanly removed? It has to be something in the xenstore.
Having the "Closed" state mean this seems as good a choice as anything
else.

>> There is a race condition in the case of quickly destroying and
>> recreating the same device attachment, but this type of race already
>> exists for frontends and backends in guest domains. Only backends
>> within Dom0 are protected by having their xenstore entries removed
>> after udev has ensured the driver instance has terminated.
>
> To check my understanding: so udev does the node removal by testing
> device/<type>/<devid>/state == Closed? But there's presently no
> serialization protecting against device recreation before that
> happened?

My understanding is: the backend device is destroyed. This generates a
udev removal event. Udev invokes the xen-hotplug-cleanup script, and
the xenstore entries are removed. Xend will not allow the same device
connection to be recreated until the xenstore entries are removed.

> Well, this just won't work reliably, for a whole bunch of reasons.
> One is the recreation race you point out. The more general one is
> that the Closed state just reflects foreign politics to the backend,
> not backend state. There may be queues to be flushed, block devices
> to be closed, memory to be freed, statistics to be gathered,
> userspace code to be triggered, etc. All that makes the worst case of
> a premature recreation even worse.

Transitioning to the Closed state before all of the above is completed
would, I believe, be an error.

> Whoever creates the device (in XS) would better be responsible for
> removing it. Regarding the recreation race, this also gives
> create/remove serialization a place to live -- typically in code
> living in Dom0.

This is exactly my use case. However, Dom0 does not have full control
over either creation or removal (especially error-induced hot-unplug).
The key here is to clean up the semantics and change Dom0's handling of
the xenstore so that it doesn't require an unplug of a backend device
(reported to udev) in Dom0 to work.

--
Justin
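For reference, the Dom0-side hook that drives this sequence is a udev
rule of roughly the following shape (a sketch; the exact rule file and
match keys vary between Xen versions):

    # e.g. /etc/udev/rules.d/xen-backend.rules (illustrative)
    # On removal of a backend device, run the cleanup script, which
    # deletes the frontend, backend, error and vm xenstore paths.
    SUBSYSTEM=="xen-backend", ACTION=="remove", \
        RUN+="/etc/xen/scripts/xen-hotplug-cleanup"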