
[Xen-devel] XenStore management with driver domains.



I've been experimenting with serving block storage between DomUs.
I can dynamically attach storage and transfer data to my heart's content,
but dynamic detach is providing some trouble.  Both the front and
backend drivers detach cleanly, but the XenStore data for the
attachment persists, preventing the same storage object from
being attached again.

After tracing through Xend and the hotplug scripts, it seems that
the current framework assumes backend teardown will occur in Dom0.
For example, xen-hotplug-cleanup, which is invoked when the backend
device instance is removed, removes the following paths from the
xenstore:

  /local/domain/<front domid>/device/<type>/<devid>
  /local/domain/<back domid>/backend/<type>/<front domid>/<devid>
  /local/domain/<back domid>/error/backend/<type>/<front domid>/<devid>
  /vm/<front uuid>/device/<type>/<devid>

Only Dom0 and the frontend have permissions to remove the frontend's
device tree.  Only Dom0 and the backend have permissions to remove
the backend's device and error trees.  Only Dom0 has permission to
remove the vm device tree.  So this script must be run from Dom0 to
be fully successful.
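The permission constraints above can be summarized as a small table (my own
summary, not code from Xend):

```python
# Which domains may remove each subtree of an attachment's xenstore
# state.  Dom0 may remove everything; guests are more restricted.
REMOVAL_PERMS = {
    "frontend device tree": {"Dom0", "frontend"},
    "backend device tree":  {"Dom0", "backend"},
    "backend error tree":   {"Dom0", "backend"},
    "vm device tree":       {"Dom0"},
}

def can_remove(tree, domain):
    """True if `domain` has permission to remove `tree`."""
    return domain in REMOVAL_PERMS[tree]
```

Note that no single guest domain can remove all four trees, which is why the
cleanup script only fully succeeds when run from Dom0.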

Confronted with this situation, I modified the front and backend drivers
to clean up their respective /local/domain entries.  I then modified
Xend to provide the backend domain with permissions to remove the
vm device tree.  However, the backend would need the frontend's vm
path in order to find the vm device tree, and /local/domain/<dom id>/vm
is not visible to all guests.  The more I went down this path, the less
I liked it.

My current thinking is to make the XenStore management symmetrical.  Xend
creates all of these paths, so it should be responsible for removing them
once both sides of a split driver transition to the closed state.
There is a race condition in the case of quickly destroying and recreating
the same device attachment.  However, this type of race already exists for
frontends and backends in guest domains.  Only backends within
Dom0 are protected by having their xenstore entries removed after udev
has ensured the driver instance has terminated.  I don't think protecting
against this case will be difficult.
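The trigger condition for the proposed symmetric cleanup is simple.  A
sketch, using the XenbusState values from xen/include/public/io/xenbus.h
(the helper itself is hypothetical, not existing Xend code):

```python
# Xenbus connection states, per xen/include/public/io/xenbus.h.
XenbusStateClosing = 5
XenbusStateClosed = 6

def should_remove_entries(front_state, back_state):
    """Under the proposed scheme, Xend removes an attachment's xenstore
    entries only once both halves of the split driver report Closed."""
    return (front_state == XenbusStateClosed
            and back_state == XenbusStateClosed)
```

Xend already watches the state nodes of both halves, so this amounts to
deferring removal until the second side's state transition arrives.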

Are there other options for fixing this problem I should consider?

Thanks,
Justin


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
