WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] XenStore management with driver domains.

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] XenStore management with driver domains.
From: "Justin T. Gibbs" <gibbs@xxxxxxxxxxx>
Date: Mon, 18 Jan 2010 15:24:11 -0700
Delivery-date: Mon, 18 Jan 2010 14:24:41 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: SCSIGuy.com
Reply-to: gibbs@xxxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.5) Gecko/20091204 Thunderbird/3.0
I've been experimenting with serving block storage between DomUs.
I can dynamically attach storage and transfer data to my heart's
content, but dynamic detach is giving me some trouble.  Both the
frontend and backend drivers detach cleanly, but the XenStore data
for the attachment persists, preventing the same storage object from
being attached again.

After tracing through Xend and the hotplug scripts, it seems that
the current framework assumes backend teardown will occur in Dom0.
For example, xen-hotplug-cleanup, which is invoked when the backend
device instance is removed, removes the following paths from the
xenstore:

  /local/domain/<front domid>/device/<type>/<devid>
  /local/domain/<back domid>/backend/<type>/<front domid>/<devid>
  /local/domain/<back domid>/error/backend/<type>/<front domid>/<devid>
  /vm/<front uuid>/device/<type>/<devid>
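To make the cleanup concrete, here is a minimal sketch in Python (the
language Xend is written in) of a helper that builds these four paths for
one attachment.  The function name and signature are my own invention for
illustration; the real xen-hotplug-cleanup is a shell script that derives
the paths from the udev environment and removes them with xenstore-rm.

```python
def cleanup_paths(front_domid, back_domid, front_uuid, devtype, devid):
    """Hypothetical helper (illustration only): the four XenStore paths
    that xen-hotplug-cleanup removes for one device attachment."""
    return [
        # frontend's device tree
        "/local/domain/%d/device/%s/%d" % (front_domid, devtype, devid),
        # backend's device tree
        "/local/domain/%d/backend/%s/%d/%d"
            % (back_domid, devtype, front_domid, devid),
        # backend's error tree
        "/local/domain/%d/error/backend/%s/%d/%d"
            % (back_domid, devtype, front_domid, devid),
        # vm device tree
        "/vm/%s/device/%s/%d" % (front_uuid, devtype, devid),
    ]
```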

Only Dom0 and the frontend have permission to remove the frontend's
device tree.  Only Dom0 and the backend have permission to remove
the backend's device and error trees.  Only Dom0 has permission to
remove the vm device tree.  So this script must run from Dom0 to
be fully successful.

Confronted with this situation, I modified the frontend and backend
drivers to clean up their respective /local/domain entries.  I then
modified Xend to give the backend domain permission to remove the
vm device tree.  However, the backend would need the frontend's vm
path in order to find the vm device tree, and /local/domain/<domid>/vm
is not visible to all guests.  The more I went down this path, the less
I liked it.

My current thinking is to make the XenStore management symmetrical:
Xend creates all of these paths, so it should be responsible for removing
them once both sides of a split driver transition to the closed state.
There is a race condition when the same device attachment is quickly
destroyed and recreated, but this type of race already exists for
frontends and backends in guest domains.  Only backends within
Dom0 are protected by having their XenStore entries removed after udev
has ensured the driver instance has terminated.  I don't think protecting
against this case will be difficult.
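The symmetric scheme above might look roughly like the following sketch.
None of this is Xend's actual API: the store is modeled as a plain dict,
and the function and parameter names are assumptions; the real
implementation would react to XenStore watches on the two state nodes.
The state value 6 is XenbusStateClosed from the xenbus protocol.

```python
XENBUS_STATE_CLOSED = 6  # xenbus "Closed" state

def maybe_teardown(store, front_path, back_path, cleanup_paths):
    """Hypothetical sketch: remove an attachment's XenStore entries only
    once BOTH the frontend and backend have reached Closed.

    'store' stands in for XenStore as a dict; 'cleanup_paths' lists the
    entries Xend created for this attachment.  Returns True if the
    entries were removed."""
    front_state = store.get(front_path + "/state")
    back_state = store.get(back_path + "/state")
    if front_state == XENBUS_STATE_CLOSED and back_state == XENBUS_STATE_CLOSED:
        for path in cleanup_paths:
            store.pop(path, None)
        return True
    return False
```

Since Xend already watches both halves of the split device, running this
check whenever either state node changes would remove the entries exactly
once, from the one entity that has permission over all four trees.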

Are there other options for fixing this problem I should consider?

Thanks,
Justin


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
