
Re: [Xen-devel] Re: Interdomain comms

  • To: Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
  • From: Eric Van Hensbergen <ericvh@xxxxxxxxx>
  • Date: Sat, 7 May 2005 09:57:50 -0500
  • Cc: Mike Wray <mike.wray@xxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "Ronald G. Minnich" <rminnich@xxxxxxxx>, Eric Van Hensbergen <ericvh@xxxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Sat, 07 May 2005 14:57:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 5/7/05, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> I'd need help from security experts but, as an initial stab, if the
> idc_address and remote_buffer_references are capabilities then I think
> the security falls out in the wash since it's impossible to access
> something unless you have been granted permission.

That seems straightforward and clear to me on the local host, but there
might be additional concerns when bridging to the cluster.  Of course,
those security issues may be embedded in the underlying network
transport layer, so maybe it's not as much of a concern.

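To make the capability idea concrete, here's roughly how I picture a
remote_buffer_reference being represented -- just a sketch on my part,
the struct and field names are invented and not taken from any existing
Xen or Plan 9 code:

  #include <stdint.h>

  /* Hypothetical capability-style buffer reference: the token is an
   * unguessable value minted by the granting domain, so possession of
   * the reference is itself the permission to touch the buffer. */
  struct remote_buffer_reference {
      uint64_t domain;   /* granting domain (or cluster node) id   */
      uint64_t token;    /* unguessable capability for the page(s) */
      uint32_t offset;   /* start of the granted region            */
      uint32_t length;   /* length of the granted region           */
      uint32_t rights;   /* e.g. RBR_READ | RBR_WRITE              */
  };

  #define RBR_READ  0x1
  #define RBR_WRITE 0x2

The cluster question then becomes how such a token is validated once it
crosses a physical machine boundary, which is where I'd want the
security experts involved.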
> OK, here goes:
> A significant difference between Plan 9 and Xen which is relevant to
> this discussion is that Plan 9 is designed to construct a single shared
> environment from multiple physical machines whereas Xen is designed to
> partition a single physical machine into multiple isolated environments.
> Arguably, Xen clusters might also partition multiple physical machines
> into multiple isolated environments with some weird and wonderful
> cross-machine sharing and replication going on.

Yes and no: Plan 9 does provide a coherent mechanism to unify access
to the resources of an entire cluster of physical machines -- but it
also provides a lot of facilities for partitioning and organizing
those resources into private name spaces.  It's both of these aspects
that Ron and I would like to see leveraged in any sort of future Xen
I/O architecture.  But this gets more into organizational features,
which may be a separate topic.
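(For reference, the name space partitioning I'm talking about is just
the ordinary rfork/bind mechanism; a trivial Plan 9 C fragment, with
purely illustrative paths, looks like:

  #include <u.h>
  #include <libc.h>

  void
  buildns(void)
  {
      /* give this process its own copy of the name space */
      rfork(RFNAMEG);

      /* expose an alternate network stack in place of the default */
      bind("/net.alt", "/net", MREPL);

      /* make a kernel device visible inside the partition */
      bind("#S", "/dev", MAFTER);
  }

Nothing exotic -- the point is that each guest or service can be handed
exactly the slice of the cluster's resources it is supposed to see.)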

> The significance of this difference is that in the Xen environment,
> there are many interesting opportunities for optimisations across the
> virtual machines running on the same physical machine. These
> optimisations are not relevant to a native Plan 9 system and so (AFAICT
> with 20 mins experience :-) ) there is no provision for them in 9P.

This is true.  The example you step through sounds like an
implementation of DSM targeted at a buffer cache.  In the past, 9P has
not been used to provide such a level of transparent sharing of
underlying memory.  However, an area that Orran and I have been
talking about is exploring the addition of scatter/gather type
semantics to the 9P protocol implementations (there's really not that
much that has to change in the specification, just some differences in
the way the protocol looks on the underlying transport).  In other
words, there is nothing to prevent read/write calls from having
pointers to the page containing the data versus having a copy of the
data.  This page containing the data could be a copy, or it could be
shared as in your example.  The cool thing is that, since 9P is already
set up to be a network protocol, if it did end up having to leave
shared memory and go over an external wire, there's already a fairly
rich organizational infrastructure in place (with built-in support for
things like authentication).

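To be concrete about the scatter/gather idea (this is purely a sketch
of mine, not anything in the current 9P specification or the Xen code),
the in-memory form of a write crossing a local shared-memory channel
might look something like:

  #include <stdint.h>

  /* Hypothetical transport-level form of a 9P write.  On a channel
   * between domains on the same machine the segments carry page
   * references; over a real wire the transport would fall back to
   * copying the data into the message as 9P does today. */
  struct ninep_twrite_sg {
      uint32_t fid;          /* file being written, as in standard 9P */
      uint64_t offset;
      uint32_t count;        /* total bytes described by the segments */
      uint32_t nseg;
      struct {
          uint64_t page_ref; /* grant/capability for a shared page    */
          uint32_t off;      /* offset of the data within that page   */
          uint32_t len;
      } seg[8];
  };

The client-visible read/write API wouldn't change at all; only the
marshalling underneath would.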
> Had the virtual machines been booting on different physical machines
> then the path through the FE and BE client code would have been
> identical (so we have met the network transparency goal) but the IDC
> implementation would have taken an alternative path upon discovering
> that the remote_buffer_reference was genuinely remote.

The scenario you walk through sounds really great, and providing
mechanisms to manage and recognize shared buffers on the same machine
sounds like absolutely the right thing to do.  I'm just arguing for a
simpler client interface (and I think something akin to the 9P API,
together with a nice Channel abstraction to manage the endpoints,
would be the right way to go about it).  Okay Ron, you got me into
this, what are your thoughts?
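Just so it's clear what I mean by a simpler client interface, the FE
code I have in mind is not much more than the fragment below (all of
the names are invented for illustration; the point is that the same
calls work whether the BE lives in another domain on this machine or on
another node in the cluster):

  /* Hypothetical FE-side usage: the channel hides whether the backend
   * is local or remote, and the client just speaks 9P-style operations
   * over it. */
  struct channel *c = chan_connect("net!backend!9fs");  /* endpoint mgmt */
  int fd = ninep_open(c, "/dev/vdisk0", OREAD);
  char buf[4096];

  ninep_read(c, fd, buf, sizeof buf, 0);   /* data may arrive in a shared
                                              page or as a copy; the
                                              client can't tell */
  ninep_close(c, fd);
  chan_disconnect(c);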

> Hopefully this explains better where I'm coming from.

This paints a clearer picture of what you were going after, and I
think we are on similar tracks.  I need to get more engaged in looking
at the Xen stuff so I have a better context for some of the problems
particular to its environment.

