Re: [Xen-devel] /proc/xen/xenbus supports watch?

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] /proc/xen/xenbus supports watch?
From: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Date: Mon, 26 Sep 2005 09:06:03 +1000
Cc: xen-devel List <xen-devel@xxxxxxxxxxxxxxxxxxx>, Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Delivery-date: Sun, 25 Sep 2005 23:03:41 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4318f53c57216e19ba81c096b4a0c849@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <5d7aca9505090801025f3d5771@xxxxxxxxxxxxxx> <3d8eece205090803381b818f18@xxxxxxxxxxxxxx> <1126226609.25110.3.camel@xxxxxxxxxxxxxxxxxxxxx> <3d8eece205091302422ac74f77@xxxxxxxxxxxxxx> <1126657264.7896.20.camel@xxxxxxxxxxxxxxxxxxxxx> <1126689530.4415.10.camel@xxxxxxxxxxxxxxxxxxxxx> <3d8eece205091405555a2871fc@xxxxxxxxxxxxxx> <1126748390.12119.33.camel@xxxxxxxxxxxxxxxxxxxxx> <aad156145bec3bd706ef69c0e96341a7@xxxxxxxxxxxx> <1126945564.29203.116.camel@xxxxxxxxxxxxxxxxxxxxx> <bf4f0a8e8b96fd1ac2701daa78ca52c6@xxxxxxxxxxxx> <1127088661.23870.47.camel@xxxxxxxxxxxxxxxxxxxxx> <d7335251fd831e43f944d94e22da3878@xxxxxxxxxxxx> <1127214064.2656.45.camel@xxxxxxxxxxxxxxxxxxxxx> <152436486e4a36af94a87ad6d40a768e@xxxxxxxxxxxx> <1127354853.7567.6.camel@xxxxxxxxxxxxxxxxxxxxx> <b5423e9d922e98b290db80ff4d0cba9c@xxxxxxxxxxxx> <1127429689.2722.2.camel@xxxxxxxxxxxxxxxxxxxxx> <785f15905bfe17d87d6bd0eb878cc166@xxxxxxxxxxxx> <1127618982.796.71.camel@xxxxxxxxxxxxxxxxxxxxx> <39daa0554066842da8701a90d9f01386@xxxxxxxxxxxx> <4318f53c57216e19ba81c096b4a0c849@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Sun, 2005-09-25 at 12:33 +0100, Keir Fraser wrote:
> On 25 Sep 2005, at 12:02, Keir Fraser wrote:
> 
> > Yeah, I can live with this, although: What about multiple transactions 
> > within the kernel? Do you plan to continue serialising them (e.g., on 
> > a waitqueue)? An advantage of mux/demux would be that concurrent 
> > kernel transactions could easily use the same mechanism. Your scheme 
> > places restart mechanisms in user space, so they're out of reach for 
> > kernel transactions.

We already have the mechanism: xenbus_lock.  I don't think we want to go
for parallelism within the kernel for xenstore comms: it'd be a fair
amount of pain for something which isn't exactly speed critical.  Like
Andrew said, I can't see transactions getting significantly longer.
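
For concreteness, a minimal sketch of that serialisation: one kernel-side
lock held for the duration of each transaction.  This is a sketch only,
not the in-tree xenbus code; the xs_transaction_*_msg() helpers are
hypothetical stand-ins for the real wire routines, and the exact locking
primitive may differ.

        #include <asm/semaphore.h>      /* 2.6-era binary semaphore API */

        /* Hypothetical wire helpers; in reality these talk to xenstored. */
        extern int xs_transaction_begin_msg(void);
        extern int xs_transaction_end_msg(int abort);

        static DECLARE_MUTEX(xenbus_lock);     /* semaphore initialised to 1 */

        int xenbus_transaction_start(void)
        {
                int err;

                down(&xenbus_lock);     /* one kernel transaction at a time */
                err = xs_transaction_begin_msg();   /* XS_TRANSACTION_START */
                if (err)
                        up(&xenbus_lock);   /* failed to start: release now */
                return err;
        }

        int xenbus_transaction_end(int abort)
        {
                int err;

                err = xs_transaction_end_msg(abort);  /* XS_TRANSACTION_END */
                up(&xenbus_lock);       /* next waiter (if any) may proceed */
                return err;
        }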

> Also, page-per-connection won't entirely avoid sharing of 
> state/resource in xenstored. At some point we'll want to add per-domain 
> access policy, and space/bandwidth quotas (to prevent DoS). All of 
> those must be shared between the multiple connections of a domain -- so 
> the separate connections aren't as independent as you might like.

We already have a permissions model based on domid (although not
actually enforced due to a bug: we can fix this with one line, but it will
require xend fixups, I imagine).  Space quotas will have to be by ID,
too, not by the connection(s) which created them: in the case of
migration, the store will be recreated by the tools, but should still be
counted against the ID which owns them.  So even if we multiplexed all
the connections together for one domain, the quota accounting would still
have to be kept separate, by ID.
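
To illustrate the idea (a sketch only, with hypothetical names, not the
real xenstored data structures): accounting keyed by domid rather than by
connection, so entries recreated by the tools after migration are still
charged to the owning domain.

        #include <stddef.h>

        #define MAX_DOMS 256            /* assumption for the sketch */

        struct domain_quota {
                unsigned int domid;
                unsigned int nr_entries;   /* nodes owned by this domain */
                unsigned int nr_bytes;     /* total payload owned */
                int in_use;
        };

        static struct domain_quota quotas[MAX_DOMS];

        /* Find (or create) the accounting slot for a domain id. */
        struct domain_quota *quota_for_domid(unsigned int domid)
        {
                int i, free_slot = -1;

                for (i = 0; i < MAX_DOMS; i++) {
                        if (quotas[i].in_use && quotas[i].domid == domid)
                                return &quotas[i];
                        if (!quotas[i].in_use && free_slot < 0)
                                free_slot = i;
                }
                if (free_slot < 0)
                        return NULL;
                quotas[free_slot].in_use = 1;
                quotas[free_slot].domid = domid;
                quotas[free_slot].nr_entries = 0;
                quotas[free_slot].nr_bytes = 0;
                return &quotas[free_slot];
        }

        /* Charge a new node to its owner, whichever connection wrote it. */
        int quota_charge(unsigned int owner_domid, size_t len,
                         unsigned int max_entries, unsigned int max_bytes)
        {
                struct domain_quota *q = quota_for_domid(owner_domid);

                if (!q || q->nr_entries + 1 > max_entries ||
                    q->nr_bytes + len > max_bytes)
                        return -1;      /* over quota: reject the write */
                q->nr_entries++;
                q->nr_bytes += len;
                return 0;
        }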

Bandwidth quotas are an interesting idea: I was thinking of a dumb
fairness scheme.  We almost do this: we rotate the list of connections,
but there's a FIXME about the unfair way we service domain pages.  Or we
could just measure the time we spend servicing each connection, and put
the slowest ones at the tail... (socket connections would be immune,
since we trust dom0 tools).  I haven't thought too hard about it.
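
A rough sketch of the timed variant (hypothetical types and names, not
xenstored's actual connection handling): record how long each servicing
pass took and demote the most expensive domain connection to the tail,
leaving trusted socket connections alone.

        #include <sys/time.h>
        #include <stddef.h>

        struct connection {
                struct connection *next;
                int is_socket;              /* trusted dom0 tool connection */
                double last_service_us;     /* cost of the last servicing pass */
        };

        static struct connection *connections;  /* head of the service list */

        static double now_us(void)
        {
                struct timeval tv;
                gettimeofday(&tv, NULL);
                return tv.tv_sec * 1e6 + tv.tv_usec;
        }

        /* Do one unit of work for a connection, recording its cost. */
        void service_connection(struct connection *conn)
        {
                double t0 = now_us();
                /* ... handle pending requests/watches for conn ... */
                conn->last_service_us = now_us() - t0;
        }

        /* Move the most expensive domain connection to the list tail. */
        void demote_slowest(void)
        {
                struct connection **pp, **slowest = NULL, *conn;

                for (pp = &connections; *pp; pp = &(*pp)->next)
                        if (!(*pp)->is_socket &&
                            (!slowest ||
                             (*pp)->last_service_us > (*slowest)->last_service_us))
                                slowest = pp;

                if (!slowest || !(*slowest)->next)
                        return;         /* nothing to move, or already last */

                conn = *slowest;
                *slowest = conn->next;  /* unlink */
                for (pp = &connections; *pp; pp = &(*pp)->next)
                        ;               /* walk to the tail pointer */
                conn->next = NULL;
                *pp = conn;
        }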

Thanks, I'll update the TODO file...
Rusty.
-- 
A bad analogy is like a leaky screwdriver -- Richard Braakman


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel