WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] /proc/xen/xenbus supports watch?

To: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] /proc/xen/xenbus supports watch?
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Fri, 23 Sep 2005 10:17:12 +0100
Cc: xen-devel List <xen-devel@xxxxxxxxxxxxxxxxxxx>, Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Delivery-date: Fri, 23 Sep 2005 09:10:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1127429689.2722.2.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <5d7aca9505090801025f3d5771@xxxxxxxxxxxxxx> <3d8eece205090803381b818f18@xxxxxxxxxxxxxx> <1126226609.25110.3.camel@xxxxxxxxxxxxxxxxxxxxx> <3d8eece205091302422ac74f77@xxxxxxxxxxxxxx> <1126657264.7896.20.camel@xxxxxxxxxxxxxxxxxxxxx> <1126689530.4415.10.camel@xxxxxxxxxxxxxxxxxxxxx> <3d8eece205091405555a2871fc@xxxxxxxxxxxxxx> <1126748390.12119.33.camel@xxxxxxxxxxxxxxxxxxxxx> <aad156145bec3bd706ef69c0e96341a7@xxxxxxxxxxxx> <1126945564.29203.116.camel@xxxxxxxxxxxxxxxxxxxxx> <bf4f0a8e8b96fd1ac2701daa78ca52c6@xxxxxxxxxxxx> <1127088661.23870.47.camel@xxxxxxxxxxxxxxxxxxxxx> <d7335251fd831e43f944d94e22da3878@xxxxxxxxxxxx> <1127214064.2656.45.camel@xxxxxxxxxxxxxxxxxxxxx> <152436486e4a36af94a87ad6d40a768e@xxxxxxxxxxxx> <1127354853.7567.6.camel@xxxxxxxxxxxxxxxxxxxxx> <b5423e9d922e98b290db80ff4d0cba9c@xxxxxxxxxxxx> <1127429689.2722.2.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> But but but... it doesn't *help*.  That's the entire point!
>
> OK, please describe, in simple terms, why you think save/restore is
> different if we multiplex across a single transport?

Well, maybe there's not so much in it after all. I'll assume here we go for the 'xenstored forgets all state, and clients get EAGAIN at the first available opportunity' approach.

If we mux on a single transport:
 1. The shared transport page is set up automatically in xenstored when the domain is restored. Xenstored has forgotten about any in-progress transactions.
 2. The xenbus driver marks all file handles (or transaction structures, or whatever it uses to track local state for each local transaction) as doomed. Any further activity on those transactions returns EAGAIN rather than passing through to xenstored.
 3. That's it! Clients detect failure and retry.
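As a rough sketch of steps (2) and (3), assuming some hypothetical per-transaction bookkeeping in the xenbus driver (all of the names below are invented for illustration, not the real driver API):

```c
#include <assert.h>
#include <errno.h>

#define MAX_TRANSACTIONS 8

/* Hypothetical local transaction bookkeeping; names are illustrative. */
struct xb_transaction {
    int in_use;   /* slot tracks a live local transaction */
    int doomed;   /* set after restore: xenstored has forgotten it */
};

static struct xb_transaction txns[MAX_TRANSACTIONS];

/* Step 2: on restore, doom every in-progress transaction so that later
 * requests fail locally instead of reaching the new xenstored. */
static void xb_doom_all_transactions(void)
{
    for (int i = 0; i < MAX_TRANSACTIONS; i++)
        if (txns[i].in_use)
            txns[i].doomed = 1;
}

/* Step 3: any request on a doomed transaction returns -EAGAIN; the
 * client is expected to give up, start afresh, and retry. */
static int xb_request(struct xb_transaction *t)
{
    if (t->doomed)
        return -EAGAIN;
    /* ...would forward the request to xenstored here... */
    return 0;
}
```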

If we have page per transaction:
 1. Same as (1) above.
 2. Same as (2) above, but free the per-transaction transport page.
 3. Same as (3) above.
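In either scheme the client-side recovery is the same retry loop. A minimal simulation (the store stub and all names are invented for illustration; a real client would abandon the doomed transaction and begin a fresh one before retrying):

```c
#include <assert.h>
#include <errno.h>

/* Simulated store: the first request after a restore fails with -EAGAIN
 * (the transaction was doomed); subsequent requests succeed. */
static int restore_pending = 1;

static int store_write(const char *key, const char *val)
{
    (void)key; (void)val;
    if (restore_pending) {
        restore_pending = 0;
        return -EAGAIN;
    }
    return 0;
}

/* Client retry loop: on -EAGAIN, start again with a fresh transaction.
 * Returns the number of attempts made. */
static int client_write(const char *key, const char *val)
{
    int attempts = 0;
    while (store_write(key, val) == -EAGAIN)
        attempts++;        /* doomed transaction: retry from scratch */
    return attempts + 1;   /* count the final, successful attempt */
}
```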

However, I'm not clear yet what each separate transport page represents. Is it a single transaction, or a connection that holds multiple watches and one transaction at a time? If the latter, save/restore gets a bit harder, as both the transport pages and the watches must be automatically re-registered with xenstored...

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel