WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] xenstore ring overflow when too many watches are fired

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] xenstore ring overflow when too many watches are fired
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Thu, 8 Oct 2009 22:01:25 +1100
Delivery-date: Thu, 08 Oct 2009 04:01:49 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcpIBq1xI/hXC2nOST2UVLFiVzrk3A==
Thread-topic: xenstore ring overflow when too many watches are fired
A bug has been discovered in GPLPV that causes duplicate watches to be
added when Windows resumes from hibernation. I'm not completely sure at
this point, but it appears that firing that many watches at once causes
dom0 to overwrite data on the xenstore ring.

Are there any protections in xenstored (which does the writing I think)
against xenstore ring overflow caused by a large number (>23 I think) of
watches firing in unison? I can't see any...

Obviously I'll fix the GPLPV bug too, but it would be nice to know that
too many watches wouldn't break xenstore.

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
