
RE: [Xen-devel] xenstore ring overflow when too many watches are fired

  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Thu, 8 Oct 2009 22:22:41 +1100
  • Cc:
  • Delivery-date: Thu, 08 Oct 2009 04:23:18 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcpIBq1xI/hXC2nOST2UVLFiVzrk3AAAQduAAABKkIA=
  • Thread-topic: [Xen-devel] xenstore ring overflow when too many watches are fired

> > Are there any protections in xenstored (which does the writing, I
> > assume) against xenstore ring overflow caused by a large number
> > (>23 I think) of watches firing in unison? I can't see any...
> >
> Messages (whether replies or watch notifications) get stored on a
> per-connection linked list and trickled onto the shared ring as space
> becomes available. It shouldn't be possible for the ring to overflow
> and eat its own tail.

Is it this function that prevents this tail-eating?

bool domain_can_write(struct connection *conn)
{
        struct xenstore_domain_interface *intf = conn->domain->interface;
        return ((intf->rsp_prod - intf->rsp_cons) != XENSTORE_RING_SIZE);
}

I hope I'm not just too tired to be thinking about this, but wouldn't
that only return FALSE when the ring was full? It doesn't guarantee that
there is enough space to write a whole message, and doesn't stop messages
continuing to be written once the ring has overflowed. I can't see any
other relevant reference to rsp_prod or rsp_cons in xenstored.
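For what it's worth, here is a minimal sketch (not the actual xenstored code; names like ring_space/ring_write are made up, and the memory barriers a real shared ring needs are omitted) of how a producer can avoid eating its own tail: compute the free space from the free-running prod/cons indices, write only as many bytes as fit, and leave the unwritten tail queued for later, which is the "trickle" behaviour described above:

```c
#include <stdint.h>
#include <stddef.h>

#define XENSTORE_RING_SIZE 1024  /* power of two, as in the real interface */
#define MASK_XENSTORE_IDX(idx) ((idx) & (XENSTORE_RING_SIZE - 1))

struct ring {
    char buf[XENSTORE_RING_SIZE];
    uint32_t prod, cons;  /* free-running indices, like rsp_prod/rsp_cons */
};

/* Bytes the producer may still write; 0 when the ring is full.
 * Unsigned arithmetic makes this correct even after the indices wrap. */
static uint32_t ring_space(const struct ring *r)
{
    return XENSTORE_RING_SIZE - (r->prod - r->cons);
}

/* Write at most len bytes and return how many were actually written.
 * A caller queuing whole messages must keep the unwritten remainder on
 * its own per-connection list and retry when the consumer makes room. */
static size_t ring_write(struct ring *r, const char *data, size_t len)
{
    size_t avail = ring_space(r);
    size_t n = len < avail ? len : avail;
    for (size_t i = 0; i < n; i++)
        r->buf[MASK_XENSTORE_IDX(r->prod)] = data[i], r->prod++;
    return n;
}
```

The point of the sketch is that a bare `(prod - cons) != RING_SIZE` check only tells you the ring isn't completely full; a safe writer has to bound each copy by the actual free space.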




