
RE: [Xen-devel] [PATCH][1/3] evtchn race condition


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
  • From: "Woller, Thomas" <thomas.woller@xxxxxxx>
  • Date: Wed, 25 Jan 2006 09:56:02 -0600
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 25 Jan 2006 16:04:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcYhxsYQjOEmkwuTTaCDw9dO4eHSVgAAHPug
  • Thread-topic: [Xen-devel] [PATCH][1/3] evtchn race condition

> 
> The problem is that the hvm/io.c code is quite simply broken. 
> A correctly-implemented event recipient does not need to be 
> serialised w.r.t. evtchn_send() to work correctly. After all, 
> in the case of a paravirtualised guest, the recipient is not 
> in Xen at all!
> 
> The correct ordering for the recipient to clear an event is:
>   clear evtchn_upcall_pending
>   clear bits in evtchn_pending_sel before acting on them
>   clear bits in evtchn_pending before acting on them
> 
> So, for example, the code that checks evtchn_pending[] and 
> then clears a bit in evtchn_pending_sel is totally screwed. 
> It races evtchn_send() re-setting the evtchn_pending[] bit! 
> Fortunately, the comment that 'evtchn_pending_sel is shared 
> by other event channels' is actually false right now. The 
> *only* event channel a VMX domain cares about is its iopacket_port.

Ah... Ok. I'll rework the hvm code with the correct ordering and give it
a test...
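
Something like this is what I have in mind (just a sketch with
illustrative names, not the actual hvm/io.c code), assuming I'm reading
evtchn_set_pending() right: the sender sets evtchn_pending[], then the
selector, then evtchn_upcall_pending, so the recipient clears in the
reverse order, using the atomic test_and_clear_bit() so a concurrent
send just re-sets the bit rather than getting lost:

/* Recipient-side ordering for the single iopacket_port case.
 * vi points at this vcpu's vcpu_info in shared_info;
 * evtchn_pending is shared_info->evtchn_pending. */
static void clear_iopacket_event(vcpu_info_t *vi,
                                 unsigned long *evtchn_pending,
                                 unsigned int port)
{
    /* 1. Clear the per-vcpu upcall flag first. */
    vi->evtchn_upcall_pending = 0;

    /* 2. Clear the selector bit *before* acting on it.  If
     *    evtchn_send() slips in here it re-sets the bit, so the
     *    event is picked up on a later pass instead of lost. */
    if ( !test_and_clear_bit(port / BITS_PER_LONG,
                             &vi->evtchn_pending_sel) )
        return;

    /* 3. Likewise clear the per-port pending bit before acting. */
    if ( test_and_clear_bit(port, evtchn_pending) )
    {
        /* ...now safe to consume the io packet... */
    }
}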

Cheers,
tom
 




 

