xen-devel

Re: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: Re: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 24 Sep 2008 12:45:52 +0100
Cc: Yunhong Jiang <yunhong.jiang@xxxxxxxxx>, Haitao Shan <haitao.shan@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C4FFC32C.27684%keir.fraser@xxxxxxxxxxxxx>
On 24/9/08 10:13, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>> How about this one? It doesn't exactly follow the path you suggested,
>> i.e. doesn't mess with event channels, but rather marks the
>> irq<->vector mapping as invalid, allowing only a subsequent event
>> channel unbind (i.e. close) to recover from that state (which seems
>> better in terms of requiring proper discipline in the guest, as it
>> prevents re-using the irq or vector without properly cleaning up).
> 
> Yeah, this is the kind of thing I had in mind. I will work on this a bit more
> (e.g., need to synchronise on d->evtchn_lock to avoid racing
> EVTCHNOP_bind_pirq; also I'm afraid about leaking MSI vectors on domain
> destruction). Thanks.
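
(For concreteness, the invalidation approach described in the quoted message
amounts to something like the sketch below. All names here -- IRQ_VECTOR_INVALID,
struct domain_irq_info, and so on -- are hypothetical stand-ins, not the code
from the actual patch.)

#include <errno.h>

#define NR_IRQS              256
#define IRQ_VECTOR_UNUSED      0
#define IRQ_VECTOR_INVALID    -1  /* poisoned: backing vector is gone */

struct domain_irq_info {
    int irq_to_vector[NR_IRQS];   /* per-domain pirq -> vector mapping */
};

/* Called when the underlying vector disappears (e.g. device teardown):
 * the mapping is poisoned rather than simply cleared. */
static void invalidate_domain_irq(struct domain_irq_info *info, int irq)
{
    info->irq_to_vector[irq] = IRQ_VECTOR_INVALID;
}

/* Re-binding is refused while the slot is in use *or* poisoned, so the
 * guest cannot reuse the irq or vector without cleaning up first. */
static int bind_pirq(struct domain_irq_info *info, int irq, int vector)
{
    if (info->irq_to_vector[irq] != IRQ_VECTOR_UNUSED)
        return -EBUSY;
    info->irq_to_vector[irq] = vector;
    return 0;
}

/* Only an explicit unbind (event-channel close) recovers the slot. */
static void unbind_pirq(struct domain_irq_info *info, int irq)
{
    info->irq_to_vector[irq] = IRQ_VECTOR_UNUSED;
}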

Changeset 18539. I made the locking quite a bit stricter by getting rid of
'irq_lock' and using 'evtchn_lock' instead (which might now be better renamed
to 'event_lock', since it no longer protects just event channels). The
pirq-to-vector mapping is now protected by both evtchn_lock and
irq_desc->lock, so users of the mapping can protect themselves with either
lock (whichever is more convenient).
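
To make the rule concrete: below is a minimal user-space sketch of that
discipline (pthread mutexes stand in for Xen spinlocks, and the structure
layout is simplified and hypothetical rather than the real Xen code). Writers
of the mapping take both locks, so a reader holding either one of the two
sees a stable value.

#include <pthread.h>

#define NR_PIRQS 256

struct irq_desc {
    pthread_mutex_t lock;
};

struct domain {
    pthread_mutex_t evtchn_lock;   /* the lock that might become 'event_lock' */
    int pirq_to_vector[NR_PIRQS];
};

/* Writer: must hold both the per-domain lock and the irq_desc lock. */
static void set_pirq_vector(struct domain *d, struct irq_desc *desc,
                            int pirq, int vector)
{
    pthread_mutex_lock(&d->evtchn_lock);
    pthread_mutex_lock(&desc->lock);
    d->pirq_to_vector[pirq] = vector;
    pthread_mutex_unlock(&desc->lock);
    pthread_mutex_unlock(&d->evtchn_lock);
}

/* Reader: either lock alone is enough; here the per-domain lock is used. */
static int read_pirq_vector(struct domain *d, int pirq)
{
    int vector;
    pthread_mutex_lock(&d->evtchn_lock);
    vector = d->pirq_to_vector[pirq];
    pthread_mutex_unlock(&d->evtchn_lock);
    return vector;
}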

Some TODOs:
 * There is no management of MSI vectors. They always leak! I didn't fix
that here since it isn't a mere synchronisation race; the code just isn't
there.
 * I decided to WARN_ON(!spin_is_locked(&d->evtchn_lock)) in
pirq_guest_[un]bind() (see the sketch after this list). The reason is that
those functions do not expect to be re-entered in any case -- they really
want to be serialised per domain. Furthermore, I am pretty certain that the
HVM passthrough code does not synchronise properly with changes to the
pirq-to-vector mapping (it uses domain_irq_to_vector() in many places with no
care for locking), nor is it synchronised with other users of
pirq_guest_bind() -- reasonable semantics here would be that a domain pirq
can be bound to only once, either via an event channel or through a virtual
PIC in HVM emulation context. I therefore think careful locking is required
-- it may suffice to get rid of (or at least make less use of)
hvm_domain.irq_lock and replace its uses with evtchn_lock (the only
consideration being that the latter is not IRQ-safe). The WARN_ON() is a nice
reminder of work to be done here. ;-)
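
A cut-down sketch of what that assertion looks like in context follows
(hypothetical, simplified types and signature; the real pirq_guest_bind()
does far more). The point is only that callers are expected to already hold
the per-domain event-channel lock, and the warning flags any path, e.g. the
HVM passthrough code, that calls in without it.

#include <stdio.h>

#define WARN_ON(cond)                                          \
    do {                                                       \
        if (cond)                                              \
            fprintf(stderr, "WARNING at %s:%d: %s\n",          \
                    __FILE__, __LINE__, #cond);                \
    } while (0)

/* Stand-in spinlock that remembers whether it is held. */
struct spinlock { int held; };
static void spin_lock(struct spinlock *l)      { l->held = 1; }
static void spin_unlock(struct spinlock *l)    { l->held = 0; }
static int  spin_is_locked(struct spinlock *l) { return l->held; }

struct domain {
    struct spinlock evtchn_lock;
    /* pirq state omitted in this sketch */
};

/* Callers must serialise on d->evtchn_lock before binding a guest pirq. */
static int pirq_guest_bind(struct domain *d, int pirq, int will_share)
{
    WARN_ON(!spin_is_locked(&d->evtchn_lock));

    /* ... the actual binding work would happen here ... */
    (void)pirq;
    (void)will_share;
    return 0;
}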

Comments?

 -- Keir


