Re: [Xen-devel] [PATCH] x86: allow NMI injection

To: Jan Beulich <jbeulich@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] x86: allow NMI injection
From: Keir Fraser <keir@xxxxxxxxxxxxx>
Date: Wed, 28 Feb 2007 11:59:58 +0000
Delivery-date: Wed, 28 Feb 2007 03:59:23 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <45E57803.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdbL/epNi3bbMcjEduA9AAX8io7RQ==
Thread-topic: [Xen-devel] [PATCH] x86: allow NMI injection
User-agent: Microsoft-Entourage/11.2.5.060620
On 28/2/07 11:39, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> NetWare's internal debugger needs the ability to send NMI IPIs, and
> there is no reason not to allow domUs, or dom0's vCPUs other than vCPU 0,
> to handle NMIs (they just will never see hardware-generated ones).
> 
> While it currently has no frontend, the added hypercall can also be used
> to inject NMIs into foreign VMs.
> 
> In the process, this also fixes a potential race condition caused by
> previously accessing the VCPU flags field non-atomically in entry.S.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>

Notably, this patch changes the way an NMI handler is registered, to use the
native-like vector 2. This changes the guest interface, though. Do you
really need to be able to specify a custom CS? Can you not vector to the
flat CS and then far jump?
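
For comparison, a rough sketch of what that registration could look like
from a Linux-style PV guest (not taken from the patch; the nmi_entry stub
and the header paths are assumptions):

    #include <xen/interface/xen.h>     /* struct trap_info, FLAT_KERNEL_CS */
    #include <asm/xen/hypercall.h>     /* HYPERVISOR_set_trap_table()      */

    extern void nmi_entry(void);       /* guest NMI entry stub (assumed)   */

    static int register_vnmi_handler(void)
    {
        struct trap_info nmi_trap[] = {
            {
                .vector  = 2,                     /* NMI, as on bare metal */
                .flags   = 0,                     /* DPL 0                 */
                .cs      = FLAT_KERNEL_CS,        /* flat CS, no custom
                                                     selector              */
                .address = (unsigned long)nmi_entry,
            },
            { 0 },                                /* terminator entry      */
        };

        /* Install just the vector-2 entry into the per-vCPU trap table. */
        return HYPERVISOR_set_trap_table(nmi_trap);
    }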

I'm not sure about making the IPI function a physdev_op(), since this is
still a virtual NMI (it has nothing to do with real hardware NMIs). It
might be better to make it a vcpu_op. A vcpu_op would not be a great fit
for send-to-all / send-to-all-but-self overrides, but I'm not sure how
important that optimisation is (or how important it is to make the NMI
deliveries as simultaneous as possible).
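
To make that concrete, a hypothetical vcpu_op-shaped interface might look
like the following (VCPUOP_send_nmi is an illustrative name only, not the
interface from the patch; header paths are assumptions):

    #include <xen/interface/vcpu.h>    /* vcpu_op sub-commands (assumed) */
    #include <asm/xen/hypercall.h>     /* HYPERVISOR_vcpu_op()           */

    /* Hypothetical: deliver a virtual NMI to one of the caller's vCPUs. */
    static int send_virtual_nmi(unsigned int vcpu)
    {
        return HYPERVISOR_vcpu_op(VCPUOP_send_nmi /* assumed */, vcpu, NULL);
    }

    /*
     * Without send-to-all / all-but-self overrides, a broadcast becomes a
     * plain loop over vCPUs, so the deliveries are serialised rather than
     * simultaneous; that is the optimisation trade-off mentioned above.
     */
    static void send_virtual_nmi_to_all(unsigned int nr_vcpus)
    {
        unsigned int cpu;

        for (cpu = 0; cpu < nr_vcpus; cpu++)
            send_virtual_nmi(cpu);
    }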

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
