
Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist code



On Thu, Aug 13, 2020 at 09:10:31AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of David 
> > Woodhouse
> > Sent: 11 August 2020 14:25
> > To: Paul Durrant <paul.durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; 
> > Roger Pau Monne
> > <roger.pau@xxxxxxxxxx>
> > Cc: Eslam Elnikety <elnikety@xxxxxxxxx>; Andrew Cooper 
> > <andrew.cooper3@xxxxxxxxxx>; Shan Haitao
> > <haitao.shan@xxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
> > Subject: Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist 
> > code
> > 
> > Resending this straw man patch at Roger's request, to restart discussion.
> > 
> > Redux: In order to cope with the relatively rare case of unmaskable
> > legacy MSIs, each vlapic EOI takes a domain-global spinlock just to
> > iterate over all IRQs and determine that there's actually nothing to
> > do.
> > 
> > In my testing, I observe that this drops Windows performance on passed-
> > through NVMe from 2.2M IOPS down to about 1.0M IOPS.
> > 
> > I have a variant of this patch which just has a single per-domain "I
> > attached legacy unmaskable MSIs" flag, which is never cleared. The
> > patch below is per-vector (but Roger points out it should be per-vCPU
> > per-vector). I don't know that we really care enough to do more than
> > the single per-domain flag, which in real life would never happen
> > anyway unless you have crappy hardware, at which point you get back to
> > today's status quo.
> > 
> > My main concern is that this code is fairly sparsely documented and I'm
> > only 99% sure that this code path really *is* only for unmaskable MSIs,
> > and doesn't have some other esoteric use. A second opinion on that
> > would be particularly welcome.
> > 
> 
> The loop appears to be there to handle the case where multiple
> devices assigned to a domain have MSIs programmed with the same
> dest/vector... which seems like an odd thing for a guest to do but I
> guess it is at liberty to do it. Does it matter whether they are
> maskable or not?

Such a configuration would never work properly, as lapic vectors are
edge triggered and thus can't safely be shared between devices?

I think the iteration is there in order to get the hvm_pirq_dpci
struct that injected that specific vector, so that you can perform the
ack if required. Having lapic EOI callbacks should simplify this, as
you can pass a hvm_pirq_dpci when injecting a vector, and it would be
forwarded to the EOI callback, so there should be no need to iterate
over the list of hvm_pirq_dpci for a domain.
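
To make that concrete, here is a minimal, self-contained sketch of a
per-vector EOI callback; every name in it (sketch_vlapic,
vlapic_set_irq_cb, dpci_ack) is an illustrative stand-in and not Xen's
actual interface:

/*
 * Sketch only: record, per vector, who to notify on EOI, so the EOI
 * path calls back directly instead of walking the domain's
 * hvm_pirq_dpci list under a global lock.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NR_VECTORS 256

typedef void (*eoi_cb_t)(unsigned int vector, void *data);

struct sketch_vlapic {
    /* One callback slot per vector, filled in when the vector is injected. */
    eoi_cb_t cb[NR_VECTORS];
    void *cb_data[NR_VECTORS];
};

/* Inject 'vector' and remember who to notify on EOI (e.g. a hvm_pirq_dpci). */
static void vlapic_set_irq_cb(struct sketch_vlapic *vlapic, uint8_t vector,
                              eoi_cb_t cb, void *data)
{
    vlapic->cb[vector] = cb;
    vlapic->cb_data[vector] = data;
    /* ... set the bit in IRR, kick the vCPU, etc. ... */
}

/* EOI path: invoke the recorded callback directly, no domain-wide iteration. */
static void vlapic_handle_eoi(struct sketch_vlapic *vlapic, uint8_t vector)
{
    eoi_cb_t cb = vlapic->cb[vector];

    vlapic->cb[vector] = NULL;
    if ( cb )
        cb(vector, vlapic->cb_data[vector]);
}

/* Example callback standing in for the dpci ack of an unmaskable MSI. */
static void dpci_ack(unsigned int vector, void *data)
{
    printf("ack pirq_dpci %p for vector %u\n", data, vector);
}

int main(void)
{
    struct sketch_vlapic vlapic = { 0 };
    int fake_dpci = 0;

    vlapic_set_irq_cb(&vlapic, 0x40, dpci_ack, &fake_dpci);
    vlapic_handle_eoi(&vlapic, 0x40);
    return 0;
}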

Roger.
