
Re: [Xen-devel] [PATCH v2 07/12] x86/IRQ: fix locking around vector management



> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Wednesday, May 8, 2019 9:16 PM
> 
> >>> On 08.05.19 at 15:10, <JBeulich@xxxxxxxx> wrote:
> > All of __{assign,bind,clear}_irq_vector() manipulate struct irq_desc
> > fields, and hence ought to be called with the descriptor lock held in
> > addition to vector_lock. This is currently the case only for
> > set_desc_affinity() (in the common case) and destroy_irq(), which also
> > clarifies what the nesting behavior between the locks has to be.
> > Reflect the new expectation by having these functions all take a
> > descriptor as parameter instead of an interrupt number.
> >
> > Also take care of the two special cases of calls to set_desc_affinity():
> > set_ioapic_affinity_irq() and VT-d's dma_msi_set_affinity() get called
> > directly as well, and in these cases the descriptor locks had not been
> > acquired until now. For set_ioapic_affinity_irq() this means acquiring /
> > releasing the IO-APIC lock can then be plain spin_{,un}lock().
> >
> > Drop one of the two leading underscores from all three functions at
> > the same time.
> >
> > There's one case left where descriptors get manipulated with just
> > vector_lock held: setup_vector_irq() expects its caller to have acquired
> > vector_lock, and hence can't itself acquire the descriptor locks (wrong
> > lock order). I don't currently see how to address this.
> >
> > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> > v2: Also adjust set_ioapic_affinity_irq() and VT-d's
> >     dma_msi_set_affinity().
> 
> I'm sorry, Kevin, I should have Cc-ed you on this one.

Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx> for the VT-d part.
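
For readers following the locking change, below is a minimal, self-contained
sketch of the nesting the patch description implies: the per-IRQ descriptor
lock is the outer lock, the global vector_lock the inner one, and the
vector-management helper takes a struct irq_desc * so its caller has already
looked up and locked the descriptor. This is not the actual Xen code -- the
types, names, and pthread mutexes are simplified stand-ins used purely to
illustrate the ordering.

#include <pthread.h>

/* Stand-in for Xen's struct irq_desc; only the fields needed here. */
struct irq_desc {
    pthread_mutex_t lock;   /* per-IRQ descriptor lock (outer) */
    int arch_vector;        /* stand-in for the assigned vector */
};

/* Stand-in for the global vector_lock (inner). */
static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of _assign_irq_vector(): runs with both locks held and is
 * handed the descriptor, not an IRQ number. */
static void assign_vector_locked(struct irq_desc *desc, int vector)
{
    desc->arch_vector = vector;
}

void assign_vector(struct irq_desc *desc, int vector)
{
    pthread_mutex_lock(&desc->lock);    /* outer: descriptor lock first */
    pthread_mutex_lock(&vector_lock);   /* inner: global vector lock second */

    assign_vector_locked(desc, vector);

    pthread_mutex_unlock(&vector_lock);
    pthread_mutex_unlock(&desc->lock);
}

/* The remaining hole the description mentions: a path whose caller already
 * holds vector_lock (as with setup_vector_irq()) cannot also take
 * desc->lock, since that would nest the locks as vector_lock -> desc->lock,
 * the reverse of assign_vector() above, and could deadlock against a CPU
 * running assign_vector() on the same descriptor.  Such a path therefore
 * still touches the descriptor with only vector_lock held. */
void update_with_only_vector_lock(struct irq_desc *desc, int vector)
{
    /* vector_lock is assumed to be held by the caller here. */
    desc->arch_vector = vector;
}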
