
Re: [Xen-devel] [PATCH v4 4/6] xen/x86: Allow stubdom access to irq created for msi.



On Thu, Feb 07, 2019 at 03:57:54PM +0100, Roger Pau Monné wrote:
> On Thu, Feb 07, 2019 at 03:52:38PM +0100, Marek Marczykowski-Górecki wrote:
> > On Thu, Feb 07, 2019 at 02:21:27PM +0100, Marek Marczykowski-Górecki wrote:
> > > On Thu, Feb 07, 2019 at 10:57:19AM +0100, Roger Pau Monné wrote:
> > > > On Thu, Feb 07, 2019 at 01:07:47AM +0100, Marek Marczykowski-Górecki wrote:
> > > > > From: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > 
> > > > > Stubdomains need to be given sufficient privilege over the guest for
> > > > > which they provide emulation in order for PCI passthrough to work
> > > > > correctly. When an HVM domain tries to enable MSI, QEMU in the
> > > > > stubdomain calls PHYSDEVOP_map_pirq, but later it needs to call
> > > > > XEN_DOMCTL_bind_pt_irq as part of xc_domain_update_msi_irq. Allow for
> > > > > that as part of PHYSDEVOP_map_pirq.
> > > > > 
> > > > > This is not needed for PCI INTx, because the IRQ in that case is known
> > > > > beforehand and the stubdomain is given permissions over this IRQ by
> > > > > libxl__device_pci_add (there's a do_pci_add against the stubdomain).
> > > > > 
> > > > > Based on https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch
> > > > > by Eric Chanudet <chanudete@xxxxxxxxxxxx>.
> > > > > 
> > > > > Signed-off-by: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > ---
> > > > > Changes in v3:
> > > > >  - extend commit message
> > > > > Changes in v4:
> > > > >  - add missing destroy_irq on error path
> > > > > 
> > > > > With this patch, the stubdomain will be able to create and map multiple
> > > > > IRQs (DoS possibility?), as only the target domain is validated in
> > > > > practice. Is that OK? If not, what additional limits could be applied
> > > > > here?
> > > > > In the INTx case the problem doesn't apply, because the toolstack grants
> > > > > access to a particular IRQ and no allocation happens on a stubdomain
> > > > > request. But in the MSI case it isn't that easy, as the IRQ number isn't
> > > > > known beforehand (as explained in the commit message).
> > > > > ---
> > > > >  xen/arch/x86/irq.c     | 24 ++++++++++++++++++++++++
> > > > >  xen/arch/x86/physdev.c |  9 +++++++++
> > > > >  2 files changed, 33 insertions(+)
> > > > > 
> > > > > diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> > > > > index 8b44d6c..5e5dcac 100644
> > > > > --- a/xen/arch/x86/irq.c
> > > > > +++ b/xen/arch/x86/irq.c
> > > > > @@ -2674,6 +2674,22 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
> > > > >          {
> > > > >      case MAP_PIRQ_TYPE_MULTI_MSI:
> > > > >              irq = create_irq(NUMA_NO_NODE);
> > > > > +            if ( !(irq < nr_irqs_gsi || irq >= nr_irqs) &&
> > > > > +                    current->domain->target == d )
> > > > > +            {
> > > > > +                ret = irq_permit_access(current->domain, irq);
> > > > > +                if ( ret ) {
> > > > > +                    dprintk(XENLOG_G_ERR,
> > > > > +                            "dom%d: can't grant it's stubdom (%d) access to "
> > > > > +                            "irq %d for msi: %d!\n",
> > > > > +                            d->domain_id,
> > > > > +                            current->domain->domain_id,
> > > > > +                            irq,
> > > > > +                            ret);
> > > > > +                    destroy_irq(irq);
> > > > > +                    return ret;
> > > > 
> > > > I'm afraid this won't work for devices that support multiple MSI vectors.
> > > > Note that map_domain_pirq also has a call to create_irq, and you are
> > > > not adding the stubdom permissions there.
> > > > 
> > > > IMO, the safer way to fix this would be to modify create_irq and
> > > > destroy_irq so that you give permissions to the stubdomain in the same
> > > > place that hardware domain permissions are given. Note that you will
> > > > have to change the function to take an extra domain parameter
> > > > AFAICT.
> > > 
> > > That may be a good idea, I'll try.
> > 
> > Hmm, looking at the code, wouldn't it make sense to give the device model
> > domain access to the IRQ _instead of_ the hardware domain? If a stubdomain
> > is in use, I don't see why dom0 would need access to that IRQ. Simply pass
> > in the device model domain as a parameter - either hardware_domain or the
> > stubdomain. Something like:
> > 
> >     create_irq(..., current->domain->target == d ? current->domain : hardware_domain);
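
To spell that out a bit more, the direction could look roughly like this
(untested sketch only; find_unassigned_irq() and release_irq_slot() are
placeholders for the allocation/cleanup code that already lives in
create_irq(), not real functions):

    /* Sketch, not a real patch: let the caller name the domain that is
     * granted access to the freshly allocated IRQ, instead of hardcoding
     * hardware_domain inside create_irq()/destroy_irq(). */
    int create_irq(nodeid_t node, struct domain *grant_to)
    {
        int irq, ret;

        irq = find_unassigned_irq(node);   /* placeholder for existing allocation */
        if ( irq < 0 )
            return irq;

        if ( grant_to )
        {
            ret = irq_permit_access(grant_to, irq);
            if ( ret )
            {
                release_irq_slot(irq);     /* placeholder for existing cleanup */
                return ret;
            }
        }

        return irq;
    }

    /* Matching change on teardown: revoke access from the same domain. */
    void destroy_irq(unsigned int irq, struct domain *deny_from)
    {
        if ( deny_from )
            irq_deny_access(deny_from, irq);
        /* existing teardown of the irq_desc follows here */
    }

That way allocate_and_map_msi_pirq() (and the create_irq() call in
map_domain_pirq() that you mentioned) could simply pass the right domain and
would not need the extra irq_permit_access() block from this patch.
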
> 
> Isn't there some cleanup that likely needs to be done by dom0 if it's
> not done by the stubdom, or in case the stubdom crashes for some
> reason?

I don't think the toolstack knows anything about IRQs allocated by the
device model; it looks like it does cleanup only for INTx interrupts.

> Or maybe that's already done on domain destruction by Xen itself, in
> which case not giving permissions to dom0 would be fine.

There is a free_domain_pirqs() call in arch_domain_destroy(), but I don't
have a device model reference there. Is there a way to get the target ->
stubdomain mapping (other than iterating over all the domains)? I see the
domain->target field, but that is the other way around.
The only thing needed there is an irq_deny_access() call (in case of domain
ID reuse). Since such IRQs are not mapped to the stubdomain itself,
free_domain_pirqs() for the stubdomain will not clean this up.
Or maybe, _if the stubdomain is guaranteed to be destroyed before its
target_, we could iterate over the target domain's IRQs during stubdomain
destruction for this purpose?
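
Roughly, what I have in mind is something like this (untested sketch;
deny_stubdom_irq_access() is a hypothetical helper, and where it would be
called from is exactly the open question above):

    /* Sketch: on stubdomain destruction, drop the IRQ permissions that were
     * granted to it for its target domain's interrupts.  Assumes the target
     * is still around at that point. */
    static void deny_stubdom_irq_access(struct domain *stubdom,
                                        struct domain *target)
    {
        unsigned int pirq;
        int irq;

        for ( pirq = 0; pirq < target->nr_pirqs; pirq++ )
        {
            irq = domain_pirq_to_irq(target, pirq);
            if ( irq > 0 )
                irq_deny_access(stubdom, irq);
        }
    }

Presumably this would also need the target's event_lock, like
free_domain_pirqs() takes, and it only covers the dynamically created MSI
IRQs discussed here.
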

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

