
Re: [PATCH] x86/physdev: Call xsm_unmap_domain_irq earlier



On 3/25/22 10:18, Jason Andryuk wrote:
> Pull the XSM check up out of unmap_domain_pirq into physdev_map_pirq.
> 
> xsm_unmap_domain_irq was seen denying unmap_domain_pirq when called from
> complete_domain_destroy as an RCU callback.  The source context was an
> unexpected, random domain.  Since this is a xen-internal operation,
> going through the XSM hook is inappropriate.

To me this is the first problem that should be addressed. Why is current
pointing at a random domain, and is it possible to ensure it gets
correctly set to a system domain, e.g. DOMID_IDLE or DOMID_XEN?

> Move the XSM hook up into physdev_unmap_pirq, which is the
> guest-accessible path.  This requires moving some of the sanity check
> upwards as well since the hook needs the additional data to make its
> decision.  Since complete_domain_destroy still calls unmap_domain_pirq,
> replace the moved runtime checking with assert.  Only valid pirqs should
> make their way into unmap_domain_pirq from complete_domain_destroy.

This is moving the time of check away from the time of use. While today
it may be safe, because the only guest access is through a limited
interface (vpci_msi_disable), this introduces the risk that future code
will make this call without making the appropriate XSM check when
processing a guest request. This is one of many reasons why I would
dissuade moving resource-access checks away from the resource access
itself.

While it is related, in this thread I will not disagree with your point
that XSM checks on internal calls have no meaning at this point. Still,
we should not weaken the protection. Is there another way we can
determine that the call is being made internally? For example, as I
suggested above, get `current` set to a system domain, then fix the
default policy to allow system domains to make the call; those using
flask would just update their policy to allow the same.

> This is mostly code movement, but one style change is to pull `irq =
> info->arch.irq` out of the if condition.
> 
> Label done is now unused and removed.
> 
> Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
> ---
> unmap_domain_pirq is also called in vioapic_hwdom_map_gsi and
> vpci_msi_disable.  vioapic_hwdom_map_gsi is a cleanup path after going
> through map_domain_pirq, and I don't think the vpci code is directly
> guest-accessible.  So I think those are okay, but I am not familiar with
> that code.  Hence, I am highlighting it.
> 
>  xen/arch/x86/irq.c     | 31 +++++++-----------------------
>  xen/arch/x86/physdev.c | 43 +++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 49 insertions(+), 25 deletions(-)
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 285ac399fb..ddd3194fba 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2310,41 +2310,25 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>      struct pirq *info;
>      struct msi_desc *msi_desc = NULL;
>  
> -    if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
> -        return -EINVAL;
> -
> +    ASSERT(pirq >= 0);
> +    ASSERT(pirq < d->nr_pirqs);
>      ASSERT(pcidevs_locked());
>      ASSERT(spin_is_locked(&d->event_lock));
>  
>      info = pirq_info(d, pirq);
> -    if ( !info || (irq = info->arch.irq) <= 0 )
> -    {
> -        dprintk(XENLOG_G_ERR, "dom%d: pirq %d not mapped\n",
> -                d->domain_id, pirq);
> -        ret = -EINVAL;
> -        goto done;
> -    }
> +    ASSERT(info);
> +
> +    irq = info->arch.irq;
> +    ASSERT(irq > 0);
>  
>      desc = irq_to_desc(irq);
>      msi_desc = desc->msi_desc;
>      if ( msi_desc && msi_desc->msi_attrib.type == PCI_CAP_ID_MSI )
>      {
> -        if ( msi_desc->msi_attrib.entry_nr )
> -        {
> -            printk(XENLOG_G_ERR
> -                   "dom%d: trying to unmap secondary MSI pirq %d\n",
> -                   d->domain_id, pirq);
> -            ret = -EBUSY;
> -            goto done;
> -        }
> +        ASSERT(msi_desc->msi_attrib.entry_nr == 0);
>          nr = msi_desc->msi.nvec;
>      }
>  
> -    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
> -                               msi_desc ? msi_desc->dev : NULL);
> -    if ( ret )
> -        goto done;
> -
>      forced_unbind = pirq_guest_force_unbind(d, info);
>      if ( forced_unbind )
>          dprintk(XENLOG_G_WARNING, "dom%d: forcing unbind of pirq %d\n",
> @@ -2405,7 +2389,6 @@ int unmap_domain_pirq(struct domain *d, int pirq)
>      if (msi_desc)
>          msi_free_irq(msi_desc);
>  
> - done:
>      return ret;
>  }
>  
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 2ddcf44f33..a5ed257dca 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -140,8 +140,11 @@ int physdev_map_pirq(domid_t domid, int type, int 
> *index, int *pirq_p,
>  
>  int physdev_unmap_pirq(domid_t domid, int pirq)
>  {
> +    struct msi_desc *msi_desc;
> +    struct irq_desc *desc;
> +    struct pirq *info;
>      struct domain *d;
> -    int ret = 0;
> +    int irq, ret = 0;
>  
>      d = rcu_lock_domain_by_any_id(domid);
>      if ( d == NULL )
> @@ -162,9 +165,47 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
>              goto free_domain;
>      }
>  
> +    if ( (pirq < 0) || (pirq >= d->nr_pirqs) ) {
> +        ret = -EINVAL;
> +        goto free_domain;
> +    }
> +
>      pcidevs_lock();
>      spin_lock(&d->event_lock);
> +
> +    info = pirq_info(d, pirq);
> +    irq = info ? info->arch.irq : 0;
> +    if ( !info || irq <= 0 )
> +    {
> +        dprintk(XENLOG_G_ERR, "dom%d: pirq %d not mapped\n",
> +                d->domain_id, pirq);
> +        ret = -EINVAL;
> +        goto unlock;
> +    }
> +
> +    desc = irq_to_desc(irq);
> +    msi_desc = desc->msi_desc;
> +    if ( msi_desc && msi_desc->msi_attrib.type == PCI_CAP_ID_MSI )
> +    {
> +        if ( msi_desc->msi_attrib.entry_nr )
> +        {
> +            printk(XENLOG_G_ERR
> +                   "dom%d: trying to unmap secondary MSI pirq %d\n",
> +                   d->domain_id, pirq);
> +            ret = -EBUSY;
> +            goto unlock;
> +        }
> +    }
> +
> +    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
> +                               msi_desc ? msi_desc->dev : NULL);
> +    if ( ret )
> +        goto unlock;
> +
>      ret = unmap_domain_pirq(d, pirq);
> +
> + unlock:
> +
>      spin_unlock(&d->event_lock);
>      pcidevs_unlock();
>  



 

