
Re: Nonsensical XSM Flask denial



On Thu, Mar 17, 2022 at 2:14 PM Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx> wrote:
>
> On 17/03/2022 17:52, Jason Andryuk wrote:
> > I shut down a domU (HVM dom9 w/ Linux stubdom dom10) with a single PCI
> > device assigned.  Xen logged the following Flask denial for a second
> > PVH dom5 (uivm) without any PCI devices assigned.  This is Xen 4.14.4.
> >
> > (XEN) avc:  denied  { remove_irq } for domid=5 irq=17
> > scontext=system_u:system_r:uivm_t
> > tcontext=system_u:object_r:shared_irq_t tclass=resource
> >
> > Domain 5 as uivm_t and irq 17 as shared_irq_t both look correct.  But
> > it doesn't make sense that uivm would make a hypercall for an irq.
> >
> > Could this be from RCU calling complete_domain_destroy() when current
> > is dom5 (uivm)?  What would current be set to when RCU runs its
> > callbacks?
>
> RCU runs in softirq context, so yes - (almost) any use of current would
> be bogus.
>
> But I can't spot any overlap between the physdevop_unmap_pirq XSM check,
> and complete_domain_destroy().
>
> Any chance you can reproduce this with a WARN() in the AVC denied path,
> so we can see what's going on here?

The path I found by reading the code is:
complete_domain_destroy
  arch_domain_destroy
    free_domain_pirqs
      unmap_domain_pirq
        xsm_unmap_domain_irq
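
For reference, this is roughly how I instrumented it (a sketch from
memory rather than the exact diff; the surrounding code in
xen/arch/x86/irq.c may differ slightly in 4.14):

    /* in unmap_domain_pirq(), at the XSM check */
    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
                               msi_desc ? msi_desc->dev : NULL);
    if ( ret )
    {
        WARN();    /* added: print a call trace whenever the check fails */
        goto done;
    }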

After a few tries, the WARN() triggered:
(XEN) Xen WARN at irq.c:2348
(XEN) ----[ Xen-4.14.4-xc  x86_64  debug=n   Not tainted ]----
(XEN) CPU:    4
(XEN) RIP:    e008:[<ffff82d0403115d4>] unmap_domain_pirq+0x3c4/0x490
...
(XEN) Xen call trace:
(XEN)    [<ffff82d0403115d4>] R unmap_domain_pirq+0x3c4/0x490
(XEN)    [<ffff82d04022ac52>] S xmem_pool_free+0x22/0x2f0
(XEN)    [<ffff82d0403116f1>] S free_domain_pirqs+0x51/0x70
(XEN)    [<ffff82d040301615>] S arch_domain_destroy+0x45/0xb0
(XEN)    [<ffff82d040208391>] S domain.c#complete_domain_destroy+0x81/0x150
(XEN)    [<ffff82d040221dc4>] S rcupdate.c#rcu_process_callbacks+0x114/0x2b0
(XEN)    [<ffff82d04022244a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d04029890b>] S vmx_asm_do_vmentry+0x2b/0x30

So the denial really does fire from the RCU callback in softirq
context, where current is whatever domain happened to be running (here
dom5/uivm).  Beyond that, I found the XSM checks a little confusing.

physdevop_unmap_pirq() calls:
  xsm_unmap_domain_pirq() <- checks generic resource remove
  unmap_domain_pirq()
    xsm_unmap_domain_irq() <- checks remove_irq for the specific irq

access_vectors lists physdevop_unmap_pirq under remove_irq, but the XSM
check made directly in that function is xsm_unmap_domain_pirq, which
doesn't use remove_irq; the remove_irq check only happens one level
down, in unmap_domain_pirq().
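
For reference, the two hooks in xen/xsm/flask/hooks.c look roughly like
this (paraphrased from memory, so the details may differ).  Both derive
the source context from current, which would explain uivm showing up as
the scontext when the check fires from a softirq:

    static int flask_unmap_domain_pirq(struct domain *d)
    {
        /* generic "resource remove" check; source = current domain */
        return current_has_perm(d, SECCLASS_RESOURCE, RESOURCE__REMOVE);
    }

    static int flask_unmap_domain_irq(struct domain *d, int irq,
                                      const void *data)
    {
        u32 sid;
        struct avc_audit_data ad;
        int rc = get_irq_sid(irq, &sid, &ad);

        if ( rc )
            return rc;

        /* remove_irq check against the irq's label; source = current */
        return avc_current_has_perm(sid, SECCLASS_RESOURCE,
                                    RESOURCE__REMOVE_IRQ, &ad);
    }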

In an earlier Xen version, did these RCU callbacks run as xen_t?  Or
maybe it's just racy which context gets used?  The policy already
anticipates hypervisor-initiated cleanup: commit
8ad651705cbd0ad192398c1513d12c02b3197fa1 had:

    2. When a domain is destroyed with a device passthrough active, the
    calls to remove_{irq,ioport,iomem} can be made by the hypervisor itself
    (which results in an XSM check with the source xen_t).  It does not make
    sense to deny these permissions; no domain should be using xen_t, and
    forbidding the hypervisor from performing cleanup is not useful.

+# Domain destruction can result in some access checks for actions performed by
+# the hypervisor.  These should always be allowed.
+allow xen_t resource_type : resource { remove_irq remove_ioport remove_iomem };

Thanks,
Jason
