
Re: [Xen-devel] [PATCH v7 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing



On Tue, 28 Aug 2018, Julien Grall wrote:
> Hi Stefano,
> 
> As Jan said on the previous version, the CC list is too short. All the REST
> should be included for public interface changes. Please have a look at
> scripts/add_maintainers.pl, it will do the job for you...

Ah! I added all the REST below as "CC:" or "Cc:" as you can see, but
somehow they didn't get CCed on the mail. I'll double-check what the
hell git-send-email is doing.


> On 11/08/18 01:00, Stefano Stabellini wrote:
> > From: Zhongze Liu <blackskygg@xxxxxxxxx>
> > 
> > Author: Zhongze Liu <blackskygg@xxxxxxxxx>
> > 
> > The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap
> > forbids Dom0 from mapping memory pages of one DomU into another, which
> > restricts some useful yet not dangerous use cases -- such as sharing
> > pages among DomUs so that they can do shm-based communication.
> > 
> > This patch introduces XENMAPSPACE_gmfn_share to address this
> > inconvenience. The new subop is mostly the same as
> > XENMAPSPACE_gmfn_foreign but has its own xsm check.
> > 
> > Specifically, the patch:
> > 
> > * Introduces a new av permission MMU__SHARE_MEM to denote whether two
> >   domains can share memory via the new subop;
> > * Introduces xsm_map_gmfn_share() to check that (current) has proper
> >   permission over both (d) and (t), AND that MMU__SHARE_MEM is allowed
> >   between (d) and (t);
> > * Modifies the default xen.if to allow MMU__SHARE_MEM for normal
> >   domains that allow grant mapping/event channels.
> > 
> > The new subop is marked unsupported for x86 because calling
> > p2m_add_foreign on two DomUs is currently not supported on x86.
> > 
> > This is for the proposal "Allow setting up shared memory areas between VMs
> > from xl config file" (see [1]).
> > 
> > [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
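
As an aside, to illustrate how the new subop is meant to be used: below
is a rough sketch of a privileged-domain caller going through
XENMEM_add_to_physmap_batch. The header paths, the HYPERVISOR_memory_op
wrapper (Linux-style) and the frame numbers are assumptions for
illustration only, not part of this patch.

    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /* Map gfn 0x1000 of domain "owner" into domain "borrower" at gfn
     * 0x2000 using the new map space. The caller (e.g. Dom0) needs
     * mapping privileges over both domains; under Flask it additionally
     * needs share_mem to be allowed between the two. */
    static int map_shared_page(domid_t borrower, domid_t owner)
    {
        xen_ulong_t idx = 0x1000;  /* page in the owner's physmap */
        xen_pfn_t gpfn = 0x2000;   /* where it appears in the borrower */
        int err = 0;
        struct xen_add_to_physmap_batch xatpb = {
            .domid = borrower,
            .space = XENMAPSPACE_gmfn_share,
            .size = 1,
            .u.foreign_domid = owner,
        };

        set_xen_guest_handle(xatpb.idxs, &idx);
        set_xen_guest_handle(xatpb.gpfns, &gpfn);
        set_xen_guest_handle(xatpb.errs, &err);

        return HYPERVISOR_memory_op(XENMEM_add_to_physmap_batch, &xatpb);
    }
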
> > 
> > Signed-off-by: Zhongze Liu <blackskygg@xxxxxxxxx>
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > 
> > Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> > Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > Cc: Julien Grall <julien.grall@xxxxxxx>
> > CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> > CC: Jan Beulich <jbeulich@xxxxxxxx>
> > CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> > CC: Tim Deegan <tim@xxxxxxx>
> > Cc: xen-devel@xxxxxxxxxxxxx
> > ---
> > Changes in v7:
> > - add additional checks
> > - update comments to reflect that
> > 
> > Changes in v5:
> > - fix coding style
> > - remove useless x86 hypervisor message for the unimplemented op
> > - code style
> > - improve/add comments
> > ---
> >   tools/flask/policy/modules/xen.if   |  2 ++
> >   xen/arch/arm/mm.c                   |  7 ++++++-
> >   xen/include/public/memory.h         |  8 ++++++++
> >   xen/include/xsm/dummy.h             | 13 +++++++++++++
> >   xen/include/xsm/xsm.h               |  6 ++++++
> >   xen/xsm/dummy.c                     |  1 +
> >   xen/xsm/flask/hooks.c               | 10 ++++++++++
> >   xen/xsm/flask/policy/access_vectors |  5 +++++
> >   8 files changed, 51 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
> > index 7aefd00..f841125 100644
> > --- a/tools/flask/policy/modules/xen.if
> > +++ b/tools/flask/policy/modules/xen.if
> > @@ -128,6 +128,8 @@ define(`domain_comms', `
> >      domain_event_comms($1, $2)
> >      allow $1 $2:grant { map_read map_write copy unmap };
> >      allow $2 $1:grant { map_read map_write copy unmap };
> > +    allow $1 $2:mmu share_mem;
> > +    allow $2 $1:mmu share_mem;
> >  ')
> >  
> >  # domain_self_comms(domain)
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index d234c46..aa2e067 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
> >          break;
> >  
> >      case XENMAPSPACE_gmfn_foreign:
> > +    case XENMAPSPACE_gmfn_share:
> >      {
> >          struct domain *od;
> >          p2m_type_t p2mt;
> > @@ -1259,7 +1260,11 @@
> >              return -EINVAL;
> >          }
> >  
> > -        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
> > +        if ( space == XENMAPSPACE_gmfn_foreign )
> > +            rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
> > +        else
> > +            rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
> > +
> >          if ( rc )
> >          {
> >              rcu_unlock_domain(od);
> > diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> > index bf2f81f..a706e3c 100644
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
> >                                        Stage-2 using the Normal Memory
> >                                        Inner/Outer Write-Back Cacheable
> >                                        memory attribute. */
> > +#define XENMAPSPACE_gmfn_share   6 /* GMFN from another dom,
> > +                                      XENMEM_add_to_physmap_batch (and
> > +                                      currently ARM) only. Unlike
> > +                                      XENMAPSPACE_gmfn_foreign, it
> > +                                      requires current to have mapping
> > +                                      privileges instead of the
> > +                                      destination domain. */
> > +
> >  /* ` } */
> >  
> >  /*
> > diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> > index ff6b2db..352a886 100644
> > --- a/xen/include/xsm/dummy.h
> > +++ b/xen/include/xsm/dummy.h
> > @@ -535,6 +535,19 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
> >      return xsm_default_action(action, d, t);
> >  }
> >  
> > +/*
> > + * Be aware that this is not an exact default equivalence of its flask variant
> > + * which also checks if @d and @t "are allowed to share memory pages", for we
> 
> s/for we/for now, we/ ?

OK


> > + * don't have a proper default equivalence of such a check.
> > + */
> > +static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> > +                                         struct domain *t)
> > +{
> > +    XSM_ASSERT_ACTION(XSM_TARGET);
> > +    return xsm_default_action(action, current->domain, d) ?:
> > +           xsm_default_action(action, current->domain, t);
> > +}
> > +
> >  static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
> >  {
> >      XSM_ASSERT_ACTION(XSM_TARGET);
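
To spell out what the dummy variant above boils down to when Flask is
not enabled: the two xsm_default_action() calls apply the XSM_TARGET
rule once against each domain. Roughly (a simplified sketch; the real
xsm_default_action() also handles the device-model/XSM_DM_PRIV and
XSM_PRIV cases):

    /* Approximate effect of the dummy xsm_map_gmfn_share(): the calling
     * domain must either be the domain in question or be privileged. */
    static int default_share_check(struct domain *cur, struct domain *d,
                                   struct domain *t)
    {
        if ( cur != d && !is_control_domain(cur) )
            return -EPERM;
        if ( cur != t && !is_control_domain(cur) )
            return -EPERM;
        return 0;
    }

So in practice only Dom0 (or an equally privileged domain) can set up a
share between two other DomUs, which matches the use case in [1].
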
> > diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> > index f0c6fc7..8873253 100644
> > --- a/xen/include/xsm/xsm.h
> > +++ b/xen/include/xsm/xsm.h
> > @@ -86,6 +86,7 @@ struct xsm_operations {
> >      int (*add_to_physmap) (struct domain *d1, struct domain *d2);
> >      int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
> >      int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
> > +    int (*map_gmfn_share) (struct domain *d, struct domain *t);
> >      int (*claim_pages) (struct domain *d);
> >  
> >      int (*console_io) (struct domain *d, int cmd);
> > @@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
> >      return xsm_ops->map_gmfn_foreign(d, t);
> >  }
> >  
> > +static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
> > +{
> > +    return xsm_ops->map_gmfn_share(d, t);
> > +}
> > +
> >  static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
> >  {
> >      return xsm_ops->claim_pages(d);
> > diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> > index 6e75119..04e91d3 100644
> > --- a/xen/xsm/dummy.c
> > +++ b/xen/xsm/dummy.c
> > @@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
> >      set_to_dummy_if_null(ops, add_to_physmap);
> >      set_to_dummy_if_null(ops, remove_from_physmap);
> >      set_to_dummy_if_null(ops, map_gmfn_foreign);
> > +    set_to_dummy_if_null(ops, map_gmfn_share);
> >  
> >      set_to_dummy_if_null(ops, vm_event_control);
> >  
> > diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> > index 78bc326..b5cbacc 100644
> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -1198,6 +1198,15 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
> >      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
> >  }
> >  
> > +static int flask_map_gmfn_share(struct domain *d, struct domain *t)
> > +{
> > +    int rc = current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
> > +    if ( rc )
> > +        return rc;
> > +    return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
> > +           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
> > +}
> > +
> >  static int flask_hvm_param(struct domain *d, unsigned long op)
> >  {
> >      u32 perm;
> > @@ -1822,6 +1831,7 @@ static struct xsm_operations flask_ops = {
> >      .add_to_physmap = flask_add_to_physmap,
> >      .remove_from_physmap = flask_remove_from_physmap,
> >      .map_gmfn_foreign = flask_map_gmfn_foreign,
> > +    .map_gmfn_share = flask_map_gmfn_share,
> >  
> >  #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
> >      .get_device_group = flask_get_device_group,
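
In case the GCC "a ?: b" shorthand in the final return obscures it, the
check sequence in flask_map_gmfn_share() is equivalent to the following
expansion (illustrative only, not part of the patch):

    int rc = current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
    if ( rc )
        return rc;
    rc = current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
    if ( rc )
        return rc;
    /* Both map checks passed; finally require share_mem between the two
     * domains (the xen.if change above grants it in both directions). */
    return domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);

That is: the caller needs map rights over both domains, and the two
domains must additionally be allowed to share memory with each other.
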
> > diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> > index c5d8548..4a92252 100644
> > --- a/xen/xsm/flask/policy/access_vectors
> > +++ b/xen/xsm/flask/policy/access_vectors
> > @@ -385,6 +385,11 @@ class mmu
> >  # Allow a privileged domain to install a map of a page it does not own. Used
> >  # for stub domain device models with the PV framebuffer.
> >      target_hack
> > +# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
> > +# to share memory between two domains:
> > +#  source = domain whose memory is being shared
> > +#  target = client domain
> > +    share_mem
> >  }
> >  
> >  # control of the paging_domctl split by subop


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel