
Re: [Xen-devel] [PATCH v8 03/11] x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 23 February 2018 15:57
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>; Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> Subject: Re: [PATCH v8 03/11] x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0
> 
> >>> On 23.01.18 at 16:07, <roger.pau@xxxxxxxxxx> wrote:
> > So that MMCFG regions not present in the MCFG ACPI table can be added
> > at run time by the hardware domain.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > ---
> > Changes since v7:
> >  - Add newline in hvm_physdev_op for non-fallthrough case.
> >
> > Changes since v6:
> >  - Do not return EEXIST if the same exact region is already tracked by
> >    Xen.
> >
> > Changes since v5:
> >  - Check for has_vpci before calling register_vpci_mmcfg_handler
> >    instead of checking for is_hvm_domain.
> >
> > Changes since v4:
> >  - Change the hardware_domain check in hvm_physdev_op to a vpci check.
> >  - Only register the MMCFG area, but don't scan it.
> >
> > Changes since v3:
> >  - New in this version.
> > ---
> >  xen/arch/x86/hvm/hypercall.c |  5 +++++
> >  xen/arch/x86/hvm/io.c        | 16 +++++++++++-----
> 
> Sadly you forgot to Cc Paul for this one. Paul - any chance you could
> take a look?
> 

Sure. LGTM.

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> Jan
> 
> >  xen/arch/x86/physdev.c       | 11 +++++++++++
> >  3 files changed, 27 insertions(+), 5 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> > index 5742dd1797..85eacd7d33 100644
> > --- a/xen/arch/x86/hvm/hypercall.c
> > +++ b/xen/arch/x86/hvm/hypercall.c
> > @@ -89,6 +89,11 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >          if ( !has_pirq(curr->domain) )
> >              return -ENOSYS;
> >          break;
> > +
> > +    case PHYSDEVOP_pci_mmcfg_reserved:
> > +        if ( !has_vpci(curr->domain) )
> > +            return -ENOSYS;
> > +        break;
> >      }
> >
> >      if ( !curr->hcall_compat )
> > diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> > index 04425c064b..556810c126 100644
> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -507,10 +507,9 @@ static const struct hvm_mmio_ops vpci_mmcfg_ops = {
> >      .write = vpci_mmcfg_write,
> >  };
> >
> > -int __hwdom_init register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> > -                                             unsigned int start_bus,
> > -                                             unsigned int end_bus,
> > -                                             unsigned int seg)
> > +int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> > +                                unsigned int start_bus, unsigned int end_bus,
> > +                                unsigned int seg)
> >  {
> >      struct hvm_mmcfg *mmcfg, *new = xmalloc(struct hvm_mmcfg);
> >
> > @@ -535,9 +534,16 @@ int __hwdom_init register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> >          if ( new->addr < mmcfg->addr + mmcfg->size &&
> >               mmcfg->addr < new->addr + new->size )
> >          {
> > +            int ret = -EEXIST;
> > +
> > +            if ( new->addr == mmcfg->addr &&
> > +                 new->start_bus == mmcfg->start_bus &&
> > +                 new->segment == mmcfg->segment &&
> > +                 new->size == mmcfg->size )
> > +                ret = 0;
> >              write_unlock(&d->arch.hvm_domain.mmcfg_lock);
> >              xfree(new);
> > -            return -EEXIST;
> > +            return ret;
> >          }
> >
> >      if ( list_empty(&d->arch.hvm_domain.mmcfg_regions) )
> > diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> > index 380d36f6b9..984491c3dc 100644
> > --- a/xen/arch/x86/physdev.c
> > +++ b/xen/arch/x86/physdev.c
> > @@ -557,6 +557,17 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >
> >          ret = pci_mmcfg_reserved(info.address, info.segment,
> >                                   info.start_bus, info.end_bus, info.flags);
> > +        if ( !ret && has_vpci(currd) )
> > +        {
> > +            /*
> > +             * For HVM (PVH) domains try to add the newly found MMCFG to the
> > +             * domain.
> > +             */
> > +            ret = register_vpci_mmcfg_handler(currd, info.address,
> > +                                              info.start_bus, info.end_bus,
> > +                                              info.segment);
> > +        }
> > +
> >          break;
> >      }
> >
> > --
> > 2.15.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
