Re: [Xen-devel] [PATCH v3 2/9] x86/ecam: add handlers for the PVH Dom0 MMCFG areas



On Fri, May 19, 2017 at 07:25:22AM -0600, Jan Beulich wrote:
> >>> On 27.04.17 at 16:35, <roger.pau@xxxxxxxxxx> wrote:
> > @@ -1048,6 +1050,24 @@ static int __init pvh_setup_acpi(struct domain *d, paddr_t start_info)
> >      return 0;
> >  }
> >  
> > +int __init pvh_setup_ecam(struct domain *d)
> 
> While I won't object to the term ecam in title and description,
> please use mmcfg uniformly in code - that's the way we name
> the thing everywhere else.

OK.

> > +{
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    for ( i = 0; i < pci_mmcfg_config_num; i++ )
> > +    {
> > +        rc = register_vpci_ecam_handler(d, pci_mmcfg_config[i].address,
> > +                                        pci_mmcfg_config[i].start_bus_number,
> > +                                        pci_mmcfg_config[i].end_bus_number,
> > +                                        pci_mmcfg_config[i].pci_segment);
> > +        if ( rc )
> > +            return rc;
> > +    }
> > +
> > +    return 0;
> > +}
> 
> What about regions becoming available only post-boot?

This is not yet supported. It needs to be implemented using the
PHYSDEVOP_pci_mmcfg_reserved hypercall.
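
Not the hypervisor code itself, but a stand-alone C sketch of the shape such a hypercall path could take: a hotplugged MMCFG region reported via PHYSDEVOP_pci_mmcfg_reserved would be validated and then fed through the same registration helper the boot-time loop uses. The struct layout and the `register_mmcfg`/`do_mmcfg_reserved` names are illustrative assumptions, not the canonical Xen definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the public physdev argument structure;
 * field layout here is an assumption for the sketch. */
struct mmcfg_reserved {
    uint64_t address;
    uint16_t segment;
    uint8_t start_bus;
    uint8_t end_bus;
    uint32_t flags;
};

/* Stub standing in for register_vpci_ecam_handler(). */
static int registered;
static int register_mmcfg(uint64_t addr, uint8_t start_bus,
                          uint8_t end_bus, uint16_t seg)
{
    (void)addr; (void)seg;
    if ( end_bus < start_bus )
        return -1; /* would be -EINVAL in the hypervisor */
    registered++;
    return 0;
}

/* Sketch of the hypercall path: validate the reported region, then
 * reuse the same registration helper the boot-time loop calls. */
int do_mmcfg_reserved(const struct mmcfg_reserved *r)
{
    return register_mmcfg(r->address, r->start_bus, r->end_bus, r->segment);
}
```

The point is only that post-boot regions need no new machinery beyond plumbing the hypercall into the existing registration path.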

> > @@ -752,6 +754,14 @@ void hvm_domain_destroy(struct domain *d)
> >          list_del(&ioport->list);
> >          xfree(ioport);
> >      }
> > +
> > +    list_for_each_entry_safe ( ecam, etmp, &d->arch.hvm_domain.ecam_regions,
> > +                               next )
> > +    {
> > +        list_del(&ecam->next);
> > +        xfree(ecam);
> > +    }
> > +
> >  }
> 
> Stray blank line. Of course the addition is of questionable use
> anyway as long as all of this is Dom0 only.

Right, I just felt it would be better to do proper cleanup since it's
just a couple of lines.

> > --- a/xen/arch/x86/hvm/io.c
> > +++ b/xen/arch/x86/hvm/io.c
> > @@ -403,6 +403,145 @@ void register_vpci_portio_handler(struct domain *d)
> >      handler->ops = &vpci_portio_ops;
> >  }
> >  
> > +/* Handlers to trap PCI ECAM config accesses. */
> > +static struct hvm_ecam *vpci_ecam_find(struct domain *d, unsigned long addr)
> 
> Logically d should be a pointer to const, and I think no caller really
> needs you to return a pointer to non-const.
> > +{
> > +    struct hvm_ecam *ecam = NULL;
> 
> Pointless initializer.
> 
> > +static void vpci_ecam_decode_addr(struct hvm_ecam *ecam, unsigned long addr,
> 
> const

Fixed all the above.
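
For reference, the decode arithmetic in question is just the standard ECAM layout: bits [27:20] of the offset select the bus, [19:12] the devfn, and [11:0] the register. A stand-alone C model of it (the `ecam_decode` name and parameters are mine, not the patch's):

```c
#include <assert.h>
#include <stdint.h>

/* Plain-C model of the vpci_ecam_decode_addr() arithmetic. Because
 * the register is the low 12 bits of the offset, reg can never
 * exceed 0xfff, which is why the explicit check went away. */
static void ecam_decode(uint64_t base, unsigned int start_bus,
                        uint64_t addr, unsigned int *bus,
                        unsigned int *devfn, unsigned int *reg)
{
    uint64_t off = addr - base;

    *bus = start_bus + ((off >> 20) & 0xff); /* 1MB per bus */
    *devfn = (off >> 12) & 0xff;             /* 4KB per function */
    *reg = off & 0xfff;                      /* config space offset */
}
```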

> > +static int vpci_ecam_accept(struct vcpu *v, unsigned long addr)
> > +{
> > +    struct domain *d = v->domain;
> > +    int found;
> > +
> > +    vpci_lock(d);
> > +    found = !!vpci_ecam_find(v->domain, addr);
> 
> Please use the local variable consistently.
> 
> > +static int vpci_ecam_read(struct vcpu *v, unsigned long addr,
> 
> Did I overlook this in patch 1? Why is this a vcpu instead of a
> domain parameter? All of PCI is (virtual) machine wide...

That's what the hvm_mmio_ops struct expects (vcpu instead of domain),
which is where this function is used.
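
A toy model of that constraint, with stand-in types rather than the real Xen structures: the callback signature is per-vcpu, but PCI config state is machine-wide, so the handler's first step is deriving the domain.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types; the real struct vcpu/struct domain live in Xen. */
struct domain { int domain_id; };
struct vcpu { struct domain *domain; };

/* The accept callback receives a vcpu because that is what the
 * hvm_mmio_ops signature provides; everything it actually needs is
 * reached through v->domain. */
static int mmcfg_accept(struct vcpu *v, unsigned long addr)
{
    struct domain *d = v->domain;

    (void)addr;
    return d != NULL; /* real code would look up the region under lock */
}
```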

> > +                          unsigned int len, unsigned long *data)
> > +{
> > +    struct domain *d = v->domain;
> > +    struct hvm_ecam *ecam;
> > +    unsigned int bus, devfn, reg;
> > +    uint32_t data32;
> > +    int rc;
> > +
> > +    vpci_lock(d);
> > +    ecam = vpci_ecam_find(d, addr);
> > +    if ( !ecam )
> > +    {
> > +        vpci_unlock(d);
> > +        return X86EMUL_UNHANDLEABLE;
> > +    }
> > +
> > +    vpci_ecam_decode_addr(ecam, addr, &bus, &devfn, &reg);
> > +
> > +    if ( vpci_access_check(reg, len) || reg >= 0xfff )
> 
> So this function iirc allows only 1-, 2-, and 4-byte accesses. Other
> than with port I/O, MMCFG allows wider ones, and once again I
> don't think hardware would raise any kind of fault in such a case.
> The general expectation is for the fabric to split such accesses.

Hm, the PCIe spec is not authoritative in this regard; it states that
supporting 8B accesses is not mandatory. Xen/Linux/FreeBSD will never
attempt any access > 4B, hence I haven't coded that case.

Would you be fine with leaving this for later, or would you rather
have it implemented as part of this series?
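
If it does end up in this series, the split itself is mechanical. A hedged stand-alone sketch (the `read32`/`mmcfg_read` names are stand-ins, not the patch's helpers): an 8-byte read becomes two 4-byte config reads combined little-endian, mirroring what the fabric does.

```c
#include <assert.h>
#include <stdint.h>

/* Dummy backing store standing in for the real per-device config
 * read: returns the register offset as data. */
static uint32_t read32(unsigned int reg)
{
    return reg;
}

/* Handle a 1-, 2-, 4-, or 8-byte MMCFG read; the 8-byte case is
 * split into two 4-byte accesses, low half first. */
static uint64_t mmcfg_read(unsigned int reg, unsigned int len)
{
    if ( len == 8 )
        return (uint64_t)read32(reg) |
               ((uint64_t)read32(reg + 4) << 32);
    return read32(reg) & (0xffffffffu >> (8 * (4 - len)));
}
```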

> Also the reg check is once again off by one.

This is now gone, since reg cannot be > 0xfff in any case.

> > +int register_vpci_ecam_handler(struct domain *d, paddr_t addr,
> > +                               unsigned int start_bus, unsigned int end_bus,
> > +                               unsigned int seg)
> > +{
> > +    struct hvm_ecam *ecam;
> > +
> > +    ASSERT(is_hardware_domain(d));
> > +
> > +    vpci_lock(d);
> > +    if ( vpci_ecam_find(d, addr) )
> > +    {
> > +        vpci_unlock(d);
> > +        return -EEXIST;
> > +    }
> > +
> > +    ecam = xzalloc(struct hvm_ecam);
> 
> xmalloc() would again suffice afaict.

Right.

> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -100,6 +100,14 @@ struct hvm_pi_ops {
> >      void (*do_resume)(struct vcpu *v);
> >  };
> >  
> > +struct hvm_ecam {
> > +    paddr_t addr;
> > +    size_t size;
> > +    unsigned int bus;
> > +    unsigned int segment;
> > +    struct list_head next;
> > +};
> 
> If you moved the addition to hvm_domain_destroy() into a function
> in hvm/io.c, this type could be private to that latter file afaict.

Yes, I've now done that and named the function destroy_vpci_mmcfg.
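
The shape of that helper, modeled in stand-alone C (a plain singly linked list stands in for Xen's struct list_head, and the `destroy_mmcfg` name and signature are illustrative): walk the per-domain region list with a safe iterator and free each node.

```c
#include <assert.h>
#include <stdlib.h>

struct mmcfg_region {
    struct mmcfg_region *next;
};

/* Minimal model of the destroy_vpci_mmcfg() cleanup: save the
 * successor before freeing each node, as list_for_each_entry_safe()
 * does, so the traversal survives the xfree(). */
static unsigned int destroy_mmcfg(struct mmcfg_region **head)
{
    unsigned int freed = 0;
    struct mmcfg_region *r = *head, *tmp;

    while ( r )
    {
        tmp = r->next;
        free(r);
        r = tmp;
        freed++;
    }
    *head = NULL;
    return freed;
}
```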

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

