
Re: [Xen-devel] [PATCH 10/10] vpci/sriov: add support for SR-IOV capability



On Fri, Jun 29, 2018 at 04:27:08PM +0100, Wei Liu wrote:
> On Wed, Jun 20, 2018 at 04:42:34PM +0200, Roger Pau Monne wrote:
> > So that a PCI device that supports SR-IOV (PF) can enable the capability
> > and use the virtual functions.
> > 
> > This code is expected to only be used by privileged domains,
> > unprivileged domains should not get access to the SR-IOV capability.
> > 
> > The current code detects enabling of the virtual functions feature and
> > automatically adds the VFs to the domain. It also detects enabling of
> > memory space and maps the VFs BARs into the domain p2m. Disabling of
> > the VF enable bit removes the devices and the BAR memory map from the
> > domain p2m.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> [...]
> >      int rc;
> > @@ -261,12 +273,29 @@ static int modify_bars(const struct pci_dev *pdev, bool map, bool rom_only)
> >          }
> >      }
> >  
> > +    /* Get the parent dev if it's a VF. */
> > +    if ( pdev->info.is_virtfn )
> > +    {
> > +        pcidevs_lock();
> > +        parent = pci_get_pdev(pdev->seg, pdev->info.physfn.bus,
> > +                              pdev->info.physfn.devfn);
> > +        pcidevs_unlock();
> > +    }
> > +
> >      /*
> >       * Check for overlaps with other BARs. Note that only BARs that are
> >       * currently mapped (enabled) are checked for overlaps.
> >       */
> >      list_for_each_entry(tmp, &pdev->domain->arch.pdev_list, domain_list)
> >      {
> > +        /*
> > +         * When mapping the BARs of a VF the parent PF is already locked,
> > +         * trying to lock it will result in a deadlock. This is because
> > +         * vpci_modify_bars is called from the parent PF control_write
> > +         * register handler.
> > +         */
> > +        bool lock = parent != tmp;
> 
> There is spin_lock_recursive. Would that work?

Yes, but I wasn't sure whether this single use of recursion would
justify switching the lock to a recursive one. I can switch it if
there's consensus.

> [...]
> > +static int init_sriov(struct pci_dev *pdev)
> > +{
> > +    unsigned int pos = pci_find_ext_capability(pdev->seg, pdev->bus,
> > +                                               pdev->devfn,
> > +                                               PCI_EXT_CAP_ID_SRIOV);
> > +    uint16_t total_vfs;
> > +
> > +    if ( !pos )
> > +        return 0;
> > +
> > +    total_vfs = pci_conf_read16(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> > +                                PCI_FUNC(pdev->devfn),
> > +                                pos + PCI_SRIOV_TOTAL_VF);
> > +
> > +    pdev->vpci->sriov = xzalloc_bytes(SRIOV_SIZE(total_vfs));
> > +    if ( !pdev->vpci->sriov )
> > +        return -ENOMEM;
> > +
> > +    return vpci_add_register(pdev->vpci, vpci_hw_read16, control_write,
> > +                             pos + PCI_SRIOV_CTRL, 2, pdev->vpci->sriov);
> 
> If vpci_add_register fails sriov is going to be leaked? Or eventually
> that will cause the domain to crash in teardown.

sriov should be freed by vpci_remove_device, but I think I've
forgotten to add the free there...

> I think it would make more sense if init_sriov is able to clean up after
> itself. MSI and MSI-X code has similar issue.

Yes, as said in previous patches, I don't mind moving the freeing into
the teardown helpers (and also adding frees to the init helpers for
the error cases).

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
