
Re: [PATCH v3 10/11] vpci: Add initial support for virtual PCI bus topology


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 3 Nov 2021 09:52:35 +0100
  • Cc: "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Wed, 03 Nov 2021 08:53:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Nov 03, 2021 at 06:34:16AM +0000, Oleksandr Andrushchenko wrote:
> Hi, Roger
> 
> On 26.10.21 14:33, Roger Pau Monné wrote:
> > On Thu, Sep 30, 2021 at 10:52:22AM +0300, Oleksandr Andrushchenko wrote:
> >> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> >> index 43b8a0817076..33033a3a8f8d 100644
> >> --- a/xen/include/xen/pci.h
> >> +++ b/xen/include/xen/pci.h
> >> @@ -137,6 +137,24 @@ struct pci_dev {
> >>       struct vpci *vpci;
> >>   };
> >>   
> >> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> >> +struct vpci_dev {
> >> +    struct list_head list;
> >> +    /* Physical PCI device this virtual device is connected to. */
> >> +    const struct pci_dev *pdev;
> >> +    /* Virtual SBDF of the device. */
> >> +    union {
> >> +        struct {
> >> +            uint8_t devfn;
> >> +            uint8_t bus;
> >> +            uint16_t seg;
> >> +        };
> >> +        pci_sbdf_t sbdf;
> >> +    };
> >> +    struct domain *domain;
> >> +};
> >> +#endif
> > I wonder whether this is strictly needed. Won't it be enough to store
> > the virtual (ie: guest) sbdf inside the existing vpci struct?
> >
> > It would avoid the overhead of the translation you do from pdev ->
> > vdev, and there doesn't seem to be anything relevant stored in
> > vpci_dev apart from the virtual sbdf.
> TL;DR: it seems it might be needed from a performance POV. If not
> implemented, every MMIO trap takes a global PCI lock, e.g.
> pcidevs_{lock|unlock}. Note: the pcidevs lock is a recursive lock.
> 
> There are two sources of access to virtual devices:
> 1. During initialization, when we add, assign or de-assign a PCI device
> 2. At run time, when we trap a configuration space access and need to
> translate the virtual SBDF into the physical SBDF
> Note that at least de-assign can run concurrently with the MMIO handlers.
> 
> Now let's see which locks are in use while doing that.
> 
> 1. No struct vpci_dev is used.
> 1.1. We drop the structure and just add pdev->vpci->guest_sbdf as you
> suggest
> 1.2. To protect virtual devices we use pcidevs_{lock|unlock}
> 1.3. Locking happens at the system level
> 
> 2. struct vpci_dev is used
> 2.1. We have a per-domain lock vdev_lock
> 2.2. Locking happens at the per-domain level
> 
> To compare the two:
> 
> 1. Without vpci_dev
> pros: much simpler code
> pros/cons: a global lock is used during MMIO handling, but it is a
> recursive lock
> 
> 2. With vpci_dev
> pros: per-domain locking
> cons: more code
> 
> I have implemented both approaches, and we need to decide which route
> to take.

We could always see about converting the pcidevs lock into a rw one if
it turns out there's too much contention. PCI config space accesses
shouldn't be that common or performance critical, so having some
contention might not be noticeable.
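
To illustrate what I mean (a rough, untested sketch; the helper names
pcidevs_read_lock() and friends are made up for the example), the lock
could become an rwlock so that config space traps only take it in read
mode, while add/assign/de-assign take it in write mode:

    /* Hypothetical conversion of the pcidevs lock to an rwlock. */
    static DEFINE_RWLOCK(_pcidevs_rwlock);

    void pcidevs_read_lock(void)
    {
        read_lock(&_pcidevs_rwlock);
    }

    void pcidevs_read_unlock(void)
    {
        read_unlock(&_pcidevs_rwlock);
    }

    void pcidevs_write_lock(void)
    {
        write_lock(&_pcidevs_rwlock);
    }

    void pcidevs_write_unlock(void)
    {
        write_unlock(&_pcidevs_rwlock);
    }

Note the current lock is recursive, so any path relying on recursive
acquisition would need auditing before such a conversion.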

TBH I would start with the simpler solution (add guest_sbdf and use the
pcidevs lock) and move to something more complex once issues are
identified.
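
For reference, a minimal sketch of the simpler option (guest_sbdf is a
name I'm making up here, and this is untested): store the guest view of
the SBDF in the existing struct vpci:

    /* Sketch: addition to struct vpci in xen/include/xen/vpci.h. */
    #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
        /* Guest (virtual) SBDF the device is exposed at. */
        pci_sbdf_t guest_sbdf;
    #endif

The virtual -> physical translation then becomes a walk of the domain's
device list with the pcidevs lock held:

    /* Sketch: find the pdev matching a guest SBDF. */
    static struct pci_dev *vpci_find_pdev(const struct domain *d,
                                          pci_sbdf_t sbdf)
    {
        struct pci_dev *pdev;

        ASSERT(pcidevs_locked());

        for_each_pdev ( d, pdev )
            if ( pdev->vpci && pdev->vpci->guest_sbdf.sbdf == sbdf.sbdf )
                return pdev;

        return NULL;
    }

That keeps all state inside struct vpci and avoids a separate vpci_dev
list and lock.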

Regards, Roger.