
Re: [PATCH] vpci: introduce per-domain lock to protect vpci structure


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Fri, 11 Feb 2022 08:46:59 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Delivery-date: Fri, 11 Feb 2022 08:47:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYHboQT3cBWYI1/EGunE7hOwop6ayM95MAgAEU1AA=
  • Thread-topic: [PATCH] vpci: introduce per-domain lock to protect vpci structure


On 10.02.22 18:16, Roger Pau Monné wrote:
> On Wed, Feb 09, 2022 at 03:36:27PM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>
>> Introduce a per-domain read/write lock to check whether vpci is present,
>> so we are sure there are no accesses to the contents of the vpci struct
>> if not. This lock can be used (and in a few cases is used right away)
>> so that vpci removal can be performed while holding the lock in write
>> mode. Previously such removal could race with vpci_read for example.
> Sadly there's still a race in the usage of pci_get_pdev_by_domain wrt
> pci_remove_device, and likely when vPCI gets also used in
> {de}assign_device I think.
>
How about the change below? It seems to guarantee that we can safely
access pdev without requiring the pcidevs_lock to be held.

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index e8b09d77d880..fd464a58b3b3 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -937,8 +937,14 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
      }

      devfn = pdev->devfn;
+#ifdef CONFIG_HAS_VPCI
+    write_lock(&d->vpci_rwlock);
+#endif
      ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
                       pci_to_dev(pdev));
+#ifdef CONFIG_HAS_VPCI
+    write_unlock(&d->vpci_rwlock);
+#endif
      if ( ret )
          goto out;

@@ -1474,6 +1480,9 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
      const struct domain_iommu *hd = dom_iommu(d);
      struct pci_dev *pdev;
      int rc = 0;
+#ifdef CONFIG_HAS_VPCI
+    struct domain *old_d;
+#endif

      if ( !is_iommu_enabled(d) )
          return 0;
@@ -1487,15 +1496,34 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
      ASSERT(pdev && (pdev->domain == hardware_domain ||
                      pdev->domain == dom_io));

+#ifdef CONFIG_HAS_VPCI
+    /* pdev->domain is either hwdom or dom_io. We do not want the latter. */
+    old_d = pdev->domain == hardware_domain ? pdev->domain : NULL;
+    if ( old_d )
+        write_lock(&old_d->vpci_rwlock);
+#endif
+
      rc = pdev_msix_assign(d, pdev);
      if ( rc )
+    {
+#ifdef CONFIG_HAS_VPCI
+        if ( old_d )
+            write_unlock(&old_d->vpci_rwlock);
+#endif
          goto done;
+    }

      pdev->fault.count = 0;

      if ( (rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
                            pci_to_dev(pdev), flag)) )
+    {
+#ifdef CONFIG_HAS_VPCI
+        if ( old_d )
+            write_unlock(&old_d->vpci_rwlock);
+#endif
          goto done;
+    }

      for ( ; pdev->phantom_stride; rc = 0 )
      {

I think we don't care about pci_remove_device because:

int pci_remove_device(u16 seg, u8 bus, u8 devfn)
{
[snip]
     pcidevs_lock();
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
             vpci_remove_device(pdev);

as vpci_remove_device will make sure there are no readers left and
will safely remove the vpci structure.

Thank you,
Oleksandr
