
Re: [PATCH v5 03/14] vpci: move lock outside of struct vpci


  • To: Oleksandr Andrushchenko <andr2000@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 12 Jan 2022 15:57:36 +0100
  • Cc: julien@xxxxxxx, sstabellini@xxxxxxxxxx, oleksandr_tyshchenko@xxxxxxxx, volodymyr_babchuk@xxxxxxxx, Artem_Mygaiev@xxxxxxxx, roger.pau@xxxxxxxxxx, andrew.cooper3@xxxxxxxxxx, george.dunlap@xxxxxxxxxx, paul@xxxxxxx, bertrand.marquis@xxxxxxx, rahul.singh@xxxxxxx, Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 12 Jan 2022 14:57:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 25.11.2021 12:02, Oleksandr Andrushchenko wrote:
> @@ -68,12 +84,13 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      /* We should not get here twice for the same device. */
>      ASSERT(!pdev->vpci);
>  
> -    pdev->vpci = xzalloc(struct vpci);
> -    if ( !pdev->vpci )
> +    vpci = xzalloc(struct vpci);
> +    if ( !vpci )
>          return -ENOMEM;
>  
> +    spin_lock(&pdev->vpci_lock);
> +    pdev->vpci = vpci;
>      INIT_LIST_HEAD(&pdev->vpci->handlers);
> -    spin_lock_init(&pdev->vpci->lock);

INIT_LIST_HEAD() can occur ahead of taking the lock, and can also act
on &vpci->handlers rather than &pdev->vpci->handlers.
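I.e. something along these lines (merely a sketch of the ordering I have in
mind, not necessarily the exact final context):

    vpci = xzalloc(struct vpci);
    if ( !vpci )
        return -ENOMEM;

    /* No lock needed while the structure isn't visible to anyone else. */
    INIT_LIST_HEAD(&vpci->handlers);

    spin_lock(&pdev->vpci_lock);
    pdev->vpci = vpci;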

>      for ( i = 0; i < NUM_VPCI_INIT; i++ )
>      {
> @@ -83,7 +100,8 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      }

This loop wants to live in the locked region because you need to install
vpci into pdev->vpci up front, afaict. I wonder whether that couldn't
be relaxed, but perhaps that's an improvement that can be thought about
later.
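One conceivable relaxation (a sketch only; it would require the init
functions to be handed the not-yet-published structure via a hypothetical
extra argument, rather than going through pdev->vpci):

    /* Run the init handlers on the local allocation, without the lock held. */
    for ( i = 0; i < NUM_VPCI_INIT; i++ )
    {
        rc = __start_vpci_array[i](pdev, vpci);   /* hypothetical extra argument */
        if ( rc )
            break;
    }

    /* Publish only once fully set up. */
    spin_lock(&pdev->vpci_lock);
    pdev->vpci = vpci;
    spin_unlock(&pdev->vpci_lock);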

The main reason I'm looking at this closely is because from the patch
title I didn't expect new locking regions to be introduced right here;
instead I did expect strictly a mechanical conversion.

> @@ -152,8 +170,6 @@ int vpci_add_register(struct vpci *vpci, vpci_read_t *read_handler,
>      r->offset = offset;
>      r->private = data;
>  
> -    spin_lock(&vpci->lock);

From the description I can't deduce why it is now fine for this lock to go
away, i.e. that all call sites now acquire the lock earlier. Therefore I'd
expect at least an assertion to be left here ...

> @@ -183,7 +197,6 @@ int vpci_remove_register(struct vpci *vpci, unsigned int offset,
>      const struct vpci_register r = { .offset = offset, .size = size };
>      struct vpci_register *rm;
>  
> -    spin_lock(&vpci->lock);

... and here.
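At both removal sites I'd expect something like the below (a sketch only; it
assumes the lock remains reachable from these functions, e.g. via an added
pdev parameter or a back pointer kept in struct vpci):

    /* Callers are expected to have acquired the lock already. */
    ASSERT(spin_is_locked(&pdev->vpci_lock));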

> @@ -370,6 +386,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>              break;
>          ASSERT(data_offset < size);
>      }
> +    spin_unlock(&pdev->vpci_lock);
>  
>      if ( data_offset < size )
>      {
> @@ -379,7 +396,6 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>  
>          data = merge_result(data, tmp_data, size - data_offset, data_offset);
>      }
> -    spin_unlock(&pdev->vpci->lock);
>  
>      return data & (0xffffffff >> (32 - 8 * size));
>  }

Here and ...

> @@ -475,13 +498,12 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>              break;
>          ASSERT(data_offset < size);
>      }
> +    spin_unlock(&pdev->vpci_lock);
>  
>      if ( data_offset < size )
>          /* Tailing gap, write the remaining. */
>          vpci_write_hw(sbdf, reg + data_offset, size - data_offset,
>                        data >> (data_offset * 8));
> -
> -    spin_unlock(&pdev->vpci->lock);
>  }

... even more so here I'm not sure about the correctness of this movement:
while pdev->vpci indeed doesn't get accessed, I wonder whether there wasn't
an intention to avoid racing calls to vpci_{read,write}_hw() this way. In
any event I think such a movement would need justification in the
description.
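If serializing the hardware accesses was indeed the intention, the unlock
would want to stay past the trailing-gap access, along the lines of (sketch
of the read path only, mirroring the pre-patch ordering):

    if ( data_offset < size )
    {
        /* Tailing gap, read the remaining bytes from hardware. */
        uint32_t tmp_data = vpci_read_hw(sbdf, reg + data_offset,
                                         size - data_offset);

        data = merge_result(data, tmp_data, size - data_offset, data_offset);
    }
    spin_unlock(&pdev->vpci_lock);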

Jan
