
Re: [PATCH] vpci: introduce per-domain lock to protect vpci structure


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Mon, 14 Feb 2022 09:36:39 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Delivery-date: Mon, 14 Feb 2022 09:36:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYHboQT3cBWYI1/EGunE7hOwop6ayM95MAgAD+qQCAAEaDAIAElIcA
  • Thread-topic: [PATCH] vpci: introduce per-domain lock to protect vpci structure


On 11.02.22 13:40, Roger Pau Monné wrote:
> +
>>>>        for ( i = 0; i < msix->max_entries; i++ )
>>>>        {
>>>>            const struct vpci_msix_entry *entry = &msix->entries[i];
>>> Since this function is now called with the per-domain rwlock read
>>> locked it's likely not appropriate to call process_pending_softirqs
>>> while holding such lock (check below).
>> You are right, as it is possible that:
>>
>> process_pending_softirqs -> vpci_process_pending -> read_lock
>>
>> Moreover, vpci_process_pending may also do
>>
>> read_unlock -> vpci_remove_device -> write_lock
>>
>> in its error path. So, any invocation of process_pending_softirqs
>> must, at the very least, not hold d->vpci_rwlock.
>>
>> We also need to check that pdev->vpci was not removed
>> or *re-created* in between.
>>> We will likely need to re-iterate over the list of pdevs assigned to
>>> the domain and assert that the pdev is still assigned to the same
>>> domain.
>> So, do you mean a pattern like the one below should be used in all
>> places where we need to call process_pending_softirqs?
>>
>> read_unlock
>> process_pending_softirqs
>> read_lock
>> pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
>> if ( pdev && pdev->vpci && is_the_same_vpci(pdev->vpci) )
>> <continue processing>
> Something along those lines. You likely need to continue iterating
> using for_each_pdev.
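[Editorial sketch: the invariant under discussion — that process_pending_softirqs must only run with d->vpci_rwlock dropped — could be illustrated as below. The lock and softirq primitives are stand-in stubs, not the real Xen implementations, so the control flow is self-contained.]

```c
#include <assert.h>

/* Stubbed primitives standing in for the real Xen ones, so that the
 * control flow below is self-contained and compilable. */
static int rlock_depth;
static void read_lock(void)   { rlock_depth++; }
static void read_unlock(void) { rlock_depth--; }

/* Record any call made while the (stub) read lock is still held --
 * such a call is exactly the deadlock hazard discussed above. */
static int softirq_calls_while_locked;
static void process_pending_softirqs(void)
{
    if ( rlock_depth )
        softirq_calls_while_locked++;
}

/* The pattern under discussion: drop the lock around the softirq
 * processing, then re-take it before touching pdev->vpci again. */
static void msix_dump_step(void)
{
    read_unlock();
    process_pending_softirqs();
    read_lock();
    /* ...re-lookup the pdev and verify pdev->vpci is still the same... */
}
```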
How do we tell if pdev->vpci is the same? Jan has already raised
this question before [1], and I was about to use some ID for that purpose:
pdev->vpci->id = d->vpci_id++, and then use pdev->vpci->id for the checks.
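[Editorial sketch: the generation-ID idea proposed above might look roughly like this. All names here (`vpci_id`, `vpci_alloc`, `is_same_vpci`) are hypothetical simplifications for illustration, not actual Xen code, and the structures are cut down to the fields relevant to the check.]

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified structures -- not the real Xen ones. */
struct vpci {
    uint64_t id;        /* generation ID stamped at allocation time */
};

struct domain {
    uint64_t vpci_id;   /* monotonic counter, protected by d->vpci_rwlock */
};

struct pci_dev {
    struct vpci *vpci;
};

/* Allocate a vpci and stamp it with the domain's next generation ID. */
static struct vpci *vpci_alloc(struct domain *d)
{
    struct vpci *v = calloc(1, sizeof(*v));

    if ( v )
        v->id = d->vpci_id++;
    return v;
}

/* After dropping and re-taking the lock, the device only counts as
 * "the same" if pdev->vpci still exists and carries the remembered ID;
 * a re-created vpci gets a fresh ID and therefore fails the check. */
static int is_same_vpci(const struct pci_dev *pdev, uint64_t saved_id)
{
    return pdev && pdev->vpci && pdev->vpci->id == saved_id;
}
```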

Thank you,
Oleksandr

[1] https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg113790.html

 

