
Re: [PATCH] vpci: introduce per-domain lock to protect vpci structure


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 14 Feb 2022 11:34:10 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Mon, 14 Feb 2022 10:34:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Feb 14, 2022 at 09:36:39AM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 11.02.22 13:40, Roger Pau Monné wrote:
> > +
> >>>>        for ( i = 0; i < msix->max_entries; i++ )
> >>>>        {
> >>>>            const struct vpci_msix_entry *entry = &msix->entries[i];
> >>> Since this function is now called with the per-domain rwlock read
> >>> locked, it's likely not appropriate to call process_pending_softirqs
> >>> while holding such a lock (see below).
> >> You are right, as it is possible that:
> >>
> >> process_pending_softirqs -> vpci_process_pending -> read_lock
> >>
> >> Even more, vpci_process_pending may also
> >>
> >> read_unlock -> vpci_remove_device -> write_lock
> >>
> >> in its error path. So process_pending_softirqs must never be
> >> invoked while holding d->vpci_rwlock.
> >>
> >> And also we need to check that pdev->vpci was not removed
> >> in between or *re-created*
> >>> We will likely need to re-iterate over the list of pdevs assigned to
> >>> the domain and assert that the pdev is still assigned to the same
> >>> domain.
> >> So, do you mean a pattern like the below should be used at all
> >> places where we need to call process_pending_softirqs?
> >>
> >> read_unlock
> >> process_pending_softirqs
> >> read_lock
> >> pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
> >> if ( pdev && pdev->vpci && is_the_same_vpci(pdev->vpci) )
> >> <continue processing>
> > Something along those lines. You likely need to continue iterate using
> > for_each_pdev.
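The unlock / process softirqs / relock / revalidate pattern under discussion could be sketched roughly as below. This is a self-contained model, not Xen code: the lock and softirq primitives are empty stubs, lookup_pdev() stands in for pci_get_pdev_by_domain(), and the generation `id` field on struct vpci is the hypothetical identifier mentioned later in the thread, not an existing member.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the real Xen structures; 'id' is a hypothetical
 * generation counter used to detect a re-created pdev->vpci. */
struct vpci { unsigned int id; };
struct pci_dev { struct vpci *vpci; };

static struct vpci vpci_instance = { .id = 1 };
static struct pci_dev dev = { .vpci = &vpci_instance };

/* Stubs standing in for Xen's locking and softirq primitives. */
static void read_lock(void)   { }
static void read_unlock(void) { }
static void process_pending_softirqs(void) { /* may re-enter vpci code */ }

/* Stand-in for pci_get_pdev_by_domain(): may return NULL if the
 * device was removed while the lock was dropped. */
static struct pci_dev *lookup_pdev(void) { return &dev; }

/*
 * Drop the read lock before processing softirqs (holding it could
 * deadlock against vpci_process_pending / vpci_remove_device), then
 * re-acquire it and check that the device is still assigned and that
 * pdev->vpci was neither removed nor re-created in the meantime.
 */
static bool revalidate(unsigned int cached_id, struct pci_dev **pdev)
{
    read_unlock();
    process_pending_softirqs();
    read_lock();

    *pdev = lookup_pdev();
    if ( !*pdev || !(*pdev)->vpci )
        return false;                 /* device or vpci state is gone */

    /* A differing generation id means pdev->vpci was re-created. */
    return (*pdev)->vpci->id == cached_id;
}
```

On a mismatch the caller would restart the iteration with for_each_pdev rather than continue with the stale pointer.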
> How do we tell if pdev->vpci is the same? Jan has already raised
> this question before [1], and I was about to use some ID for that
> purpose: pdev->vpci->id = d->vpci_id++, and then use pdev->vpci->id
> for the checks.

Given this is a debug message I would be OK with just doing the
minimal checks to prevent Xen from crashing (ie: pdev->vpci exists,
and the MSI entry we resume at is not past the current limit).
Otherwise just print a message and move on to the next device.

pdev->vpci is only re-created as a result of some admin operations,
and trying to print the current MSI status while such an operation is
in progress is not a reliable approach anyway. So dumping an
incomplete or incoherent state as a result of ongoing admin
operations would be fine.
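A minimal-checks dump loop along those lines might look as follows. Again this is only a sketch with stub types; the structure layout, field names and dump_msix() helper are assumptions for illustration, not the actual vpci code.

```c
#include <stdio.h>

/* Simplified stand-ins for the vpci MSI-X structures. */
struct vpci_msix_entry { int masked; };
struct vpci_msix {
    unsigned int max_entries;
    struct vpci_msix_entry entries[8];
};
struct vpci { struct vpci_msix *msix; };
struct pci_dev { const char *name; struct vpci *vpci; };

/*
 * Dump MSI-X entries starting at 'start' (e.g. the entry we resumed
 * at after dropping the lock for process_pending_softirqs).  Only the
 * minimal checks are done: skip the device if its vpci state is gone,
 * or if the resume index is past the current limit.  Returns the
 * number of entries printed.
 */
static unsigned int dump_msix(const struct pci_dev *pdev, unsigned int start)
{
    unsigned int printed = 0;

    if ( !pdev->vpci || !pdev->vpci->msix )
    {
        printf("%s: no vpci/MSI-X state, skipping\n", pdev->name);
        return 0;
    }

    if ( start >= pdev->vpci->msix->max_entries )
    {
        printf("%s: resume entry %u past limit %u, skipping\n",
               pdev->name, start, pdev->vpci->msix->max_entries);
        return 0;
    }

    for ( unsigned int i = start; i < pdev->vpci->msix->max_entries; i++ )
    {
        printf("%s: entry %u masked %d\n", pdev->name, i,
               pdev->vpci->msix->entries[i].masked);
        printed++;
    }

    return printed;
}
```

The point is that a device failing either check is simply reported and skipped, rather than the dump trying to be coherent against concurrent admin operations.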

Thanks, Roger.



 

