
Re: [PATCH v3 11/11] xen/arm: Translate virtual PCI bus topology for guests


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 1 Oct 2021 10:12:15 +0200
  • Cc: "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 01 Oct 2021 08:12:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01.10.2021 09:57, Oleksandr Andrushchenko wrote:
> 
> 
> On 01.10.21 10:42, Jan Beulich wrote:
>> On 30.09.2021 18:57, Oleksandr Andrushchenko wrote:
>>> [snip]
>>>
>>>>> +    bool found = false;
>>>>> +
>>>>> +    pcidevs_lock();
>>>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>>>> +    {
>>>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>>>> +        {
>>>>> +            /* Replace virtual SBDF with the physical one. */
>>>>> +            *sbdf = vdev->pdev->sbdf;
>>>>> +            found = true;
>>>>> +            break;
>>>>> +        }
>>>>> +    }
>>>>> +    pcidevs_unlock();
>>>> As per the comments on the earlier patch, locking as well as placement
>>>> may need reconsidering.
>>> I was thinking about the locking happening here.
>>> So, there are 4 sources where we need to manipulate d->vdev_list:
>>> 1. XEN_DOMCTL_assign_device
>>> 2. XEN_DOMCTL_test_assign_device
>>> 3. XEN_DOMCTL_deassign_device
>>> 4. MMIO handlers
>>> 5. Am I missing others?
>>>
>>> The first three already use pcidevs_{lock|unlock}, and there it seems
>>> acceptable, as those get called when PCI devices are discovered by Dom0
>>> and during guest domain creation. So this is assumed not to happen
>>> frequently and can be tolerated with respect to global locking.
>>>
>>> What is more important is the fourth case, where in order to redirect
>>> configuration space access from virtual SBDF to physical SBDF we need
>>> to traverse the d->vdev_list each time the guest accesses PCI configuration
>>> space. This means that with each such access we take a BIG PCI lock...
>>>
>>> That being said, I think that we may want to have a dedicated per-domain
>>> lock for d->vdev_list handling, e.g. d->vdev_lock.
>>> At the same time we may also consider that even for guests it is acceptable
>>> to use pcidevs_{lock|unlock} as this will not affect PCI memory space access
>>> and only has influence during device setup.
>>>
>>> I would love to hear your opinion on this.
>> I've voiced my opinion already: Using the global lock really is an
>> abuse, which would require good justification. Hence unless there's
>> anything speaking against a per-domain lock, that's imo the only
>> suitable route to go. Nesting rules with the global lock may want
>> explicitly spelling out.
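
A minimal sketch of what the config-space trap lookup could look like with
such a per-domain lock. Only d->vdev_list, vdev->sbdf, vdev->pdev and
pcidevs_{lock|unlock} come from this thread; the struct and function names
and the d->vdev_lock field below are illustrative assumptions, not the
actual patch.

/*
 * Hypothetical sketch: translate a guest (virtual) SBDF into the physical
 * one while holding only a per-domain lock, so that config-space traps do
 * not serialise on the global pcidevs lock.
 */
#include <xen/list.h>
#include <xen/pci.h>
#include <xen/sched.h>
#include <xen/spinlock.h>

/* Assumed shape of the per-domain virtual-device entry (cf. the patch). */
struct vpci_vdev {
    struct list_head list;        /* linked into d->vdev_list */
    pci_sbdf_t sbdf;              /* virtual SBDF seen by the guest */
    const struct pci_dev *pdev;   /* backing physical device */
};

bool vpci_translate_virtual_sbdf(struct domain *d, pci_sbdf_t *sbdf)
{
    struct vpci_vdev *vdev;
    bool found = false;

    /* d->vdev_lock is the per-domain lock proposed in this thread. */
    spin_lock(&d->vdev_lock);
    list_for_each_entry ( vdev, &d->vdev_list, list )
    {
        if ( vdev->sbdf.sbdf == sbdf->sbdf )
        {
            /* Replace the virtual SBDF with the physical one. */
            *sbdf = vdev->pdev->sbdf;
            found = true;
            break;
        }
    }
    spin_unlock(&d->vdev_lock);

    return found;
}
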
> I do understand your concern here and also support the idea that
> the less time we spend waiting on locks, the better. Nevertheless, even if
> I introduce d->vdev_lock, which will obviously help the MMIO traps, the
> rest will remain under pcidevs_{lock|unlock}, e.g. XEN_DOMCTL_assign_device,
> XEN_DOMCTL_test_assign_device and XEN_DOMCTL_deassign_device,
> and the underlying code like vpci_{assign|deassign}_device in my case.

Well, it's entirely usual that certain operations require more than one
lock.
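
Correspondingly, a sketch of the nesting rule being asked for: the global
pcidevs lock is the outer lock and the per-domain d->vdev_lock is only ever
nested inside it, so the trap-side lookup above cannot deadlock against
assignment. The helper name is again an illustrative assumption, and it
reuses struct vpci_vdev from the sketch above.

#include <xen/errno.h>
#include <xen/xmalloc.h>

/*
 * Hypothetical sketch of the assignment-side update (XEN_DOMCTL_assign_device
 * path), showing the lock nesting: pcidevs_lock() (global, taken by the
 * caller) is the outer lock, d->vdev_lock (per-domain) the inner one.
 */
static int vpci_add_virtual_device(struct domain *d,
                                   const struct pci_dev *pdev,
                                   pci_sbdf_t virt_sbdf)
{
    struct vpci_vdev *vdev = xzalloc(struct vpci_vdev);

    if ( !vdev )
        return -ENOMEM;

    vdev->sbdf = virt_sbdf;
    vdev->pdev = pdev;

    ASSERT(pcidevs_locked());    /* outer (global) lock held by the caller */

    spin_lock(&d->vdev_lock);    /* inner (per-domain) lock for the list only */
    list_add_tail(&vdev->list, &d->vdev_list);
    spin_unlock(&d->vdev_lock);

    return 0;
}
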

> Anyways, I'll implement a per-domain d->vdev_lock

Thanks.

Jan
