
Re: [PATCH v2 11/11] xen/arm: Translate virtual PCI bus topology for guests


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Mon, 27 Sep 2021 13:43:39 +0000
  • Cc: "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Delivery-date: Mon, 27 Sep 2021 13:43:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 11/11] xen/arm: Translate virtual PCI bus topology for guests

On 27.09.21 16:34, Jan Beulich wrote:
> On 27.09.2021 14:08, Oleksandr Andrushchenko wrote:
>> On 27.09.21 14:31, Jan Beulich wrote:
>>> On 23.09.2021 14:55, Oleksandr Andrushchenko wrote:
>>>> --- a/xen/drivers/passthrough/pci.c
>>>> +++ b/xen/drivers/passthrough/pci.c
>>>> @@ -890,6 +890,31 @@ int pci_remove_virtual_device(struct domain *d, const struct pci_dev *pdev)
>>>>        return 0;
>>>>    }
>>>>    
>>>> +/*
>>>> + * Find the physical device which is mapped to the virtual device
>>>> + * and translate virtual SBDF to the physical one.
>>>> + */
>>>> +bool pci_translate_virtual_device(struct vcpu *v, pci_sbdf_t *sbdf)
>>> Why struct vcpu, when you only need ...
>>>
>>>> +{
>>>> +    struct domain *d = v->domain;
>>> ... this? It's also not really logical for this function to take a
>>> struct vcpu, as the translation should be uniform within a domain.
>> Agree, struct domain is just enough
>>> Also - const please (as said elsewhere before, ideally wherever possible
>>> and sensible).
>> Ok
>>>> +    struct vpci_dev *vdev;
>>>> +    bool found = false;
>>>> +
>>>> +    pcidevs_lock();
>>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>>> +    {
>>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>>> +        {
>>>> +            /* Replace virtual SBDF with the physical one. */
>>>> +            *sbdf = vdev->pdev->sbdf;
>>>> +            found = true;
>>>> +            break;
>>>> +        }
>>>> +    }
>>> For a DomU with just one or at most a couple of devices, such a brute
>>> force lookup may be fine. What about Dom0 though? The physical topology
>>> gets split at the segment level, so maybe this would by a reasonable
>>> granularity here as well?
>> Not sure I am following why topology matters here. We are just trying to
>> match one SBDF (as seen by the guest) to another SBDF (physical,
>> as seen by Dom0), so we can proxy DomU's configuration space accesses
>> to the proper device in Dom0.
> Topology here matters only insofar as I've suggested having separate
> lists per segment, to reduce lookup times. Other methods of avoiding a
> fully linear search are of course possible as well.

Ah, with respect to that, then of course. But let's be realistic: how many
PCI devices are normally passed through to a guest? I would assume fewer
than 10 most of the time. Given such a small number of devices I see no
benefit, only unneeded complexity, in accounting for virtual devices per
segment and performing the corresponding lookup. So, I would go with a
single list and the "brute force" lookup unless it is clearly shown that
this needs to be optimized.
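
For illustration only, a minimal sketch (not the final patch) of how the
helper could look with the changes agreed above, i.e. taking a
const struct domain * instead of a struct vcpu * and adding the blank line
before the return, while keeping the single list; d->vdev_list and
struct vpci_dev are assumed to be as introduced earlier in this series:

/*
 * Find the physical device which is mapped to the virtual device
 * and translate the virtual SBDF to the physical one.
 */
bool pci_translate_virtual_device(const struct domain *d, pci_sbdf_t *sbdf)
{
    struct vpci_dev *vdev;
    bool found = false;

    pcidevs_lock();
    list_for_each_entry ( vdev, &d->vdev_list, list )
    {
        if ( vdev->sbdf.sbdf == sbdf->sbdf )
        {
            /* Replace the virtual SBDF with the physical one. */
            *sbdf = vdev->pdev->sbdf;
            found = true;
            break;
        }
    }
    pcidevs_unlock();

    return found;
}

If it later turns out that Dom0 really needs something faster, only the
container would need to change (e.g. per-segment lists as you suggest);
the interface of the helper would stay the same.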

>
>>>> +    pcidevs_unlock();
>>>> +    return found;
>>> Nit: Blank line please ahead of the main "return" of a function.
>> Sure
>>>> +}
>>>> +
>>>>    /* Caller should hold the pcidevs_lock */
>>>>    static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>>>                               uint8_t devfn)
>>> Seeing this function in context (which patch 2 adds without any #ifdef
>>> around it afaics),
>> I believe you are talking about vpci_deassign_device here.
>> vpci_{assign|deassign}_device seem not to be called on x86 PVH as of now,
>> that is true.
>>
>>>    will this new function needlessly be built on x86 as
>>> well?
>> It will at the moment. But to avoid ifdefery I would still like
>> to implement it as an empty function for x86.
>>
>>>    (I didn't look at other intermediate patches yet, so please
>>> forgive if I've missed the addition of an #ifdef.)
>> So, I can gate this with HAS_VPCI_GUEST_SUPPORT in patch 2
>> (HAS_VPCI_GUEST_SUPPORT is introduced in patch 4, so I'll move it to patch 2).
>> Does this sound good?
> Yes.
I'll see how I can get rid of the code that x86 doesn't use.
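
For example, roughly (just to sketch the direction; the exact header and
placement are assumptions on my side, not necessarily what v3 will do):

#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
bool pci_translate_virtual_device(const struct domain *d, pci_sbdf_t *sbdf);
#else
static inline bool pci_translate_virtual_device(const struct domain *d,
                                                pci_sbdf_t *sbdf)
{
    /* No virtual topology: nothing to translate. */
    return false;
}
#endif

so callers would not need any #ifdef of their own and x86 builds would
not carry the unused code.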
> Jan
>
Thank you,

Oleksandr

 

