Re: [PATCH 06/10] vpci: Make every domain handle its own BARs



On 07.12.2020 10:11, Oleksandr Andrushchenko wrote:
> On 12/7/20 10:48 AM, Jan Beulich wrote:
>> On 04.12.2020 15:38, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 4:51 PM, Jan Beulich wrote:
>>>> On 13.11.2020 15:44, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 4:38 PM, Jan Beulich wrote:
>>>>>> On 13.11.2020 15:32, Oleksandr Andrushchenko wrote:
>>>>>>> On 11/13/20 4:23 PM, Jan Beulich wrote:
>>>>>>>>      Earlier on I didn't say you should get this to work, only
>>>>>>>> that I think the general logic around what you add shouldn't make
>>>>>>>> things more arch specific than they really should be. That said,
>>>>>>>> something similar to the above should still be doable on x86,
>>>>>>>> utilizing struct pci_seg's bus2bridge[]. There ought to be
>>>>>>>> DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
>>>>>>>> (provided by the CPUs themselves rather than the chipset) aren't
>>>>>>>> really host bridges for the purposes you're after.
>>>>>>> You mean I can still use DEV_TYPE_PCI_HOST_BRIDGE as a marker while
>>>>>>> trying to detect what I need?
>>>>>> I'm afraid I don't understand what marker you're thinking about
>>>>>> here.
>>>>> I mean that when I go over bus2bridge entries, should I only work with
>>>>> those that have DEV_TYPE_PCI_HOST_BRIDGE?
>>>> Well, if you're after host bridges - yes.
>>>>
>>>> Jan
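
To make the "marker" idea concrete: a toy C model of walking a segment's
devices and acting only on those classified as host bridges could look like
the snippet below. The type names echo Xen's enum pdev_type, but the
structures, the device array and the helper are invented for illustration;
this is not the Xen code.

#include <stdio.h>

/* Toy model only: the names echo Xen's enum pdev_type, but the types
 * and the device array are made up for this illustration. */
enum model_dev_type {
    MODEL_DEV_TYPE_PCI_ENDPOINT,
    MODEL_DEV_TYPE_PCI_BRIDGE,
    MODEL_DEV_TYPE_PCI_HOST_BRIDGE,
};

struct model_pdev {
    unsigned int bus, devfn;
    enum model_dev_type type;
};

/* Use the type classification as the "marker": skip everything that
 * is not a host bridge. */
static void for_each_host_bridge(const struct model_pdev *devs,
                                 unsigned int n)
{
    for ( unsigned int i = 0; i < n; i++ )
    {
        if ( devs[i].type != MODEL_DEV_TYPE_PCI_HOST_BRIDGE )
            continue;
        printf("host bridge at %02x:%02x.%u\n", devs[i].bus,
               devs[i].devfn >> 3, devs[i].devfn & 7);
    }
}

int main(void)
{
    static const struct model_pdev devs[] = {
        { 0x00, 0x00, MODEL_DEV_TYPE_PCI_HOST_BRIDGE },
        { 0x00, 0x08, MODEL_DEV_TYPE_PCI_ENDPOINT },
    };

    for_each_host_bridge(devs, sizeof(devs) / sizeof(devs[0]));
    return 0;
}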
>>> So, I started looking at bus2bridge and whether it can be used for both
>>> x86 (and possibly ARM), and I have the impression that its original purpose
>>> was to identify the devices which the x86 IOMMU should cover: e.g. the
>>> find_upstream_bridge users are the x86 IOMMU code and the VGA driver.
>>>
>>> I tried to play with this on ARM, and for the HW platform I have and for
>>> QEMU I got 0 entries in bus2bridge...
>>>
>>> This is because of how xen/drivers/passthrough/pci.c:alloc_pdev is
>>> implemented (which lives in the common code BTW, but seems to be
>>> x86-specific): it skips setting up bus2bridge entries for the bridges I
>>> have.
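
For context, that setup boils down to a switch on the classified device type,
with only the PCI(e)-to-PCI bridge cases recording anything in bus2bridge.
Below is a compressed, self-contained model of why a topology consisting only
of host bridges ends up with 0 entries; it is a paraphrase with simplified
field and type names, not the literal Xen source.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct pci_seg's bus2bridge[] (the real
 * array is per-segment and protected by a lock). */
#define MAX_BUSES 256
static struct { uint8_t map, bus, devfn; } bus2bridge[MAX_BUSES];

enum dev_type {
    DEV_TYPE_PCIe_ENDPOINT,
    DEV_TYPE_PCI_HOST_BRIDGE,
    DEV_TYPE_PCIe2PCI_BRIDGE,
    DEV_TYPE_LEGACY_PCI_BRIDGE,
};

/* Model of the classification step in alloc_pdev(): only the two
 * PCI(e)-to-PCI bridge types record anything, so host bridges (and
 * endpoints) leave bus2bridge untouched. */
static void model_alloc_pdev(enum dev_type type, uint8_t bus, uint8_t devfn,
                             uint8_t sec_bus, uint8_t sub_bus)
{
    switch ( type )
    {
    case DEV_TYPE_PCIe2PCI_BRIDGE:
    case DEV_TYPE_LEGACY_PCI_BRIDGE:
        for ( unsigned int b = sec_bus; b <= sub_bus; b++ )
        {
            bus2bridge[b].map = 1;
            bus2bridge[b].bus = bus;
            bus2bridge[b].devfn = devfn;
        }
        break;

    case DEV_TYPE_PCI_HOST_BRIDGE:
    default:
        /* Nothing recorded -- hence the "0 entries" observation. */
        break;
    }
}

int main(void)
{
    unsigned int mapped = 0;

    /* A host bridge only: no bus2bridge entries result. */
    model_alloc_pdev(DEV_TYPE_PCI_HOST_BRIDGE, 0, 0, 0, 0);

    for ( unsigned int b = 0; b < MAX_BUSES; b++ )
        mapped += bus2bridge[b].map;
    printf("bus2bridge entries: %u\n", mapped);
    return 0;
}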
>> I'm curious to learn what's x86-specific here. I also can't deduce
>> why bus2bridge setup would be skipped.
> 
> So, for example:
> 
> commit 0af438757d455f8eb6b5a6ae9a990ae245f230fd
> Author: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
> Date:   Fri Sep 27 10:11:49 2013 +0200
> 
>      AMD IOMMU: fix Dom0 device setup failure for host bridges
> 
>      The host bridge device (i.e. 0x18 for AMD) does not require IOMMU, and
>      therefore is not included in the IVRS. The current logic tries to map
>      all PCI devices to an IOMMU. In this case, "xl dmesg" shows the
>      following message on AMD system.
> 
>      (XEN) setup 0000:00:18.0 for d0 failed (-19)
>      (XEN) setup 0000:00:18.1 for d0 failed (-19)
>      (XEN) setup 0000:00:18.2 for d0 failed (-19)
>      (XEN) setup 0000:00:18.3 for d0 failed (-19)
>      (XEN) setup 0000:00:18.4 for d0 failed (-19)
>      (XEN) setup 0000:00:18.5 for d0 failed (-19)
> 
>      This patch adds a new device type (i.e. DEV_TYPE_PCI_HOST_BRIDGE) which
>      corresponds to PCI class code 0x06 and sub-class 0x00. Then, it uses
>      this new type to filter when trying to map device to IOMMU.
> 
> One of my test systems has a DEV_TYPE_PCI_HOST_BRIDGE device, so bus2bridge
> setup is skipped for it.
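
Put differently, the commit's approach amounts to classifying by PCI class
code (base class 0x06, sub-class 0x00) and then skipping such devices when
setting up the IOMMU. A minimal sketch of that classification, using invented
helper names rather than the actual Xen functions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Base class 0x06 (bridge), sub-class 0x00 (host bridge), as stated
 * in the commit message above. */
#define CLASS_BRIDGE   0x06
#define SUBCLASS_HOST  0x00

/* Invented helper: the 24-bit class code has the base class in the
 * top byte and the sub-class in the middle byte. */
static bool is_host_bridge(uint32_t class_code)
{
    return (class_code >> 16) == CLASS_BRIDGE &&
           ((class_code >> 8) & 0xff) == SUBCLASS_HOST;
}

int main(void)
{
    /* 0x060000: host bridge; 0x020000: network controller. */
    static const uint32_t class_codes[] = { 0x060000, 0x020000 };

    for ( unsigned int i = 0; i < 2; i++ )
        printf("class %06x -> %s\n", (unsigned int)class_codes[i],
               is_host_bridge(class_codes[i]) ? "skip IOMMU setup"
                                              : "map to IOMMU");
    return 0;
}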

If there's data to be sensibly recorded for host bridges, I don't
see why the function couldn't be updated. I don't view this as
x86-specific; it may just be that on x86 we have no present use
for such data. It may in turn be the case that the x86-specific
call sites consuming this data then need updating so they are not
misled by the change in what data gets recorded. But that's still all within
the scope of bringing intended-to-be-arch-independent code closer
to actually being arch-independent.
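
One conceivable shape of such an update -- purely illustrative and
self-contained, not a proposed patch against alloc_pdev() -- would be to
record host bridges as well but tag each entry with the bridge type, so that
consumers which only care about PCI(e)-to-PCI bridges (find_upstream_bridge
style lookups) keep skipping what they don't want:

#include <stdint.h>
#include <stdio.h>

#define MAX_BUSES 256

enum bridge_type {
    BRIDGE_NONE,
    BRIDGE_PCI_TO_PCI,
    BRIDGE_HOST,
};

/* bus -> upstream bridge, now also carrying the bridge type so a
 * caller can tell host-bridge entries apart. */
static struct {
    uint8_t map, bus, devfn;
    enum bridge_type type;
} bus2bridge[MAX_BUSES];

static void record_bridge(enum bridge_type type, uint8_t bus, uint8_t devfn,
                          uint8_t sec_bus, uint8_t sub_bus)
{
    for ( unsigned int b = sec_bus; b <= sub_bus; b++ )
    {
        bus2bridge[b].map = 1;
        bus2bridge[b].bus = bus;
        bus2bridge[b].devfn = devfn;
        bus2bridge[b].type = type;  /* host bridges are recorded too */
    }
}

/* An upstream-bridge lookup that only wants PCI(e)-to-PCI bridges can
 * now skip host-bridge entries instead of being misled by them. */
static int find_upstream_pci_bridge(uint8_t bus, uint8_t *br_bus,
                                    uint8_t *br_devfn)
{
    if ( !bus2bridge[bus].map || bus2bridge[bus].type != BRIDGE_PCI_TO_PCI )
        return 0;
    *br_bus = bus2bridge[bus].bus;
    *br_devfn = bus2bridge[bus].devfn;
    return 1;
}

int main(void)
{
    uint8_t b, d;

    record_bridge(BRIDGE_HOST, 0, 0, 0, 0);            /* bus 0: host bridge */
    record_bridge(BRIDGE_PCI_TO_PCI, 0, 1 << 3, 1, 1); /* bus 1: P2P bridge */

    printf("bus 0 upstream PCI bridge found: %d\n",
           find_upstream_pci_bridge(0, &b, &d));
    printf("bus 1 upstream PCI bridge found: %d\n",
           find_upstream_pci_bridge(1, &b, &d));
    return 0;
}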

Jan
