
Re: [Xen-devel] [RFC PATCH 07/12] hvmloader: allocate MMCONFIG area in the MMIO hole + minor code refactoring



>>> On 22.03.18 at 01:31, <x1917x@xxxxxxxxx> wrote:
> On Wed, 21 Mar 2018 17:06:28 +0000
> Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
> [...]
>>> Well, this might actually work, although the overall scenario
>>> becomes a bit convoluted for _PCI_CONFIG ioreqs. Here is how it
>>> will look:
>>> 
>>> QEMU receives a PCIEXBAR update -> calls the new dmop to tell Xen
>>> the new MMCONFIG address/size -> Xen (re)maps the MMIO trapping
>>> area -> someone accesses this area -> Xen intercepts this MMIO
>>> access
>>> 
>>> But here's what happens next:
>>> 
>>> Xen translates the MMIO access into a PCI_CONFIG ioreq and sends
>>> it to the DM -> the DM receives the _PCI_CONFIG ioreq -> the DM
>>> translates the BDF/addr info back to an offset in the emulated
>>> MMCONFIG range -> the DM calls address_space_read/write to trigger
>>> MMIO emulation
>>>   
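To make the quoted scenario concrete, a rough sketch of the DM side
follows. Everything not defined by the series so far is my own
invention: xendevicemodel_map_mmcfg() merely stands in for "the new
dmop", and the mmcfg_base field is assumed to be recorded at
registration time.

static int report_mmcfg(XenIOState *state, uint64_t base,
                        uint8_t start_bus, uint8_t end_bus)
{
    /* Remember the window so PCI_CONFIG ioreqs can be translated
     * back later, and tell Xen so it can (re)map the trapping area.
     * Both the dmop and the mmcfg_base field are hypothetical. */
    state->mmcfg_base = base;
    return xendevicemodel_map_mmcfg(xen_dmod, xen_domid, base,
                                    start_bus, end_bus);
}

static void handle_pci_config_ioreq(XenIOState *state, ioreq_t *req)
{
    uint32_t sbdf = req->addr >> 32;       /* seg:bus:dev:fn */
    uint32_t reg = (uint32_t)req->addr;    /* config space offset */

    /* PCIe MMCONFIG layout: (bus << 20) | (dev << 15) | (fn << 12)
     * | reg, i.e. the low 16 bits of the SBDF shifted left by 12. */
    hwaddr offset = ((hwaddr)(sbdf & 0xffff) << 12) | (reg & 0xfff);

    /* Re-enter MMIO emulation at the reconstructed MMCONFIG address
     * (data_is_ptr and error handling omitted). */
    address_space_rw(&address_space_memory,
                     state->mmcfg_base + offset,
                     MEMTXATTRS_UNSPECIFIED,
                     (uint8_t *)&req->data, req->size,
                     req->dir == IOREQ_WRITE);
}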
>>
>> That would only be true of a DM that cannot handle PCI config
>> ioreqs directly.
> 
> It's just a bit problematic for xen-hvm.c (the Xen ioreq processor
> in QEMU).
> 
> It receives these PCI conf ioreqs without any context. To work
> around this, the existing code issues I/O to the emulated CF8h/CFCh
> ports so that QEMU can find their target. But we can't use the same
> method for MMCONFIG accesses -- that trick works for the basic PCI
> conf space only.
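For reference, the CF8h/CFCh re-issue being described looks roughly
like this (reconstructed from memory from the IOREQ_TYPE_PCI_CONFIG
case in xen-hvm.c's handle_ioreq(), and simplified):

static void cpu_ioreq_pci_config(XenIOState *state, ioreq_t *req)
{
    uint32_t sbdf = req->addr >> 32;
    uint32_t cf8;

    /* Fake a write to port 0xCF8 so that the subsequent config
     * cycle targets the right bus/device/function/register. */
    cf8 = (1u << 31) |
          ((uint32_t)(req->addr & 0x0f00) << 16) |
          ((sbdf & 0xffff) << 8) |
          ((uint32_t)req->addr & 0xfc);
    do_outp(0xcf8, 4, cf8);

    /* Then re-issue the access itself as port I/O to 0xCFC. */
    req->addr = 0xcfc | (req->addr & 0x03);
    cpu_ioreq_pio(state, req);
}

Since the emulated CF8h/CFCh pair only decodes the 256-byte legacy
config space, extended (MMCONFIG-only) offsets cannot be reached this
way -- hence the limitation mentioned above.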

I think you want to view this the other way around: no physical
device would ever get to see MMCFG accesses (or CF8/CFC port ones) --
the host bridge decodes those, and the device only ever sees plain
config cycles. This same layering is what we should have in the
virtualized case.
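QEMU's PCI core models exactly this layering already, incidentally:
both config mechanisms first resolve the target device and then call
the same per-device accessor, so the device itself never learns which
mechanism the guest used. Simplified from memory (hw/pci/pci_host.c
and hw/pci/pcie_host.c; details may differ between versions):

/* Legacy CF8h/CFCh path: */
uint32_t pci_data_read(PCIBus *s, uint32_t addr, unsigned len)
{
    PCIDevice *pci_dev = pci_dev_find_by_addr(s, addr);

    if (!pci_dev) {
        return ~0x0;
    }
    /* Same accessor as below, limited to the 256-byte legacy space. */
    return pci_host_config_read_common(pci_dev,
                                       addr & (PCI_CONFIG_SPACE_SIZE - 1),
                                       PCI_CONFIG_SPACE_SIZE, len);
}

/* MMCONFIG path: */
static uint64_t pcie_mmcfg_data_read(void *opaque, hwaddr mmcfg_addr,
                                     unsigned len)
{
    PCIExpressHost *e = opaque;
    PCIDevice *pci_dev = pcie_dev_find_by_mmcfg_addr(e->pci.bus,
                                                     mmcfg_addr);

    if (!pci_dev) {
        return ~0x0;
    }
    /* Same accessor as above, with the full (possibly 4K) config
     * space visible. */
    return pci_host_config_read_common(pci_dev,
                                       PCIE_MMCFG_CONFOFFSET(mmcfg_addr),
                                       pci_config_size(pci_dev), len);
}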

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

