
Re: [Xen-devel] [RFC PATCH 07/12] hvmloader: allocate MMCONFIG area in the MMIO hole + minor code refactoring



On Wed, 21 Mar 2018 09:36:04 +0000
Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
>> 
>> Although this is the most common scenario, it's not the only one
>> supported by Xen. Your proposed solution breaks the usage of multiple
>> IOREQ servers as PCI device emulators.
>
>Indeed it will, and that is not acceptable even in the short term.

Hmm, what exactly are you rejecting? QEMU's use of established
interfaces which Xen itself provides for QEMU to use? Is there any
particular reason why QEMU can use map_io_range_to_ioreq_server() in
one case but not in another? It's an API available to QEMU after all.
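
For illustration, a minimal sketch of what that existing path looks
like from the device model's side (the handle and IOREQ server are
assumed to be set up already; the MMCONFIG base, size and ids below are
example values only, not anything Xen mandates):

#include <stdint.h>
#include <xendevicemodel.h>

#define MCFG_BASE  0xE0000000ULL     /* assumed emulated PCIEXBAR value */
#define MCFG_SIZE  (256ULL << 20)    /* 256 MiB: 256 buses x 1 MiB each */

/* Claim the MMCONFIG window as an ordinary emulated MMIO range. */
static int map_mmconfig(xendevicemodel_handle *dmod, domid_t domid,
                        ioservid_t ioservid)
{
    return xendevicemodel_map_io_range_to_ioreq_server(
        dmod, domid, ioservid, 1 /* is_mmio */,
        MCFG_BASE, MCFG_BASE + MCFG_SIZE - 1);
}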

If we actually switch to the approach of informing Xen about the
emulated MMCONFIG range (via a new dmop/hypercall), who should prevent
QEMU from actually mapping this range via map_io_range_to_ioreq_server?
QEMU itself? Or Xen? How would that look: "QEMU asks us to map this
range as emulated MMIO, but it previously told us that the emulated
PCIEXBAR register points there, so we won't allow it"?
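
To be clear, no such interface exists today. Purely to illustrate the
idea (every name and field below is invented, not a real Xen dmop),
such a payload might carry little more than the window location:

#include <stdint.h>

/* Hypothetical layout, for illustration only. */
struct xen_dm_op_set_mmconfig {
    uint64_t base;       /* guest-physical MMCONFIG base (emulated PCIEXBAR) */
    uint16_t segment;    /* PCI segment group number */
    uint8_t  start_bus;  /* first bus number the window decodes */
    uint8_t  end_bus;    /* last bus number the window decodes */
    uint32_t pad;        /* keep the structure 8-byte sized/aligned */
};

QEMU would presumably issue it whenever the guest reprograms PCIEXBAR.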

>> > I think it will be safe to use MMCONFIG emulation on MMIO level
>> > for now and later extend it with 'set_mmconfig_' dmop/hypercall
>> > for the 'multiple device emulators' IOREQ_TYPE_COPY routing to
>> > work same as for PCI conf, so it can be used by XenGT etc on Q35
>> > as well.  
>Introducing known breakage is not really on, particularly when it can
>be avoided with a reasonable amount of extra work.

It's hard to break something which doesn't exist. :) The multiple
device emulators feature does not currently support translation/routing
of MMCONFIG MMIO accesses; that support must be designed first.
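
For what it's worth, the translation itself is mechanical: an MMCONFIG
access is just an ECAM offset, so it decodes into bus/devfn/register
which could then feed the same per-BDF IOREQ server lookup that the
CF8/CFC path uses today. A rough sketch of the decode step only:

#include <stdint.h>

struct ecam_decode {
    uint8_t  bus;
    uint8_t  devfn;    /* device (bits 7:3) and function (bits 2:0) */
    uint16_t reg;      /* offset into the device's 4K config space */
};

/* Standard ECAM layout: bus in bits 27:20, devfn in 19:12, reg in 11:0 */
static struct ecam_decode decode_mmcfg(uint64_t mmcfg_base, uint64_t addr)
{
    uint64_t off = addr - mmcfg_base;
    struct ecam_decode d = {
        .bus   = (off >> 20) & 0xff,
        .devfn = (off >> 12) & 0xff,
        .reg   = off & 0xfff,
    };
    return d;
}

The open question is not the decode but where it lives and how the
result is handed to the right emulator, which is exactly the part that
needs designing.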
