
Re: [Xen-devel] [PATCH v3 0/9] vpci: PCI config space emulation



On Mon, May 29, 2017 at 07:38:14AM -0600, Jan Beulich wrote:
> >>> On 27.04.17 at 16:35, <roger.pau@xxxxxxxxxx> wrote:
> > The following series contain an implementation of handlers for the PCI
> > configuration space inside of Xen. This allows Xen to detect accesses to the
> > PCI configuration space and react accordingly.
> > 
> > Although there hasn't been a lot of review on the previous version, I'm
> > sending this new version because I will be away for > 1 week, and I would
> > rather have review on this version than on the old one. As usual, each
> > patch contains a changeset summary between versions.
> > 
> > Patch 1 implements the generic handlers for accesses to the PCI
> > configuration space, together with a minimal user-space test harness that
> > I've used during development. Currently a per-device red-black tree is
> > used to store the list of handlers, and they are indexed by their offset
> > inside the configuration space. Patch 1 also adds the x86 port IO traps
> > and wires them into the newly introduced vPCI dispatchers. Patch 2 adds
> > handlers for the ECAM areas (as found in the MMCFG ACPI table). Patches 3
> > and 4 are mostly code movement/refactoring in order to implement support
> > for BAR mapping in patch 5. Patch 6 allows Xen to mask certain PCI
> > capabilities on demand, which is used to mask MSI and MSI-X.
> > 
> > Finally, patches 8 and 9 implement support for emulating the MSI/MSI-X
> > capabilities inside of Xen, so that interrupts are transparently routed
> > to the guest.
> 
> While the code looks reasonable for this early stage, it's still quite
> a large chunk of new logic.

Thanks for the detailed review, it's greatly appreciated (this is a large
amount of code, so I realize reviewing it is far from trivial).

I'm still going over the comments; I hope to be able to send a new
version before the end of the week.
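
To make this a bit more concrete while the discussion below is ongoing, the
dispatch model in patch 1 boils down to something like the stand-alone sketch
below. The names are made up for illustration only (this is not the interface
used in the series), and a sorted array stands in for the per-device red-black
tree just to keep the example short:

/*
 * User-space sketch of the per-device handler dispatch: handlers are
 * stored sorted by their offset in the PCI configuration space and
 * looked up on each access.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint32_t (*vpci_read_t)(unsigned int offset, unsigned int size);

struct vpci_handler {
    unsigned int offset;     /* start offset in config space */
    unsigned int size;       /* bytes covered by the handler */
    vpci_read_t read;        /* write handler omitted for brevity */
};

#define MAX_HANDLERS 16
static struct vpci_handler handlers[MAX_HANDLERS];
static unsigned int nr_handlers;

/* Register a handler for [offset, offset + size), keeping the array sorted. */
static int vpci_add_handler(unsigned int offset, unsigned int size,
                            vpci_read_t read)
{
    unsigned int i;

    if (nr_handlers == MAX_HANDLERS)
        return -1;

    for (i = 0; i < nr_handlers && handlers[i].offset < offset; i++)
        ;
    memmove(&handlers[i + 1], &handlers[i],
            (nr_handlers - i) * sizeof(handlers[0]));
    handlers[i] = (struct vpci_handler){ offset, size, read };
    nr_handlers++;
    return 0;
}

/* Dispatch a config space read to the handler covering the offset. */
static uint32_t vpci_read(unsigned int offset, unsigned int size)
{
    unsigned int i;

    for (i = 0; i < nr_handlers; i++)
        if (offset >= handlers[i].offset &&
            offset < handlers[i].offset + handlers[i].size)
            return handlers[i].read(offset, size);

    return ~0u; /* unhandled registers read as all ones */
}

/* Example handler covering the 16-bit vendor ID at offset 0. */
static uint32_t vendor_id_read(unsigned int offset, unsigned int size)
{
    return 0x1234;
}

int main(void)
{
    vpci_add_handler(0, 2, vendor_id_read);
    printf("vendor id: %#x\n", (unsigned int)vpci_read(0, 2));
    return 0;
}

The real handlers of course also have to deal with write accesses, partial
and unaligned reads, and locking; this only shows the lookup-by-offset core
that the port IO and ECAM traps dispatch into.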

> Therefore I think that, since there was no prior design discussion,
> some reasoning behind the decisions you've taken should be provided
> here. In particular I'm quite unhappy about this huge amount of
> intercepting and emulation, none of which we require for PV Dom0.
> IOW I continue to be unconvinced that putting all the burden on Xen,
> while not para-virtualizing any of this in the Dom0 kernel, is the
> right choice.

IMHO, there are two main reasons for doing all this emulation inside of
Xen. The first is to avoid adding a bunch of duplicated Xen PV-specific
code to each OS we want to support in PVH mode: that just spreads
Xen-specific code across OSes, which leads to a bigger maintenance
burden.

The second reason is that this code (or its functionality, to be more
precise) already exists in QEMU (and in pciback to a degree), and it's
code that we already support and maintain. By moving it into the
hypervisor itself, every guest type can make use of it and share a
single implementation (I know the code in this series is not yet
suitable for DomU HVM guests).

> It
> certainly would be if we were talking about HVM Dom0, but this is
> PVH, and the "PV" part comes first for a reason.

Well, I've been mostly forced into using the PVH name for historical
reasons, but when I started working on this I called it HVMlite,
because I think it's far more similar to HVM than to PV (and the PVH
Dom0 builder functions used the "hvm" prefix in the first iterations of
that series).

Since the original PVH was never finished, its name was reused in order
to spare us the embarrassment of having announced something that was
never completed, and to avoid adding more confusion for users.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

