
Re: [Xen-devel] [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap accesses to the PCI config space



> -----Original Message-----
> From: Roger Pau Monne
> Sent: 24 April 2017 12:03
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx;
> boris.ostrovsky@xxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>
> Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> accesses to the PCI config space
> 
> On Mon, Apr 24, 2017 at 11:19:10AM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne
> > > Sent: 24 April 2017 11:09
> > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx;
> > > boris.ostrovsky@xxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > > Wei Liu <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
> > > Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> > > accesses to the PCI config space
> > >
> > > On Mon, Apr 24, 2017 at 10:34:15AM +0100, Paul Durrant wrote:
> > > > > -----Original Message-----
> > > > > From: Roger Pau Monne
> > > > > Sent: 24 April 2017 10:09
> > > > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > > > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; konrad.wilk@xxxxxxxxxx;
> > > > > boris.ostrovsky@xxxxxxxxxx; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> > > > > Wei Liu <wei.liu2@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>;
> > > > > Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
> > > > > Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> > > > > accesses to the PCI config space
> > > > >
> > > > > On Fri, Apr 21, 2017 at 05:07:43PM +0100, Paul Durrant wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Roger Pau Monne [mailto:roger.pau@xxxxxxxxxx]
> > > > > > > Sent: 20 April 2017 16:18
> > > > > > > To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > > > > > > Cc: konrad.wilk@xxxxxxxxxx; boris.ostrovsky@xxxxxxxxxx;
> > > > > > > Roger Pau Monne <roger.pau@xxxxxxxxxx>; Ian Jackson
> > > > > > > <Ian.Jackson@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> > > > > > > Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper
> > > > > > > <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > > > > > Subject: [PATCH v2 1/9] xen/vpci: introduce basic handlers to
> > > > > > > trap accesses to the PCI config space
> > > > > > >
> > > > > > > This functionality is going to reside in vpci.c (and the
> > > > > > > corresponding vpci.h header), and should be arch-agnostic. The
> > > > > > > handlers introduced in this patch set up the basic functionality
> > > > > > > required in order to trap accesses to the PCI config space, and
> > > > > > > allow decoding the address and finding the corresponding handler
> > > > > > > that should handle the access (although no handlers are
> > > > > > > implemented).
> > > > > > >
> > > > > > > Note that the traps to the PCI IO ports registers (0xcf8/0xcfc)
> > > > > > > are set up inside an x86 HVM file, since that's not shared with
> > > > > > > other arches.
> > > > > > >
> > > > > > > A new XEN_X86_EMU_VPCI x86 domain flag is added in order to
> > > > > > > signal Xen whether a domain should use the newly introduced vPCI
> > > > > > > handlers; this is only enabled for PVH Dom0 at the moment.
> > > > > > >
> > > > > > > A very simple user-space test is also provided, so that the basic
> > > > > > > functionality of the vPCI traps can be asserted. This has been
> > > > > > > proven quite helpful during development, since the logic to handle
> > > > > > > partial accesses or accesses that expand across multiple registers
> > > > > > > is not trivial.
> > > > > > >
> > > > > > > The handlers for the registers are added to a red-black tree that
> > > > > > > indexes them based on their offset. Since Xen needs to handle
> > > > > > > partial accesses to the registers and accesses that expand across
> > > > > > > multiple registers, the logic in xen_vpci_{read/write} is kind of
> > > > > > > convoluted; I've tried to properly comment it in order to make it
> > > > > > > easier to understand.
> > > > > > >
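
As a rough illustration of the scheme described above (the structure and
function names below are hypothetical, not taken from the patch), each handler
could be keyed by its offset into the config space and found by walking the
tree:

/* Sketch only; assumes Xen's <xen/rbtree.h> and container_of(). */
struct vpci_register {
    struct rb_node node;   /* tree node, keyed by offset */
    unsigned int offset;   /* offset into the config space */
    unsigned int size;     /* register size: 1, 2 or 4 bytes */
    uint32_t (*read)(struct pci_dev *pdev, unsigned int reg, void *data);
    void (*write)(struct pci_dev *pdev, unsigned int reg, uint32_t val,
                  void *data);
    void *private;
};

static struct vpci_register *vpci_find_register(struct rb_root *root,
                                                unsigned int offset)
{
    struct rb_node *n = root->rb_node;

    while ( n )
    {
        struct vpci_register *r = container_of(n, struct vpci_register, node);

        if ( offset < r->offset )
            n = n->rb_left;
        else if ( offset >= r->offset + r->size )
            n = n->rb_right;
        else
            return r;   /* access falls inside this register */
    }

    return NULL;
}

A partial access then only touches a sub-range of the register returned, and a
wide access may need one lookup per register it overlaps, which is where most
of the complexity in the read/write dispatch comes from.
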
> > > > > >
> > > > > > Since config space is not exactly huge, I'm wondering why you used
> > > > > > an r-b tree rather than a direct map from register to handler?
> > > > >
> > > > > Hello,
> > > > >
> > > > > For local PCI the configuration space is only 256 bytes, which means
> > > > > using 1/2 a page (256 * 8) so that Xen can store a pointer for each
> > > > > possible register. The extended configuration space (ECAM) extends
> > > > > the space to 4K, which means we would use 8 pages per device
> > > > > (4096 * 8); I think that's too much.
> > > >
> > > > Ok, but I still think that adding an r-b tree implementation is just
> > > > more complexity in the way that io handlers are registered in Xen.
> > >
> > > But this complexity is completely hidden inside of the io handler itself
> > > that traps the access to 0xcf8/cfc (or ECAM areas).
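
For reference, the mechanism behind those ports: a 32-bit write to 0xcf8
latches the config address (enable bit, bus/device/function and register), and
byte/word/dword accesses to 0xcfc-0xcff then hit the selected register. A
minimal sketch of the decode, with made-up macro names:

#define CF8_ENABLED(cf8)  ((cf8) & 0x80000000u)   /* bit 31: decode enable */
#define CF8_BUS(cf8)      (((cf8) >> 16) & 0xff)  /* bits 23:16: bus */
#define CF8_SLOT(cf8)     (((cf8) >> 11) & 0x1f)  /* bits 15:11: device */
#define CF8_FUNC(cf8)     (((cf8) >> 8) & 0x7)    /* bits 10:8: function */
#define CF8_REG(cf8)      ((cf8) & 0xfc)          /* bits 7:2: dword register */

/* The byte actually accessed depends on which of 0xcfc..0xcff was used. */
static unsigned int cf8_to_reg(uint32_t cf8, unsigned int port)
{
    return CF8_REG(cf8) + (port & 3);
}
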
> > >
> > > Do you mean that you would like this functionality to be made available
> > > to IOREQ clients also, so that they could register handlers for specific
> > > PCI registers without owning the full configuration space of such a
> > > device?
> > >
> > > > TBH, the whole thing needs a clean-up. We don't have proper
> > > > range-based handler registration for port IO or MMIO at all (instead
> > > > we potentially call the 'accept' function for every handler for every
> > > > I/O). We then have (IIRC) an ordered list for MSI-X BAR registrations
> > > > and now you're proposing an r-b system for PCI config space.
> > >
> > > One way or another Xen needs to track handlers for the PCI config space,
> > > and currently this is not implemented inside of Xen.
> >
> > What I mean is that we should have some form of range-based IO handler
> > registration framework and then that can be used for port IO, MMIO and
> > PCI config space. For external config space emulation then yes, of course
> > the external emulator needs to claim the whole space for that SBDF, but
> > that's just a degenerate case of claiming a specific range within the SBDF.
> > Thus, if Xen can steer port IO, MMIO or PCI config accesses by range then
> > we can potentially use that framework to register internal emulation
> > handlers or a special emulation handler that sends the requests out to an
> > ioreq server.
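
Something along these lines, purely as an illustration of the idea (none of
these names exist in Xen; an SBDF plus offset would be flattened into the
address space being registered):

struct io_range_ops {
    int (*read)(void *priv, paddr_t addr, unsigned int size, uint64_t *val);
    int (*write)(void *priv, paddr_t addr, unsigned int size, uint64_t val);
};

/*
 * One registration per contiguous range, shared by port IO, MMIO and PCI
 * config space; an ioreq-forwarding implementation of the ops would be just
 * another client of the same interface.
 */
int register_io_range(struct domain *d, paddr_t start, paddr_t end,
                      const struct io_range_ops *ops, void *priv);
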
> 
> IMHO I'm not sure Xen needs PCI register based trapping granularity. I
> would argue that whatever (IOREQ or Xen internal function) wants to trap
> access to a specific PCI config device register needs to take care of all
> the registers for that device.
> 

Having distinct handlers for distinct groups of registers makes sense though...
e.g. being able to register a BAR handler for each BAR and then maybe an MSI-X
capability handler for wherever that appears in the capability chain, etc. If
you don't allow such registration at the top level then it ends up getting done
at the next level. That said, it may make more sense to have a top level of
emulation that just handles all register reads and writes to config space and
then a second level that has callbacks for BAR enumeration, bus master enable,
MSI-X mask/unmask, etc.
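
In rough terms, that second level could be a small set of callbacks, something
like the following sketch (the names are invented for illustration only):

struct vpci_device_ops {
    /* A BAR was written (or the memory decode bit changed). */
    void (*bar_update)(struct pci_dev *pdev, unsigned int bar,
                       uint64_t addr, bool mapped);
    /* The bus master enable bit in the command register was toggled. */
    void (*bus_master)(struct pci_dev *pdev, bool enable);
    /* An MSI-X table entry was masked or unmasked. */
    void (*msix_mask)(struct pci_dev *pdev, unsigned int entry, bool mask);
};

with the top level doing all of the raw config space reads and writes and only
invoking these when the relevant registers change.
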

> I will look into hooking this code (vPCI) into the existing hvm_*_ioreq
> functionality, so that vPCI claims the full PCI config space for each device
> it manages.

Cool.

  Paul

> 
> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

