
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Fri, 2015-06-26 at 14:20 +0530, Manish Jaggi wrote:
> 
> On Friday 26 June 2015 01:02 PM, Ian Campbell wrote:
> > On Fri, 2015-06-26 at 07:37 +0530, Manish Jaggi wrote:
> >> On Thursday 25 June 2015 10:56 PM, Konrad Rzeszutek Wilk wrote:
> >>> On Thu, Jun 25, 2015 at 01:21:28PM +0100, Ian Campbell wrote:
> >>>> On Thu, 2015-06-25 at 17:29 +0530, Manish Jaggi wrote:
> >>>>> On Thursday 25 June 2015 02:41 PM, Ian Campbell wrote:
> >>>>>> On Thu, 2015-06-25 at 13:14 +0530, Manish Jaggi wrote:
> >>>>>>> On Wednesday 17 June 2015 07:59 PM, Ian Campbell wrote:
> >>>>>>>> On Wed, 2015-06-17 at 07:14 -0700, Manish Jaggi wrote:
> >>>>>>>>> On Wednesday 17 June 2015 06:43 AM, Ian Campbell wrote:
> >>>>>>>>>> On Wed, 2015-06-17 at 13:58 +0100, Stefano Stabellini wrote:
> >>>>>>>>>>> Yes, pciback is already capable of doing that, see
> >>>>>>>>>>> drivers/xen/xen-pciback/conf_space.c
> >>>>>>>>>>>
> >>>>>>>>>>>> I am not sure if the pci-back driver can query the guest memory 
> >>>>>>>>>>>> map. Is there an existing hypercall ?
> >>>>>>>>>>> No, that is missing.  I think it would be OK for the virtual BAR 
> >>>>>>>>>>> to be
> >>>>>>>>>>> initialized to the same value as the physical BAR.  But I would 
> >>>>>>>>>>> let the
> >>>>>>>>>>> guest change the virtual BAR address and map the MMIO region 
> >>>>>>>>>>> wherever it
> >>>>>>>>>>> wants in the guest physical address space with
> >>>>>>>>>>> XENMEM_add_to_physmap_range.
> >>>>>>>>>> I disagree; given that we've apparently survived for years with
> >>>>>>>>>> x86 PV guests not being able to write to the BARs, I think it
> >>>>>>>>>> would be far simpler to extend this to ARM and x86 PVH too than
> >>>>>>>>>> to allow guests to start writing BARs, which raises various
> >>>>>>>>>> complex questions.
> >>>>>>>>>> All that's needed is for the toolstack to set everything up and 
> >>>>>>>>>> write
> >>>>>>>>>> some new xenstore nodes in the per-device directory with the BAR
> >>>>>>>>>> address/size.
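
[For illustration only: the backend directory already carries per-device
nodes such as num_devs and dev-N; the vbar-* names and values below are
hypothetical, not an agreed format.]

    # Existing nodes written by the toolstack when the device is attached:
    /local/domain/0/backend/pci/<domid>/0/num_devs = "1"
    /local/domain/0/backend/pci/<domid>/0/dev-0    = "0000:01:00.0"
    # Hypothetical additions carrying the virtual BAR address/size:
    /local/domain/0/backend/pci/<domid>/0/vbar-0-0 = "0x23000000,0x100000"
    /local/domain/0/backend/pci/<domid>/0/vbar-0-1 = "0x23100000,0x4000"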
> >>>>>>>>>>
> >>>>>>>>>> Also, most guests apparently don't reassign the PCI bus by
> >>>>>>>>>> default, so using a 1:1 mapping by default and allowing it to be
> >>>>>>>>>> changed would require modifying the guests to reassign. Easy on
> >>>>>>>>>> Linux, but I don't know about others, and I imagine some OSes
> >>>>>>>>>> (especially simpler/embedded ones) are assuming the firmware
> >>>>>>>>>> sets up something sane by default.
> >>>>>>>>> Does the flow below capture all the points?
> >>>>>>>>> a) When assigning a device to domU, the toolstack creates a node
> >>>>>>>>> in the per-device directory with the virtual BAR address/size.
> >>>>>>>>>
> >>>>>>>>> Option 1:
> >>>>>>>>> b) The toolstack, using some hypercall, asks Xen to create the
> >>>>>>>>> p2m mapping { virtual BAR : physical BAR } for domU.
> >>>>>>> While implementing this, I think that rather than the toolstack,
> >>>>>>> the pciback driver in dom0 can send the hypercall to map the
> >>>>>>> physical BAR to the virtual BAR.
> >>>>>>> Thus no xenstore entry is required for BARs.
> >>>>>> pciback doesn't (and shouldn't) have sufficient knowledge of the guest
> >>>>>> address space layout to determine what the virtual BAR should be. The
> >>>>>> toolstack is the right place for that decision to be made.
> >>>>> Yes, the point is the pciback driver reads the physical BAR regions
> >>>>> on request from domU.
> >>>>> So it sends a hypercall to map the physical BARs into the stage-2
> >>>>> translation for the domU through Xen.
> >>>>> Xen would use the holes left in the IPA space for MMIO.
> >>>> I still think it is the toolstack which should do this; that's where
> >>>> these sorts of layout decisions belong.
> >> Can the xl tools read PCI config space?
> > Yes, via sysfs (possibly abstracted via libpci), just like lspci and
> > friends do.
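
[For illustration: the physical BAR ranges are exposed by Linux via the
standard sysfs "resource" file; the BDF below is just an example. A
minimal reader might look like this:]

    /* Print the physical BAR/resource ranges of one PCI device, the same
     * information lspci and friends get from sysfs. */
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:01:00.0/resource";
        FILE *f = fopen(path, "r");
        uint64_t start, end, flags;
        int i = 0;

        if (!f) { perror(path); return 1; }
        /* Each line is "<start> <end> <flags>"; all-zero lines are
         * unimplemented resources.  Entries 0-5 correspond to the BARs. */
        while (fscanf(f, "%" SCNx64 " %" SCNx64 " %" SCNx64,
                      &start, &end, &flags) == 3) {
            if (start)
                printf("resource %d: %#" PRIx64 "-%#" PRIx64 " flags %#" PRIx64 "\n",
                       i, start, end, flags);
            i++;
        }
        fclose(f);
        return 0;
    }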
> >
> >> Using some Xen hypercall or an xl-dom0 ioctl?
> > No, using normal pre-existing Linux functionality.
> >
> >> If not, then there is no other way but xenpciback.
> >>
> >> Also, I would need to introduce a hypercall which would tell the
> >> toolstack the available holes for virtual BAR mapping.
> >> Much simpler would be to let Xen allocate a virtual BAR and return it
> >> to the caller.
> >>> At init - sure. But what about when the guest is running and doing
> >>> those sorts of things? Unless you want guest -> pciback -> xenstore ->
> >>> libxl -> hypercall -> send ack on xenstore -> pciback -> guest.
> >>>
> >>> That would entail adding some pciback -> user-space tickle mechanism
> >>> and another one back. Much simpler to do all of this in xenpciback, I
> >>> think?
> >> I agree. If xenpciback sends a hypercall on every BAR read access, the
> >> mapping in Xen would already have been done, so Xen would simply be
> >> doing a PA->IPA lookup.
> >> No xenstore lookup is required.
> > The xenstore read would happen once on device attach, at the same time
> > you are reading the rest of the dev-NNN stuff relating to the just
> > attached device.
> >
> > Doing a xenstore transaction on every BAR read would indeed be silly, and
> > doing a hypercall would not be much better. There is no need for either
> > a xenstore read or a hypercall during the cfg space access itself; you
> > just read the value from a pciback data structure.
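
[To make that concrete: a sketch modelled loosely on the field handlers in
drivers/xen/xen-pciback/conf_space_header.c. The names and the exact hook
signature here are illustrative, not the actual driver code.]

    #include <linux/pci.h>
    #include <linux/types.h>

    /* Virtual BAR value cached at device-attach time (after the single
     * xenstore read); guest config-space reads just return it. */
    struct vbar_data {
        u32 vbar;   /* guest-visible BAR value chosen by the toolstack */
    };

    static int vbar_read(struct pci_dev *dev, int offset, u32 *value,
                         void *data)
    {
        struct vbar_data *vb = data;

        *value = vb->vbar;  /* no hypercall, no xenstore access here */
        return 0;
    }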
> >
> > Add to that the fact that any new hypercall made from dom0 needs to be
> > added as a stable interface, and I can't see any reason to go with such
> > a model.
> I think you are overlooking a point, which is: "From what region should
> the virtual BAR be allocated?"
> One way is for Xen to keep a hole for domains where the BAR regions can
> be mapped. This does not exist as of now.
> 
> How would the tools know about this hole?

I think you've overlooked the point that _only_ the tools know enough
about the overall guest address space layout to know about this hole.
Xen has no need to know anything about an MMIO hole; it is just told by
the toolstack what MMIO to map where.

The guest memory space layout is defined in
xen/include/public/arch-arm.h, and while Xen is aware of some aspects
(e.g. the vGIC addresses) it is the tools which are in charge of what
goes where in general (i.e. the tools place the RAM, load the kernel,
decide where the ramdisk and dtb should go, etc.).
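
For reference, that layout is expressed as a set of GUEST_* constants in
arch-arm.h. The excerpt below is purely illustrative of the style; the
values and the hypothetical passthrough hole are not authoritative, see
the header itself for the real definitions:

    /* Illustrative only -- consult xen/include/public/arch-arm.h for the
     * real addresses and names. */
    #define GUEST_GICD_BASE   0x03001000ULL   /* vGIC distributor */
    #define GUEST_GICC_BASE   0x03002000ULL   /* vGIC CPU interface */
    #define GUEST_RAM0_BASE   0x40000000ULL   /* first guest RAM bank */
    #define GUEST_RAM0_SIZE   0xc0000000ULL
    /* A passthrough MMIO hole would be one more GUEST_*_BASE/SIZE pair. */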

> Is a domctl required?
> For this reason I was suggesting a hypercall to Xen to map the physical
> BARs and return the virtual BARs.

An MMIO region should be defined in arch-arm.h; the tools can then
assign pieces of it to the BARs of devices being passed through, tell
pciback (via xenstore nodes written alongside the existing ones), and add
p2m mappings using xc_domain_memory_mapping.
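
As a minimal sketch of that toolstack flow (the MMIO hole base, the
xenstore node name and the helper itself are illustrative assumptions,
not an agreed interface; xc_domain_memory_mapping and xs_write are the
existing libxc/libxenstore calls):

    /* Illustrative toolstack-side sketch: place a passthrough BAR inside a
     * hypothetical guest MMIO hole, map it with xc_domain_memory_mapping(),
     * and tell pciback via a hypothetical xenstore node written next to the
     * existing backend/pci/<domid>/0/dev-N entries. */
    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <xenctrl.h>
    #include <xenstore.h>

    #define PAGE_SHIFT          12
    #define GUEST_PCI_MMIO_BASE 0x23000000UL  /* illustrative hole, not in arch-arm.h */

    int assign_bar(xc_interface *xch, struct xs_handle *xs, uint32_t domid,
                   uint64_t phys_bar, uint64_t size, uint64_t *next_free)
    {
        uint64_t virt_bar = *next_free;
        char path[128], val[64];
        int rc;

        /* Stage-2 mapping, done once at device attach:
         * guest frames at virt_bar -> machine frames at phys_bar. */
        rc = xc_domain_memory_mapping(xch, domid,
                                      virt_bar >> PAGE_SHIFT,
                                      phys_bar >> PAGE_SHIFT,
                                      size >> PAGE_SHIFT,
                                      1 /* DPCI_ADD_MAPPING */);
        if (rc)
            return rc;

        /* Hypothetical node carrying the virtual BAR, read by pciback once
         * at attach time alongside the existing dev-N nodes. */
        snprintf(path, sizeof(path),
                 "/local/domain/0/backend/pci/%u/0/vbar-0-0", domid);
        snprintf(val, sizeof(val), "0x%" PRIx64 ",0x%" PRIx64, virt_bar, size);
        if (!xs_write(xs, XBT_NULL, path, val, strlen(val)))
            return -1;

        *next_free += size;
        return 0;
    }

where next_free would start at the chosen MMIO hole base (here the
illustrative GUEST_PCI_MMIO_BASE) and each BAR's size would be
page-aligned by the caller.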

Ian

