
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Wed, 2015-06-10 at 15:21 -0400, Julien Grall wrote:
> Hi,
> 
> On 10/06/2015 08:45, Ian Campbell wrote:
> >> 4. DomU access / assignment PCI device
> >> --------------------------------------
> >> When a device is attached to a domU, provision has to be made such
> >> that it can access the MMIO space of the device and Xen is able to
> >> identify the mapping between the guest BDF and the system BDF. Two
> >> hypercalls are introduced
> >
> > I don't think we want/need new hypercalls here, the same existing
> > hypercalls which are used on x86 should be suitable. That's
> > XEN_DOMCTL_memory_mapping from the toolstack I think.
> 
> XEN_DOMCTL_memory_mapping is done by QEMU for x86 HVM when the guest
> (i.e. hvmloader?) writes to the PCI BAR.

What about for x86 PV? I think it is done by the toolstack there; I
don't know what pciback does with accesses to the BAR registers.
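
To make that concrete, here is a rough sketch of what "done by the
toolstack" could look like with the existing interface, assuming the
host BAR address and a chosen guest address are already known. Only
xc_domain_memory_mapping() and DPCI_ADD_MAPPING are real; the helper
around them is illustrative, not existing libxl code:

    #include <xenctrl.h>

    /*
     * Illustrative helper, not existing libxl/libxc code.  Maps a host
     * MMIO range (a BAR) into the guest at guest_addr via the existing
     * XEN_DOMCTL_memory_mapping path.
     */
    static int map_bar_to_guest(xc_interface *xch, uint32_t domid,
                                uint64_t guest_addr, uint64_t host_addr,
                                uint64_t size)
    {
        unsigned long first_gfn = guest_addr >> XC_PAGE_SHIFT;
        unsigned long first_mfn = host_addr  >> XC_PAGE_SHIFT;
        unsigned long nr_mfns   = size       >> XC_PAGE_SHIFT;

        return xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
                                        nr_mfns, DPCI_ADD_MAPPING);
    }

Whether that call is made at assignment time or only once the guest has
written the BAR is exactly the question above.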

> AFAIU, when the device is assigned to the guest, we don't yet know
> where the BAR will live in guest memory. It will be assigned by the
> guest (I wasn't able to find out whether Linux can do this).
> 
> As config space accesses will trap into pciback, we would need to map
> the physical memory into the guest from the kernel. A domain

These sorts of considerations/assumptions should be part of the document
IMHO.
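
For the record, the flow described above would be roughly the sketch
below. Both handle_guest_bar_write() and map_mmio_to_guest() are
hypothetical names, not anything that exists in xen-pciback today:

    #include <stdint.h>

    /*
     * Hypothetical: stands in for "ask Xen to update the stage-2
     * mapping", however that ends up being issued (a domctl from the
     * kernel, or punted to the toolstack).
     */
    int map_mmio_to_guest(uint32_t domid, uint64_t gfn, uint64_t mfn,
                          uint64_t nr_pages);

    /*
     * Hypothetical handler for a trapped config-space write which
     * updates a BAR.  Only at this point do we learn where the guest
     * placed the BAR, so only now can the MMIO range be mapped.
     */
    int handle_guest_bar_write(uint32_t domid, uint64_t host_bar_addr,
                               uint64_t bar_size, uint64_t new_guest_addr)
    {
        uint64_t gfn = new_guest_addr >> 12;
        uint64_t mfn = host_bar_addr  >> 12;
        uint64_t nr  = bar_size       >> 12;

        return map_mmio_to_guest(domid, gfn, mfn, nr);
    }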

> >> Xen adds the MMIO space to the stage-2 translation for domU. The
> >> restriction is that Xen creates a 1:1 mapping of the MMIO address.
> >
> > I don't think we need/want this restriction. We can define some
> > region(s) of guest memory to be an MMIO hole (by adding them to the
> > memory map in public/arch-arm.h).
> 
> Even if we decide to use a 1:1 mapping, this should not be exposed in
> the hypervisor interface (see the suggested physdev_map_mmio) and
> should instead be left to the discretion of the toolstack domain.
> 
> Beware that the 1:1 mapping doesn't fit with the current guest memory
> layout, which is pre-defined at Xen build time. So you would also have
> to make the layout dynamic or decide to use the same memory layout as
> the host.

I am fairly strongly against using a 1:1 mapping for passthrough MMIO
devices to guests, given the knock-on effects it implies, unless there
is a very strong reason why it must be the case, which should be
spelled out in detail in the document.
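
To be concrete about the alternative: suppose public/arch-arm.h grew a
guest MMIO hole alongside the existing guest memory map defines (the
names and values below are made up for illustration). The toolstack
then allocates guest addresses out of that hole, independent of where
the BARs happen to live on the host:

    #include <stdint.h>

    /* Hypothetical additions to public/arch-arm.h; illustrative only. */
    #define GUEST_MMIO_HOLE_BASE   0x10000000ULL
    #define GUEST_MMIO_HOLE_SIZE   0x10000000ULL

    /*
     * Toolstack-side allocator sketch.  The returned address is what
     * would become first_gfn for XEN_DOMCTL_memory_mapping; the MFN
     * comes from the real BAR, so nothing forces the two to be equal.
     */
    static uint64_t mmio_hole_next = GUEST_MMIO_HOLE_BASE;

    static uint64_t alloc_guest_mmio(uint64_t size)
    {
        uint64_t addr = mmio_hole_next;

        if (addr + size > GUEST_MMIO_HOLE_BASE + GUEST_MMIO_HOLE_SIZE)
            return 0;                      /* hole exhausted */
        mmio_hole_next = addr + size;
        return addr;
    }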

> > If there is a reason for this restriction/trade off then it should be
> > spelled out as part of the design document, as should other such design
> > decisions (which would include explaining where this differs from how
> > things work for x86 why they must differ).
> 
> On x86, for HVM the MMIO mapping is done by QEMU. I know that Roger is
> working on PCI passthrough for PVH. PVH is very similar to an ARM guest
> and I expect to see similar needs for MMIO mapping. It would be good if
> we could come up with a common interface.

Yes.
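
FWIW the obvious candidate for a common interface already exists: the
XEN_DOMCTL_memory_mapping payload in xen/include/public/domctl.h, which
(quoting from memory, so check the tree) looks roughly like:

    #define DPCI_ADD_MAPPING         1
    #define DPCI_REMOVE_MAPPING      0

    struct xen_domctl_memory_mapping {
        uint64_aligned_t first_gfn;   /* first guest frame in range   */
        uint64_aligned_t first_mfn;   /* first machine frame in range */
        uint64_aligned_t nr_mfns;     /* number of frames (> 0)       */
        uint32_t add_mapping;         /* DPCI_{ADD,REMOVE}_MAPPING    */
        uint32_t padding;
    };

Nothing in it looks x86-specific, so ARM and PVH could presumably share
it as-is.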


