
Re: [Xen-devel] [PATCH 5/25] Xen/doc: Add Xen virtual IOMMU doc



Hi,

On 07/12/2017 04:09 AM, Lan Tianyu wrote:
On 2017-07-08 00:08, Julien Grall wrote:
Because we now just have one vIOMMU, all virtual interrupts will be bound
to it. If we need to support multiple vIOMMUs, we can add a device-scope
field (an sbdf array or something like that) to the structure and specify
which devices should be under each vIOMMU.

I am not sure I follow the argument here. Even if you have only one
vIOMMU, you need to be able to establish the correspondence between the
virtual MasterID (for PCI it is based on the RID) and the host MasterID.
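
To make that correspondence concrete, here is a rough sketch of what a
per-vIOMMU device-scope field plus a virtual-to-host MasterID lookup could
look like. All structure and function names below are made up for
illustration; nothing here is taken from the series.

/* Illustrative only: hypothetical names, not from the posted patches. */
#include <stdint.h>

struct viommu_dev_scope {
    uint32_t vsbdf;          /* virtual segment:bus:dev.fn seen by the guest */
    uint32_t host_master_id; /* host MasterID reported by the firmware tables */
};

struct viommu {
    unsigned int id;
    unsigned int nr_devs;
    struct viommu_dev_scope *scope; /* devices under this vIOMMU */
};

/* Map a guest-visible RID/vsbdf to the host MasterID. */
static int viommu_lookup_master_id(const struct viommu *viommu,
                                   uint32_t vsbdf, uint32_t *master_id)
{
    unsigned int i;

    for ( i = 0; i < viommu->nr_devs; i++ )
    {
        if ( viommu->scope[i].vsbdf == vsbdf )
        {
            *master_id = viommu->scope[i].host_master_id;
            return 0;
        }
    }

    return -1; /* device is not under this vIOMMU */
}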


      Sorry for the late response.
      The MasterID you mentioned here is the sbdf, right? Binding between
the sbdf and the vsbdf (virtual sbdf) should be done in the device
pass-through interface (e.g. xc_domain_bind_pt_irq_int() already does
something similar, binding a vsbdf to a real interrupt in the hypervisor).

The MasterID is not the sbdf. It is an identifier based on the tuple (hostbridge, Requester ID). The Requester ID (RID) might be the bdf of the device, or something different if there are DMA aliases.

The relation between MasterID and the tuple is defined by the hardware and will be reported by the firmware tables.
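
As a rough illustration of that point (again, the names are hypothetical
and not from the series): the key is the (hostbridge, RID) tuple, and the
RID may differ from the plain bdf when a DMA alias is involved.

/* Illustrative only: hypothetical names, not from the posted patches. */
#include <stdint.h>

struct master_id_key {
    unsigned int hostbridge; /* which host bridge the device sits behind */
    uint16_t rid;            /* Requester ID: the bdf, or an alias of it */
};

/* A real implementation would apply any DMA-alias quirk here to derive
 * the RID that goes into the key; stubbed out for the sketch. */
static uint16_t pci_requester_id(uint16_t bdf)
{
    return bdf;
}

/* The tuple -> MasterID relation is defined by the hardware and reported
 * by the firmware tables (e.g. IORT on ARM); this is only a placeholder. */
static uint32_t master_id_from_key(const struct master_id_key *key)
{
    return ((uint32_t)key->hostbridge << 16) | key->rid;
}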

      The vIOMMU device model can get the vsbdf when the guest configures
a vIOMMU entry, and the hypervisor can convert between the sbdf and the
vsbdf. For interrupt remapping on the virtual VT-d, we have not found such
a requirement so far, since we get enough data from the IOAPIC/MSI entries
and the interrupt remapping entries of the virtual VT-d. So we do not
extend the pass-through interface.

Well, you have to think about how this could be extended in the future. This is quite important to plan ahead for a stable ABI. Thankfully, you seem to use DOMCTL, so I guess we don't have to worry too much...
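
For what it's worth, here is a rough sketch of the kind of vsbdf <-> sbdf
binding the pass-through interface (or a future DOMCTL) could record, so
the hypervisor can do the conversion when the guest programs a vIOMMU
entry. All names are hypothetical and not taken from the series.

/* Illustrative only: hypothetical names, not from the posted patches. */
#include <stdint.h>

struct pt_dev_binding {
    uint32_t sbdf;  /* host segment:bus:dev.fn */
    uint32_t vsbdf; /* guest-visible segment:bus:dev.fn */
};

struct domain_pt_state {
    unsigned int nr_bindings;
    struct pt_dev_binding *bindings;
};

/* Convert the vsbdf a guest wrote into a vIOMMU entry back to the host
 * sbdf; returns 0 on success, -1 if the device is not passed through. */
static int vsbdf_to_sbdf(const struct domain_pt_state *pt,
                         uint32_t vsbdf, uint32_t *sbdf)
{
    unsigned int i;

    for ( i = 0; i < pt->nr_bindings; i++ )
    {
        if ( pt->bindings[i].vsbdf == vsbdf )
        {
            *sbdf = pt->bindings[i].sbdf;
            return 0;
        }
    }

    return -1;
}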

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

