
Re: [Xen-devel] Xen virtual IOMMU high level design doc V3



On Mon, 21 Nov 2016, Julien Grall wrote:
> On 21/11/2016 02:21, Lan, Tianyu wrote:
> > On 11/19/2016 3:43 AM, Julien Grall wrote:
> > > On 17/11/2016 09:36, Lan Tianyu wrote:
> > Hi Julien:
> 
> Hello Lan,
> 
> >     Thanks for your input. This interface is just for virtual PCI
> > devices and is called by QEMU. I am not familiar with ARM. Are there
> > any non-PCI emulated devices for ARM in QEMU that need to be covered
> > by vIOMMU?
> 
> We don't use QEMU on ARM so far, so I guess it should be ok for now. ARM
> guests are very similar to hvmlite/pvh. I got confused and thought this design
> document was targeting pvh too.
> 
> BTW, in the design document you mention hvmlite/pvh. Does that mean you
> plan to add vIOMMU support for those guests later on?

I quickly went through the document. I don't think we should restrict
the design to only one caller: QEMU. In fact, it looks like those
hypercalls could, without any modifications, be called from the
toolstack (xl/libxl) in the case of PVH guests. In other words, PVH
guests might work without any additional effort on the hypervisor
side.

And they might even work on ARM. I have a couple of suggestions to make
the hypercalls a bit more "future-proof" and architecture-agnostic.

Imagine a future where two vIOMMU versions are supported. We could have
a uint32_t iommu_version field to identify which version of vIOMMU we
are creating (in the create_iommu and query_capabilities commands).
This could be useful even on Intel platforms.
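For instance, a versioned create_iommu command could look something
like this (the struct and names below are only a sketch to illustrate
the idea, not a concrete proposal):

  /* Illustrative only: all names here are hypothetical */
  #define XEN_VIOMMU_VERSION_INTEL_VTD  1

  struct xen_viommu_create {
      uint32_t iommu_version;  /* which vIOMMU flavour to instantiate */
      uint64_t base_address;   /* guest address of the vIOMMU registers */
      uint64_t capabilities;   /* requested capability bits */
  };

query_capabilities could take the same iommu_version field, so that the
caller can probe each supported version separately.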

Given that in the future we might support a vIOMMU that takes IDs other
than sbdf as input, I would change "u32 vsbdf" into the following:

  #define XENVIOMMUSPACE_vsbdf  0   /* 'id' is a virtual segment:bus:dev.fn */
  uint16_t space;                   /* XENVIOMMUSPACE_*: how to interpret 'id' */
  uint64_t id;                      /* device identifier within that space */
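
Putting the two suggestions together, a per-device sub-command could
look something like this (again, only a sketch with made-up names, not
a concrete proposal):

  struct xen_viommu_add_device {
      uint32_t viommu_id;   /* instance returned by create_iommu */
      uint16_t space;       /* XENVIOMMUSPACE_* */
      uint64_t id;          /* e.g. the vsbdf for XENVIOMMUSPACE_vsbdf */
  };

A caller targeting today's vIOMMU would set space to
XENVIOMMUSPACE_vsbdf and put the virtual sbdf in id, while a new
architecture would only need to define a new XENVIOMMUSPACE_* value
rather than a new hypercall.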
