
Re: [Xen-devel] Discussion about virtual iommu support for Xen guest



Hi Stefano, Andrew and Jan:
Could you give us more guidance here on moving virtual IOMMU development forward? Thanks.

On 6/29/2016 11:04 AM, Tian, Kevin wrote:
From: Lan, Tianyu
Sent: Sunday, June 26, 2016 9:43 PM

On 6/8/2016 4:11 PM, Tian, Kevin wrote:
That makes sense... I thought you were using this security issue as an
argument against placing the vIOMMU in Qemu, which confused me a bit earlier. :-)

We are still evaluating the feasibility of a staging plan, e.g. first
implementing some vIOMMU features without a dependency on the root complex
in Xen (HVM only) and then later enabling the full vIOMMU feature with the
root complex in Xen (covering HVMLite). If we can reuse most code between
the two stages while cutting time-to-market in half (e.g. from 2yr to 1yr),
it's still worth pursuing. Will report back soon once the idea is
consolidated...

Thanks Kevin


After discussion with Kevin, we have drafted a staging plan for implementing
the vIOMMU in Xen based on the Qemu host bridge. Both virtual devices and
passthrough devices use the one vIOMMU in Xen. Your comments are much
appreciated.

The rationale here is to separate the BIOS structures from the actual vIOMMU
emulation. The vIOMMU will always be emulated in the Xen hypervisor, regardless
of where Q35 emulation is done or whether the guest is HVM or HVMLite. The
staging plan is more about the BIOS structure reporting, which is Q35 specific.
For now we first target Qemu's Q35 emulation, with a set of vIOMMU ops (listed
by Tianyu below) introduced to mediate between Qemu and Xen. Later, when
Xen's Q35 emulation is ready, the reporting can be done in Xen.

The main limitation of this model is the DMA emulation of Qemu virtual
devices, which must query the Xen vIOMMU for every virtual DMA. That is
probably acceptable for virtual devices, which are normally not used for
performance-critical workloads. There may also be a chance to cache some
translations within Qemu, e.g. via an ATS-like mechanism (though it may not
be worth it...).
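Just to illustrate the per-access cost, the Qemu-side path could look
roughly like below. xen_viommu_translate() is a hypothetical wrapper
around the new hypercall, cpu_physical_memory_rw() is Qemu's existing
physmap accessor, and page-crossing splits are omitted:

#include "qemu/osdep.h"
#include "exec/cpu-common.h"

/* Hypothetical per-access translation in Qemu's DMA path: every DMA
 * issued by an emulated device traps to Xen for an IOVA -> GPA lookup. */
static int xen_viommu_dma_rw(uint16_t bdf, uint64_t iova,
                             void *buf, size_t len, bool is_write)
{
    uint64_t gpa;
    int rc;

    /* One hypercall per access; an ATS-like IOTLB cache could sit here. */
    rc = xen_viommu_translate(bdf, iova, &gpa); /* hypothetical wrapper */
    if (rc < 0)
        return rc;  /* translation fault, to be reported via the vIOMMU */

    /* Hand the translated address to the normal physmap accessor. */
    cpu_physical_memory_rw(gpa, buf, len, is_write);
    return 0;
}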


1. Enable Q35 support in hvmloader.
On real hardware, VT-d support starts with Q35, and an OS may assume
that VT-d only exists on Q35 or newer platforms.
Q35 support therefore seems necessary for vIOMMU support.

Regardless of whether the Q35 host bridge lives in Qemu or in the Xen
hypervisor, hvmloader needs to be compatible with Q35 and build the Q35 ACPI tables.

Qemu already has Q35 emulation, so the hvmloader work can start against
Qemu. When the host bridge in Xen is ready, these changes can be reused.
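To make the new ACPI content concrete, the DMAR bits hvmloader would emit
could look roughly like below. The field layout follows the VT-d spec, but
the struct/function names are illustrative rather than existing hvmloader
code, and the register base comes from whatever Xen advertises (e.g. via
hvm_info_table):

#include <stdint.h>
#include <string.h>

/* DMAR table body, after the standard 36-byte ACPI header. */
struct dmar_header {
    struct acpi_header header;   /* signature "DMAR", as for other tables */
    uint8_t  host_address_width; /* address width minus one, per spec */
    uint8_t  flags;              /* bit 0: interrupt remapping supported */
    uint8_t  reserved[10];
};

/* One DRHD covering every device: matches the single-vIOMMU model. */
struct dmar_drhd {
    uint16_t type;               /* 0 = DRHD */
    uint16_t length;
    uint8_t  flags;              /* bit 0: INCLUDE_PCI_ALL */
    uint8_t  reserved;
    uint16_t segment;
    uint64_t base_address;       /* vIOMMU register base, from Xen */
};

static void build_dmar(struct dmar_header *dmar, uint64_t viommu_base)
{
    struct dmar_drhd *drhd = (struct dmar_drhd *)(dmar + 1);

    memset(dmar, 0, sizeof(*dmar) + sizeof(*drhd));
    /* ... fill the ACPI header and checksum as for the other tables ... */
    dmar->host_address_width = 38;   /* i.e. a 39-bit guest address space */
    dmar->flags = 1;                 /* advertise interrupt remapping */

    drhd->type = 0;                  /* DRHD */
    drhd->length = sizeof(*drhd);    /* no device scope: INCLUDE_PCI_ALL */
    drhd->flags = 1;                 /* scope covers all PCI devices */
    drhd->base_address = viommu_base;
}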

2. Implement the vIOMMU in Xen based on the Qemu host bridge.
Add a new device type "Xen iommu" in Qemu as a wrapper around the vIOMMU
hypercalls, used to communicate with the Xen vIOMMU (see the sketch after
the list below).

It is in charge of:
1) Querying vIOMMU capabilities (e.g. interrupt remapping, DMA translation,
SVM and so on)
2) Creating the vIOMMU with a predefined base address for the IOMMU unit
registers
3) Notifying hvmloader to populate the related content in the ACPI DMAR
table (add the vIOMMU info to struct hvm_info_table)
4) Handling DMA translation requests from virtual devices and returning
the translated address
5) Attaching/detaching hotplug devices to/from the vIOMMU
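A rough sketch of the wrapper device's init path, to make 1) and 2)
concrete. xc_viommu_query_caps()/xc_viommu_create() would be new,
currently hypothetical libxenctrl wrappers, and VIOMMU_REG_BASE / the
VIOMMU_CAP_* bits are made-up constants; 0xfed90000 is the usual Q35
VT-d register location:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/qdev-core.h"
#include "hw/xen/xen_common.h"          /* xen_xc, xen_domid */

#define VIOMMU_REG_BASE  0xfed90000ULL  /* usual Q35 VT-d location */

static void xen_viommu_realize(DeviceState *dev, Error **errp)
{
    uint64_t caps;

    /* 1) Query what the Xen vIOMMU can do. */
    if (xc_viommu_query_caps(xen_xc, xen_domid, &caps) < 0) {
        error_setg(errp, "Xen vIOMMU not available");
        return;
    }

    /* 2) Create it at the fixed base that hvmloader will report in the
     * DMAR table; 3) then amounts to passing that same base along. */
    if (xc_viommu_create(xen_xc, xen_domid, VIOMMU_REG_BASE,
                         caps & (VIOMMU_CAP_DMA | VIOMMU_CAP_INTREMAP)) < 0) {
        error_setg(errp, "failed to create Xen vIOMMU");
    }
}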


New hypercalls for the vIOMMU, which are also necessary once the host
bridge is in Xen:
1) Query vIOMMU capabilities
2) Create the vIOMMU (with the IOMMU unit register base as a parameter)
3) Translate a virtual device's DMA
4) Attach/detach a hotplug device to/from the vIOMMU

We don't need 4). Hotplugged devices are automatically handled by a vIOMMU
with the INCLUDE_ALL flag set (which should be the case if we only have one
vIOMMU in Xen), so there is no need to further notify the Xen vIOMMU of
this event.

And once we have Xen Q35 emulation in place, possibly only 3) will be
required.
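For concreteness, one possible shape for that interface as a single
multiplexed hypercall. All names, sub-op numbers and fields below are
illustrative only; the real ABI would be settled in the design doc:

#include <stdint.h>

/* Illustrative sub-ops matching 1)-3) above; 4) is dropped per the
 * INCLUDE_ALL argument. */
#define XEN_VIOMMU_OP_query_caps  0
#define XEN_VIOMMU_OP_create      1
#define XEN_VIOMMU_OP_dma_xlate   2

struct xen_viommu_op {
    uint32_t subop;              /* XEN_VIOMMU_OP_* */
    uint32_t viommu_id;          /* OUT from create, IN elsewhere */
    union {
        struct {
            uint64_t caps;       /* OUT: DMA, intr remapping, SVM bits */
        } query_caps;
        struct {
            uint64_t reg_base;   /* IN: IOMMU unit register base */
            uint64_t caps;       /* IN: capabilities to enable */
        } create;
        struct {
            uint32_t sbdf;       /* IN: source id of the emulated device */
            uint64_t iova;       /* IN: DMA address used by the guest */
            uint64_t gpa;        /* OUT: translated guest-physical address */
        } dma_xlate;
    } u;
};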



All IOMMU emulation will be done in Xen:
1) DMA translation
2) Interrupt remapping (see the sketch below)
3) Shared Virtual Memory (SVM)
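To give a flavour of 2), the Xen-side emulation would roughly decode a
remappable-format MSI, look up the guest's interrupt remapping table, and
inject the result. The IRTE layout and address-bit decoding follow the
VT-d spec; viommu_read_irte() is a made-up helper, the SHV subhandle case
is skipped, and delivery reuses Xen's existing vmsi_deliver() path:

#include <xen/sched.h>   /* struct domain */
#include <xen/errno.h>

/* IRTE layout per the VT-d spec (remapped-interrupt format). */
struct irte {
    uint64_t p:1, fpd:1, dm:1, rh:1, tm:1, dlm:3, avail:4, rsvd1:4,
             vector:8, rsvd2:8, dest:32;
    uint64_t sid:16, sq:2, svt:2, rsvd3:44;
};

static int viommu_remap_msi(struct domain *d, uint64_t addr, uint32_t data)
{
    unsigned int handle;
    struct irte irte;

    if ( !(addr & (1u << 4)) )        /* Interrupt Format bit */
        return -EINVAL;               /* compatibility format, not remapped */

    /* handle[14:0] in addr[19:5], handle[15] in addr[2]; 'data' would
     * carry the subhandle when SHV is set, ignored in this sketch. */
    handle = ((addr >> 5) & 0x7fff) | (((addr >> 2) & 1) << 15);

    if ( viommu_read_irte(d, handle, &irte) || !irte.p )
        return -EFAULT;               /* raise a fault event per spec */

    vmsi_deliver(d, irte.vector, irte.dest, irte.dm, irte.dlm, irte.tm);
    return 0;
}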

Please let us know your thoughts. If no one objects explicitly to the rough
idea above, we'll go ahead and write the high-level design doc for more
detailed discussion.

Thanks
Kevin


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

