
Re: [Xen-devel] Xen virtual IOMMU high level design doc





On 8/17/2016 8:42 PM, Paul Durrant wrote:
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
> Lan, Tianyu
> Sent: 17 August 2016 13:06
> To: Jan Beulich; Kevin Tian; Andrew Cooper; yang.zhang.wz@xxxxxxxxx;
> Jun Nakajima; Stefano Stabellini
> Cc: Anthony Perard; xuquan8@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx;
> Ian Jackson; Roger Pau Monne
> Subject: [Xen-devel] Xen virtual IOMMU high level design doc

> Hi All:
>       The following is our Xen vIOMMU high-level design, posted for
> detailed discussion. Please have a look; your comments are much
> appreciated. This design doesn't cover the changes needed when the
> root port is moved into the hypervisor. We may design that part later.


> Content:
> ===============================================================================
> 1. Motivation of vIOMMU
>         1.1 Enable more than 255 vcpus
>         1.2 Support VFIO-based user space driver
>         1.3 Support guest Shared Virtual Memory (SVM)
> 2. Xen vIOMMU Architecture
>         2.1 2nd level translation overview
>         2.2 Interrupt remapping overview
> 3. Xen hypervisor
>         3.1 New vIOMMU hypercall interface

> Would it not have been better to build on the previously discussed (and mostly
> agreed) PV IOMMU interface? (See
> https://lists.xenproject.org/archives/html/xen-devel/2016-02/msg01428.html). An
> RFC implementation series was also posted
> (https://lists.xenproject.org/archives/html/xen-devel/2016-02/msg01441.html).
>
>   Paul


Hi Paul:
Thanks for your input. I glanced through the patchset; it introduces a
new hypercall, HYPERVISOR_iommu_op, which currently works only for the
PV IOMMU. We could abstract it so that it works for both the PV IOMMU
and the virtual IOMMU.
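
For discussion, here is a rough sketch of what such an abstraction
might look like. All names below (the IOMMU_TYPE_* constants, the
subop numbers, and the struct layout) are illustrative only, not taken
from the posted RFC:

/*
 * Sketch of a HYPERVISOR_iommu_op interface abstracted over both the
 * PV IOMMU and the emulated vIOMMU. Illustrative only; names and
 * layout are not from the posted patchset.
 */
#include <stdint.h>

/* Which IOMMU backend a batch of ops targets. */
#define IOMMU_TYPE_PV       0  /* PV IOMMU for PV guests/driver domains */
#define IOMMU_TYPE_VIRTUAL  1  /* emulated vIOMMU exposed to HVM guests */

/* Subops shared by both backends. */
#define IOMMUOP_query_capabilities  1
#define IOMMUOP_map_page            2
#define IOMMUOP_unmap_page          3

struct xen_iommu_op {
    uint16_t subop;       /* IOMMUOP_* */
    uint16_t iommu_type;  /* IOMMU_TYPE_*, selects the backend */
    int32_t  status;      /* out: per-op error code */
    union {
        struct {
            uint64_t gfn;    /* guest frame to map */
            uint64_t dfn;    /* IOVA/device frame to map it at */
            uint32_t flags;  /* e.g. read/write permissions */
        } map_page;
        struct {
            uint64_t dfn;    /* IOVA/device frame to unmap */
        } unmap_page;
    } u;
};

/*
 * A single hypervisor entry point, e.g.
 *
 *     long do_iommu_op(XEN_GUEST_HANDLE_PARAM(xen_iommu_op_t) ops,
 *                      unsigned int count);
 *
 * could then dispatch on iommu_type, routing PV requests to the host
 * IOMMU driver and virtual IOMMU requests to the emulation layer.
 */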




 

