
Re: [Xen-devel] Current PVH/HVMlite work and planning (was: Re: Discussion about virtual iommu support for Xen guest)



On Fri, 3 Jun 2016, Tian, Kevin wrote:
> > From: Roger Pau Monne [mailto:roger.pau@xxxxxxxxxx]
> > Sent: Friday, June 03, 2016 7:53 PM
> > 
> > On Fri, Jun 03, 2016 at 11:21:20AM +0000, Tian, Kevin wrote:
> > > > From: Roger Pau Monne [mailto:roger.pau@xxxxxxxxxx]
> > > > Sent: Friday, June 03, 2016 7:02 PM
> > > >
> > > > On Thu, Jun 02, 2016 at 07:58:48PM +0100, Andrew Cooper wrote:
> > > > > On 02/06/16 16:03, Lan, Tianyu wrote:
> > > > > > On 5/27/2016 4:19 PM, Lan Tianyu wrote:
> > > > > >> On 26/05/2016 19:35, Andrew Cooper wrote:
> > > > > >>> On 26/05/16 09:29, Lan Tianyu wrote:
> > > > > >>>
> > > > > >>> To be viable going forwards, any solution must work with
> > > > > >>> PVH/HVMLite as much as HVM.  This alone negates qemu as a
> > > > > >>> viable option.
> > > > > >>>
> > > > > >>> From a design point of view, having Xen needing to delegate to
> > > > > >>> qemu to inject an interrupt into a guest seems backwards.
> > > > > >>>
> > > > > >>
> > > > > >> Sorry, I am not familiar with HVMlite. HVMlite doesn't use Qemu,
> > > > > >> so the qemu virtual iommu can't work for it. We would have to
> > > > > >> rewrite the virtual iommu in Xen, right?
> > > > > >>
> > > > > >>>
> > > > > >>> A whole lot of this would be easier to reason about if/when we
> > > > > >>> get a basic root port implementation in Xen, which is necessary
> > > > > >>> for HVMLite, and which will make the interaction with qemu
> > > > > >>> rather more clean.  It is probably worth coordinating work in
> > > > > >>> this area.
> > > > > >>
> > > > > >> The virtual iommu should also be built on top of that basic root
> > > > > >> port implementation in Xen, right?
> > > > > >>
> > > > [...]
> > > > > > What's the progress of the PCI host bridge in Xen? In your
> > > > > > opinion, we should do that first, right? Thanks.
> > > > >
> > > > > Very sorry for the delay.
> > > > >
> > > > > There are multiple interacting issues here.  On the one side, it would
> > > > > be useful if we could have a central point of coordination on
> > > > > PVH/HVMLite work.  Roger - as the person who last did HVMLite work,
> > > > > would you mind organising that?
> > > >
> > > > Sure. Adding Boris and Konrad.
> > > >
> > > > AFAIK, the current status is that Boris posted an RFC to provide some
> > > > basic ACPI tables to PVH/HVMlite guests, and I'm currently working on
> > > > rebasing my half-baked HVMlite Dom0 series on top of that. Neither of
> > > > those two projects requires the presence of an emulated PCI root
> > > > complex inside of Xen, so there's nobody working on it ATM that I'm
> > > > aware of.
> > > >
> > > > Speaking about the PVH/HVMlite roadmap, after those two items are done
> > > > we plan to work on having full PCI root complex emulation inside of
> > > > Xen, so that we could do passthrough of PCI devices to PVH/HVMlite
> > > > guests without QEMU (and of course without pcifront inside of the
> > > > guest). I don't foresee any of us working on it for at least the next
> > > > 6 months, so I think there's a good chance that this can be done in
> > > > parallel to the work that Boris and I are doing, without any clashes.
> > > > Is anyone at Intel interested in picking this up?
> > >
> > > How stable is HVMlite today? Is it already in production use?
> > >
> > > I wonder whether you have any detailed thoughts on how full PCI root
> > > complex emulation would be done in Xen (including how it would interact
> > > with Qemu)...
> > 
> > I haven't looked into this in much detail, since as I said it's still a
> > little bit far away on the PVH/HVMlite roadmap; we have more pressing
> > issues to solve before getting to the point of implementing PCI
> > passthrough. I expect Xen is going to intercept all PCI accesses and
> > then forward them to the ioreq servers that have been registered for
> > that specific config space, but this of course needs much more thought
> > and a proper design document.
> > 
> > > As I just wrote in another mail, if we just aim for HVM first, would it
> > > work if we implement the vIOMMU in Xen but still rely on the Qemu root
> > > complex to report it to the guest?
> > 
> > This seems quite inefficient IMHO (but I don't know that much about all
> > this vIOMMU stuff). If you implement the vIOMMU inside of Xen, but the
> > PCI root complex is inside of Qemu, aren't you going to perform quite a
> > lot of jumps between Xen and QEMU just to access the vIOMMU?
> > 
> > I expect something like:
> > 
> > Xen traps PCI access -> QEMU -> Xen vIOMMU implementation
> > 
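
As a point of reference for the interception part Roger describes, the
existing ioreq-server interface already lets a device model claim a single
device's config space so that Xen forwards matching accesses to it. Below is
a minimal sketch from the device model side, using the libxenctrl calls as I
remember them; the exact names and signatures should be checked against
xenctrl.h:

#include <xenctrl.h>

/*
 * Sketch only: create an ioreq server for a guest and ask Xen to forward
 * config-space accesses for one PCI device (SBDF) to it.
 */
static int claim_pci_device(xc_interface *xch, domid_t domid,
                            uint16_t seg, uint8_t bus,
                            uint8_t dev, uint8_t fn)
{
    ioservid_t id;
    int rc;

    /* Allocate an ioreq server; Xen routes matching requests to it. */
    rc = xc_hvm_create_ioreq_server(xch, domid, 0 /* no bufioreq */, &id);
    if (rc < 0)
        return rc;

    /* Claim config space for this SBDF only. */
    rc = xc_hvm_map_pcidev_to_ioreq_server(xch, domid, id,
                                           seg, bus, dev, fn);
    if (rc < 0)
        goto fail;

    /* Start receiving requests. */
    rc = xc_hvm_set_ioreq_server_state(xch, domid, id, 1);
    if (rc < 0)
        goto fail;

    return 0;

 fail:
    xc_hvm_destroy_ioreq_server(xch, domid, id);
    return rc;
}

An in-Xen root complex would presumably keep a similar per-SBDF
registration, with Xen itself handling anything not claimed by an ioreq
server.
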
> 
> I hope the role of Qemu is just to report vIOMMU-related information (such
> as the ACPI DMAR table) so the guest can enumerate the presence of the
> vIOMMU, while the actual emulation is done by the vIOMMU in the hypervisor
> without going through Qemu.
> 
> However, I just realized that even for the above purpose there's still some
> interaction required between Qemu and the Xen vIOMMU: e.g. the register
> base of the vIOMMU and the devices behind it are reported through the ACPI
> DRHD structure, which means the Xen vIOMMU needs to know the configuration
> in Qemu. It might be dirty to define such interfaces between Qemu and the
> hypervisor. :/
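
To make that interaction concrete: the DRHD essentially only encodes the
register base, some capability bits and the device scope, so the information
crossing the Qemu/Xen boundary could be as small as something like the
following. This is purely hypothetical, nothing like it exists today:

#include <stdint.h>

/* Hypothetical interface: QEMU tells Xen where it has placed the vIOMMU
 * and which devices sit behind it, so that the DRHD it builds into the
 * guest's ACPI tables matches what Xen actually emulates. */
struct viommu_sbdf {
    uint16_t seg;            /* PCI segment */
    uint8_t  bus;
    uint8_t  devfn;
};

struct viommu_create {
    uint64_t reg_base;       /* MMIO base of the vIOMMU register block */
    uint32_t capabilities;   /* features to expose to the guest */
    uint32_t nr_devices;     /* entries in devices[] */
    struct viommu_sbdf devices[];  /* DRHD device scope */
};

The register emulation itself would then stay entirely inside Xen, as Kevin
suggests.
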

PCI accesses don't need to be particularly fast; they should not be on
the hot path.

How bad would this interface between QEMU and the vIOMMU in Xen actually
look? Can we make a short list of the basic operations that we would need
to support, to get a clearer idea?
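
To start that list off, a strawman based only on the split described above
(all the actual emulation staying in Xen):

/* Strawman only; no such hypercalls exist today. */
enum viommu_op {
    VIOMMU_OP_create,       /* instantiate a vIOMMU; fixes its register
                               base, capabilities and initial scope */
    VIOMMU_OP_destroy,
    VIOMMU_OP_add_device,   /* place a (seg, bus, devfn) under its scope */
    VIOMMU_OP_del_device,
};

Everything on the data path would remain inside Xen and should not need to
cross into QEMU at all.
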
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel