
Re: [Xen-devel] standalone PCI passthrough emulator



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> Sent: 05 March 2019 02:45
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel 
> (xen-devel@xxxxxxxxxxxxxxxxxxxx) <xen-
> devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: standalone PCI passthrough emulator
> 
> > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> > Sent: Monday, March 4, 2019 4:44 PM
> >
> > > -----Original Message-----
> > > From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> > > Sent: 04 March 2019 03:01
> > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel (xen-
> > devel@xxxxxxxxxxxxxxxxxxxx) <xen-
> > > devel@xxxxxxxxxxxxxxxxxxxx>
> > > Subject: RE: standalone PCI passthrough emulator
> > >
> > > > From: Paul Durrant
> > > > Sent: Saturday, March 2, 2019 12:41 AM
> > > >
> > > > Hi,
> > > >
> > > >   As the basis of some future development work I've put together a
> > > > simple standalone emulator to pass through a single type 0 PCI
> > > > function to a guest, so I'm posting here in case anyone else would
> > > > like to give it a try. So far I've tested with AMD FirePro S7150
> > > > and NVIDIA K1 GPUs and a Windows 10 guest, so it hasn't had that
> > > > much debugging.
> > >
> > > How is this different from the existing PCI passthrough support in
> > > Xen? What exactly is emulated here?
> > >
> >
> > Essentially it does no more than the current code in QEMU, but that
> > code has become very complex and hard to follow over the years. It's
> > full of magic mask values and I've found at least two pieces of
> > completely dead code whilst looking at it. So, I started this work to
> > provide a small, simple base on which to experiment with using VFIO,
> > rather than the existing sysfs node accesses and xenctrl calls.
> 
> Thanks for the explanation. I took a quick look at the current repo. It
> looks like VFIO support is not added yet, correct?

Yes, as a first step I wanted to duplicate the xenctrl calls used by QEMU and 
get things going with those. Then I have a base from which I can start to 
replace things with calls into VFIO.

> To enable VFIO in Xen, I suppose there
> will be several major changes:
> 
> 1. enable your pvIOMMU driver in VFIO, and it needs to be a full-fledged
> flavor, i.e. supporting per-device remapping capability;
> 

I was thinking about this yesterday... I think the proposed hypercall 
interface needs to change: we should have the ability to create an IOMMU 
group (i.e. the same concept that VFIO has) for a VM, assign devices to 
groups, and then allow the guest OS to map and unmap pages in those groups. 
Perhaps we make the current hardcoded Xen mappings into 'group 0' (which 
the guest is not allowed to manipulate, apart from maybe the OS in the h/w 
domain) and have devices assigned to that by default. They could then be 
transferred into other groups by new hypercalls.

> 2. make VFIO aware of foreign pages when doing accounting at map/unmap
> operations;
> 

The hypercalls I'd already proposed should cover that; they take a domid 
and gfn/gref as arguments, and take both page and type refs, so I think we 
should be ok there.

> 3. what would Xen device passthrough look like then? It looks like it
> becomes a hybrid model, with some passthrough roles delegated to Dom0
> VFIO while other roles, like the real IOMMU page tables, interrupt
> handling, etc., are still kept inside Xen.
> 

I guess so... not clear. I assume we need to issue any trapped I/O to the 
device via VFIO so that we can take advantage of MDEV, but we still want the 
ability for I/O to go directly for pass-through resources. I'd like to work 
towards a unified (kvm + xen) control plane via VFIO with hypervisor specific 
detail kept inside the vfio driver as far as possible.

> > To answer your other question... It's config space that is emulated, as
> > it has to be to deal with BAR address and interrupt translation. Note,
> > there is also a slight advantage in using multiple discrete emulators:
> > emulated I/O can proceed in parallel for multiple vcpus, whereas
> > emulation on behalf of Xen by QEMU is still restricted by a single
> > poll/select loop for all vcpus.
> >
> 
> thanks to the introduction of ioreq server. :-)

You're welcome :-)

  Paul

> 
> Thanks
> Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

