
Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.



> From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> Sent: Tuesday, December 09, 2014 7:44 PM
> 
> > -----Original Message-----
> > From: Ian Campbell
> > Sent: 09 December 2014 11:29
> > To: Paul Durrant
> > Cc: Tim (Xen.org); Yu, Zhang; Kevin Tian; Keir (Xen.org); JBeulich@xxxxxxxx;
> > Xen-devel@xxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.
> >
> > On Tue, 2014-12-09 at 11:17 +0000, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Ian Campbell
> > > > Sent: 09 December 2014 11:11
> > > > To: Paul Durrant
> > > > Cc: Tim (Xen.org); Yu, Zhang; Kevin Tian; Keir (Xen.org); JBeulich@xxxxxxxx;
> > > > Xen-devel@xxxxxxxxxxxxx
> > > > Subject: Re: [Xen-devel] One question about the hypercall to translate gfn to mfn.
> > > >
> > > > On Tue, 2014-12-09 at 11:05 +0000, Paul Durrant wrote:
> > > > > > -----Original Message-----
> > > > > > From: Tim Deegan [mailto:tim@xxxxxxx]
> > > > > > Sent: 09 December 2014 10:47
> > > > > > To: Yu, Zhang
> > > > > > Cc: Paul Durrant; Keir (Xen.org); JBeulich@xxxxxxxx; Kevin Tian;
> > > > > > Xen-devel@xxxxxxxxxxxxx
> > > > > > Subject: Re: One question about the hypercall to translate gfn to mfn.
> > > > > >
> > > > > > At 18:10 +0800 on 09 Dec (1418145055), Yu, Zhang wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > >    As you can see, we are pushing our XenGT patches upstream. One
> > > > > > > feature we need in Xen is to translate guests' gfn to mfn in the
> > > > > > > XenGT dom0 device model.
> > > > > > >
> > > > > > >    Here we may have 2 similar solutions:
> > > > > > >    1> Paul told me (and thank you, Paul :)) that there used to be a
> > > > > > > hypercall, XENMEM_translate_gpfn_list, which was removed by Keir in
> > > > > > > commit 2d2f7977a052e655db6748be5dabf5a58f5c5e32 because there was
> > > > > > > no usage at that time.
> > > > > >
> > > > > > It's been suggested before that we should revive this hypercall,
> > > > > > and I don't think it's a good idea.  Whenever a domain needs to know
> > > > > > the actual MFN of another domain's memory it's usually because the
> > > > > > security model is problematic.  In particular, finding the MFN is
> > > > > > usually followed by a brute-force mapping from a dom0 process, or by
> > > > > > passing the MFN to a device for unprotected DMA.
> > > > > >
> > > > > > These days DMA access should be protected by IOMMUs, or else
> > > > > > the device drivers (and associated tools) are effectively inside the
> > > > > > hypervisor's TCB.  Luckily on x86 IOMMUs are widely available (and
> > > > > > presumably present on anything new enough to run XenGT?).
> > > > > >
> > > > > > So I think the interface we need here is a please-map-this-gfn one,
> > > > > > like the existing grant-table ops (which already do what you need by
> > > > > > returning an address suitable for DMA).  If adding a grant entry for
> > > > > > every frame of the framebuffer within the guest is too much, maybe
> > > > > > we can make a new interface for the guest to grant access to larger
> > > > > > areas.
> > > > > >
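To illustrate the point about grant ops already returning a DMA-suitable
address: a backend that maps a grant with GNTMAP_device_map gets a bus
address back in dev_bus_addr. A minimal sketch, with error handling trimmed
and the OS-specific hypercall wrapper assumed (gref and otherdom are inputs
the caller already holds):

    /* Needs the public grant_table.h definitions plus the OS's hypercall
     * wrapper (e.g. HYPERVISOR_grant_table_op() as provided on Linux). */

    /* Map one granted frame for both CPU and device access. */
    static int map_grant_for_dma(domid_t otherdom, grant_ref_t gref,
                                 unsigned long host_va, uint64_t *bus_addr,
                                 grant_handle_t *handle)
    {
        struct gnttab_map_grant_ref op = {
            .host_addr = host_va,                        /* where to map it  */
            .flags     = GNTMAP_host_map | GNTMAP_device_map,
            .ref       = gref,                           /* grant from guest */
            .dom       = otherdom,                       /* granting domain  */
        };

        if ( HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1) )
            return -1;
        if ( op.status != GNTST_okay )
            return op.status;

        *handle   = op.handle;         /* needed later for the unmap */
        *bus_addr = op.dev_bus_addr;   /* the address suitable for DMA */
        return 0;
    }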
> > > > >
> > > > > IIUC the in-guest driver is Xen-unaware, so any grant entry would have
> > > > > to be put in the guest's table by the tools, which would entail some
> > > > > form of flexibly sized reserved range of grant entries; otherwise any
> > > > > PV drivers present in the guest would merrily clobber the new
> > > > > grant entries.
> > > > > A domain can already priv-map a gfn into the MMU, so I think we just
> > > > > need an equivalent for the IOMMU.
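(The existing priv-map Paul mentions is what dom0 already does through
privcmd/libxc; a rough sketch of the MMU side, assuming the libxc of that
era with xc_map_foreign_range():)

    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Map one 4k frame of guest 'domid' at guest frame 'gfn' into this
     * dom0 process's address space (MMU only; no IOMMU equivalent today). */
    static void *map_guest_frame(xc_interface *xch, uint32_t domid,
                                 unsigned long gfn)
    {
        return xc_map_foreign_range(xch, domid, 4096,
                                    PROT_READ | PROT_WRITE, gfn);
    }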
> > > >
> > > > I'm not sure I'm fully understanding what's going on here, but is a
> > > > variant of XENMEM_add_to_physmap+XENMAPSPACE_gmfn_foreign
> > > > which also returns a DMA handle a plausible solution?
> > > >
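To make that concrete: something shaped like the existing
XENMEM_add_to_physmap_batch argument, but with an extra output array.
Everything below is purely hypothetical; the structure name, the bus_addrs
field and the exact layout are invented for illustration and exist nowhere
in Xen today:

    /* Hypothetical: foreign mapping op that also returns IOMMU/bus addresses. */
    struct xen_add_to_physmap_dma      /* name invented for this sketch */
    {
        /* IN */
        domid_t  domid;                /* domain doing the mapping (dom0) */
        domid_t  foreign_domid;        /* domain whose gfns are mapped    */
        uint16_t size;                 /* number of entries               */
        XEN_GUEST_HANDLE(xen_pfn_t) gpfns;      /* foreign gfns to map    */
        /* OUT */
        XEN_GUEST_HANDLE(uint64_t)  bus_addrs;  /* per-entry DMA handles  */
        XEN_GUEST_HANDLE(int)       errs;       /* per-entry error codes  */
    };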
> > >
> > > I think we want to be able to avoid setting up a PTE in the MMU since
> > > it's not needed in most (or perhaps all?) cases.
> >
> > Another (wildly under-informed) thought then:
> >
> > A while back GlobalLogic proposed (for ARM) an infrastructure for
> > allowing dom0 drivers to maintain a set of IOMMU-like pagetables under
> > hypervisor supervision (they called these "remoteprocessor iommu").
> >
> > I didn't fully grok what it was at the time, let alone remember the
> > details properly now, but AIUI it was essentially a framework for
> > allowing a simple Xen-side driver to provide PV-MMU-like update
> > operations for a set of PTs which were not the main processor's PTs,
> > with validation etc.
> >
> > See http://thread.gmane.org/gmane.comp.emulators.xen.devel/212945
> >
> > The introductory email even mentions GPUs...
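For readers unfamiliar with the PV-MMU model being referenced: in the
existing x86 PV interface the guest proposes page-table writes and Xen
validates them before applying. A GPU/remoteproc page-table analogue would
presumably follow the same shape; the structure below is the existing one
from the public headers, quoted from memory:

    /* Existing PV MMU update request (public/xen.h): the guest proposes a
     * PTE write and Xen validates it before applying.  The low bits of
     * 'ptr' select the update type (e.g. MMU_NORMAL_PT_UPDATE). */
    struct mmu_update {
        uint64_t ptr;   /* machine address of the PTE to update */
        uint64_t val;   /* proposed new PTE contents             */
    };
    /* Issued via HYPERVISOR_mmu_update(reqs, count, &done_count, domid). */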
> >
> 
> That series does indeed seem to be very relevant.
> 
>   Paul

I'm not familiar with the ARM architecture, but based on a brief reading that
series covers the assigned case, where the MMU is exclusively owned by one VM,
so some form of MMU virtualization is required and it is straightforward.

XenGT, however, is a shared GPU usage model:

- The global GPU page table is partitioned among VMs. A shared shadow
  global page table is maintained, containing translations for multiple
  VMs simultaneously, based on the partitioning information.
- Multiple per-process GPU page tables are created by each VM, and
  corresponding shadow per-process GPU page tables are created for them.
  The shadow page table is switched at GPU context switch, just as is done
  for CPU shadow page tables (see the rough sketch below).
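To make that flow concrete, here is a rough sketch of what the dom0 device
model would do on a trapped guest write to a GPU page-table entry and at GPU
context switch. Every function name below (gpu_pte_to_gfn, xengt_gfn_to_dma,
write_shadow_pte, load_shadow_ppgtt_root, ...) is invented for illustration;
this is not the actual XenGT code:

    #include <stdint.h>

    /* Hypothetical helpers, declared only so the sketch is self-contained. */
    uint64_t gpu_pte_to_gfn(uint64_t guest_pte);
    uint64_t xengt_gfn_to_dma(int vm_id, uint64_t gfn);  /* needs the new
                                                            translation i/f */
    uint64_t gpu_pte_from_dma(uint64_t dma, uint64_t guest_pte);
    void     write_shadow_pte(int vm_id, unsigned int idx, uint64_t spte);
    uint64_t shadow_ppgtt_root_of(int vm_id);
    void     load_shadow_ppgtt_root(uint64_t root);

    /* The guest wrote 'guest_pte' (which holds a gfn) at 'idx' of one of its
     * GPU page tables: translate and mirror it into the shadow table that
     * the hardware actually walks. */
    static void shadow_gpu_pte_write(int vm_id, unsigned int idx,
                                     uint64_t guest_pte)
    {
        uint64_t gfn  = gpu_pte_to_gfn(guest_pte);
        uint64_t dma  = xengt_gfn_to_dma(vm_id, gfn);
        uint64_t spte = gpu_pte_from_dma(dma, guest_pte); /* keep attr bits */

        write_shadow_pte(vm_id, idx, spte);
    }

    /* At GPU context switch, point the hardware at the next VM's shadow
     * per-process page table, analogous to CPU shadow paging. */
    static void gpu_context_switch(int next_vm_id)
    {
        load_shadow_ppgtt_root(shadow_ppgtt_root_of(next_vm_id));
    }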

So you can see that the shared MMU virtualization described above is very GPU
specific, which is why we did not put it in the Xen hypervisor, and thus an
additional interface is required to obtain the p2m mapping to assist our
shadow GPU page table handling.

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

