
Re: [Xen-devel] [PATCH v5 12/15] x86: add iommu_op to enable modification of IOMMU mappings



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> Sent: 07 August 2018 09:38
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien Grall
> <julien.grall@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>
> Subject: RE: [Xen-devel] [PATCH v5 12/15] x86: add iommu_op to enable
> modification of IOMMU mappings
> 
> > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> > Sent: Tuesday, August 7, 2018 4:33 PM
> >
> > >
> > > > From: Paul Durrant
> > > > Sent: Saturday, August 4, 2018 1:22 AM
> > > >
> > > > This patch adds an iommu_op which checks whether it is possible or
> > > > safe for a domain to modify its own IOMMU mappings and, if so, creates
> > > > a rangeset to track modifications.
> > >
> > > Have to say that there might be a concept mismatch between us,
> > > so I will stop review here until we get aligned on the basic
> > > understanding.
> > >
> > > What an IOMMU does is provide DMA isolation between devices.
> > > Each device can be hooked up to a different translation structure
> > > (representing a different bfn address space). The Linux kernel uses this
> > > mechanism to harden kernel drivers (through the dma APIs). Multiple
> > > devices can also be attached to the same address space, as used by the
> > > hypervisor when devices are assigned to the same VM.
> > >
> >
> > Indeed.
> >
> > > Now with pvIOMMU exposed to dom0, dom0 could use it to harden
> > > kernel drivers too. Then there will be multiple bfn address spaces:
> > >
> > > - A default bfn address space created by Xen, where bfn = pfn
> > > - Multiple per-bdf bfn address spaces created by Dom0, where
> > > bfn is completely unrelated to pfn.
> > >
> > > The default space should not be changed by Dom0. It is attached
> > > to devices for which dom0 doesn't enable pviommu mapping.
> >
> > No, that's not the point here. I'm not trying to re-architect Xen's IOMMU
> > handling. All the IOMMU code in Xen, AFAICT, is built around the assumption
> > that there is one set of page tables per VM and that all devices assigned to
> > the VM get the same page tables. I suspect trying to change that would be a
> > huge can of worms, and I have no need to go there for my purposes.
> 
> Don't just think from the Xen side; think about how this IOMMU looks
> from Dom0's point of view.
> 
> Ideally the pviommu driver is a new vendor driver attached to the iommu
> core within dom0. It needs to provide iommu dma ops to support
> dma_alloc/map operations from different device drivers. The iommu
> core maintains a separate iova space for each device, so device
> drivers can be isolated from each other.

But there is nothing that says the IOVA space cannot be global, and that 
is good enough for a PV dom0.
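
Just to illustrate what I mean by a single global IOVA space (purely a
sketch, not code from the series; the names below are invented): every
device draws from the same allocator, so there is one space for the whole
domain rather than one per bdf.

#include <stdint.h>
#include <stdio.h>

#define IOVA_BASE  0x100000000ULL  /* arbitrary start of the shared space */
#define PAGE_SHIFT 12

static uint64_t next_free = IOVA_BASE; /* one cursor shared by all devices */

/* Allocate nr_pages of IOVA. The bdf is accepted but ignored, because
 * every device shares the same global address space. */
static uint64_t iova_alloc(uint16_t bdf, unsigned int nr_pages)
{
    uint64_t iova = next_free;

    (void)bdf;
    next_free += (uint64_t)nr_pages << PAGE_SHIFT;
    return iova;
}

int main(void)
{
    /* Two different devices still draw from the one shared space. */
    printf("00:1f.2 -> %#llx\n", (unsigned long long)iova_alloc(0x00fa, 4));
    printf("03:00.0 -> %#llx\n", (unsigned long long)iova_alloc(0x0300, 4));
    return 0;
}

The per-device isolation you describe would need one such allocator (and
one set of page tables) per bdf, which is more than a PV dom0 needs.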

> 
> Now dom0 gets only one global space, so why does dom0 need
> to enable pviommu at all?

As I explained in another reply, it is primarily to allow a PV dom0 to have a 
BFN:GFN map. Since a PV domain maintains its own P2M, it is the domain that 
maintains the mapping. That is all I need to do.
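
To put it another way (an illustrative sketch only; the op layout and the
names below are invented, not lifted from the series): the domain asks Xen
to install a BFN -> GFN mapping in its single set of IOMMU page tables and
records the association itself; Xen only has to validate the GFN, install
the mapping and note the modified range (hence the rangeset).

#include <stdint.h>
#include <stdio.h>

typedef uint64_t bfn_t;  /* bus frame number, as seen by the device */
typedef uint64_t gfn_t;  /* guest frame number, owned by the PV domain */

struct pv_iommu_map_request {
    bfn_t bfn;
    gfn_t gfn;
    uint32_t flags;      /* e.g. read/write permission bits */
};

/* Stand-in for the hypercall: Xen would validate the gfn and install
 * bfn -> mfn(gfn) in the domain's single set of IOMMU page tables. */
static int xen_pv_iommu_map(const struct pv_iommu_map_request *req)
{
    printf("map bfn %#llx -> gfn %#llx\n",
           (unsigned long long)req->bfn, (unsigned long long)req->gfn);
    return 0;
}

/* The PV domain owns its P2M, so it also owns the BFN:GFN association
 * and simply records each successful mapping itself. */
#define MAX_TRACKED 1024
static struct pv_iommu_map_request bfn_gfn_map[MAX_TRACKED];
static unsigned int nr_tracked;

int main(void)
{
    struct pv_iommu_map_request req = {
        .bfn = 0x100000, .gfn = 0x2345, .flags = 3,
    };

    if ( xen_pv_iommu_map(&req) == 0 && nr_tracked < MAX_TRACKED )
        bfn_gfn_map[nr_tracked++] = req;

    return 0;
}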

> 
> >
> > >
> > > Per-bdf address spaces can be changed by Dom0 and are attached to
> > > devices for which dom0 enables pviommu mapping. Then pviommu ops
> > > should accept a bdf parameter, and internally Xen needs to maintain
> > > multiple page tables for dom0 and find the right page table, according
> > > to the specified bdf, to complete the operation.
> > >
> > > Now your series looks to assume just one bfn address space across
> > > all assigned devices per domain... I'm not sure how that works.
> > >
> >
> > It does make that assumption because that assumption is baked into Xen's
> > IOMMU support.
> >
> > > Did I misunderstand anything?
> >
> > Only perhaps that moving away from per-VM IOMMU pagetables would be
> > something I could do without making very invasive and lengthy changes
> > to Xen's IOMMU code.
> >
> 
> it's a must imo.

I may as well give up then. That's a mammoth task, akin to e.g. moving to 
per-vcpu rather than per-domain P2Ms. I don't have a spare year to do the work.
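
Just to illustrate why (a rough structural sketch with invented names, not
the real Xen code): today every op resolves to the domain's single set of
page tables, whereas per-bdf spaces would put a device lookup, and a
separate page-table lifecycle, on every map/unmap path.

#include <stdint.h>

struct iommu_pgtable;                /* opaque set of IOMMU page tables */

/* Today: one set of page tables per domain. Every assigned device is
 * attached to the same tables, so ops never need a bdf. */
struct domain_iommu_single {
    struct iommu_pgtable *pgtable;
};

/* Per-bdf address spaces: a lookup from bdf to page tables on every
 * map/unmap, plus per-device creation/teardown and a default space for
 * devices that haven't enabled pviommu mapping. */
#define NR_BDF 0x10000               /* 16-bit bus/dev/fn space */
struct domain_iommu_per_bdf {
    struct iommu_pgtable *dflt;      /* default space, where bfn = pfn */
    struct iommu_pgtable *pgtable[NR_BDF];
};

struct iommu_pgtable *
lookup_pgtable(struct domain_iommu_per_bdf *di, uint16_t bdf)
{
    return di->pgtable[bdf] ? di->pgtable[bdf] : di->dflt;
}

Retro-fitting that second model onto all of Xen's existing IOMMU code is
the mammoth task I mean.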

  Paul

> 
> Thanks
> Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

