
Re: [Xen-devel] [PATCH 0/5] VT-d support for PV guests



On Tue, May 20, 2008 at 03:10:38PM +0100, Espen Skoglund wrote:

> Anyhow, read-only access can indeed be supported for VT-d.  I just
> wanted to get basic PV guest support in there first.  Also, I'm not
> familiar with AMD's IOMMU, but I would guess that it also supports
> read-only access.

It does.
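
Just to illustrate what read-only support means at the mapping level
(made-up code below, not the actual VT-d or AMD IOMMU driver
interface): a read-only grant simply translates into an IOMMU entry
without the write permission bit.

#include <stdbool.h>

/* Hypothetical permission flags and mapping call, for illustration
 * only; the stub keeps the fragment self-contained. */
#define IOMMUF_readable  (1u << 0)
#define IOMMUF_writable  (1u << 1)

struct domain;

static int iommu_map_page(struct domain *d, unsigned long gfn,
                          unsigned long mfn, unsigned int flags)
{ (void)d; (void)gfn; (void)mfn; (void)flags; return 0; }

/* Map a granted frame, honouring the grant's read-only flag. */
static int map_granted_frame(struct domain *d, unsigned long gfn,
                             unsigned long mfn, bool readonly)
{
    unsigned int flags = IOMMUF_readable;

    if (!readonly)
        flags |= IOMMUF_writable;

    return iommu_map_page(d, gfn, mfn, flags);
}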

> > It would be good if you could provide a bit more detail on when
> > the patch populates IOMMU entries, and how it keeps them in
> > sync. For example, does the IOMMU map all the guest's memory, or
> > just that which will soon be the subject of a DMA? How synchronous
> > is the patch in removing mappings, e.g. due to page type changes
> > (pagetable pages, balloon driver) or due to unmapping grants?
> 
> All writable memory is initially mapped in the IOMMU.  Page type
> changes are also reflected there.  In general all maps and unmaps to
> a domain are synced with the IOMMU.  According to the feedback I got
> I apparently missed some places, though.  Will look into this and
> fix it.
> 
> It's clear that performance will pretty much suck if you do frequent
> updates in grant tables, but the whole idea of having passthrough
> access for NICs is to avoid this netfront/netback data plane scheme
> altogether.  This leaves you with grant table updates for block
> device access.  I don't know what the expected update frequency is
> for that one.
> 
> It must be noted that reflecting grant table updates in the IOMMU is
> required for correctness.  The alternative --- which is indeed
> possible --- is to catch DMA faults to such memory regions and
> somehow notify the driver to, e.g., drop packets or retry the DMA
> transaction once the IOMMU mapping has been established.

That would assume that the device can retry failed DMAs or otherwise
deal with them. The majority of devices can't.
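
To make the synchronisation Espen describes above a bit more concrete,
the places where the IOMMU has to be updated look roughly like the
sketch below. All the function names are illustrative stand-ins, not
code from the actual patches.

/* Hedged sketch of the synchronisation points discussed above.  The
 * helpers are stubs so the fragment is self-contained; they are not
 * the real Xen/VT-d interfaces. */

struct domain;

static int  iommu_map_page(struct domain *d, unsigned long gfn,
                           unsigned long mfn, int writable)
{ (void)d; (void)gfn; (void)mfn; (void)writable; return 0; }

static int  iommu_unmap_page(struct domain *d, unsigned long gfn)
{ (void)d; (void)gfn; return 0; }

static void iommu_flush_iotlb(struct domain *d)
{ (void)d; }

/* A frame loses its writable type (it becomes a pagetable page, or is
 * released through the balloon driver): DMA writes to it must stop
 * before the type change is allowed to complete. */
static void page_type_changed(struct domain *d, unsigned long gfn,
                              unsigned long mfn, int still_writable)
{
    if (still_writable)
        iommu_map_page(d, gfn, mfn, 1);
    else
        iommu_unmap_page(d, gfn);   /* or remap read-only */
    iommu_flush_iotlb(d);           /* synchronous flush */
}

/* Grant unmap: the foreign frame has to disappear from the mapping
 * domain's IOMMU context at the same time as from its pagetables,
 * otherwise an assigned device could still DMA into it. */
static void grant_unmapped(struct domain *mapper, unsigned long gfn)
{
    iommu_unmap_page(mapper, gfn);
    iommu_flush_iotlb(mapper);
}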

> > There's been a lot of discussion at various xen summits about
> > different IOMMU optimizations (e.g. for IBM Summit, Power etc) and
> > I'd like to understand exactly what tradeoffs your implementation
> > makes. Anyhow, good stuff, thanks!

I think the major difference is that with Espen's patches all of the
PV guest's memory is exposed to the device (i.e., it provides
inter-guest protection, but no intra-guest protection). Our patches
aimed at providing both inter-guest and intra-guest protection, and
incurred a substantial performance hit (cf. our OLS '07 paper on
IOMMU performance). There will be a paper at USENIX '08 by Willmann
et al. on different IOMMU mapping strategies which provide varying
levels of inter-/intra-guest protection at varying performance cost.
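
For anyone who hasn't read the papers, the two ends of the spectrum
look roughly like the sketch below. The helpers are made up for
illustration and neither fragment is taken from either patch set.

#include <stddef.h>

struct domain;

/* Stubs so the fragment is self-contained; not real interfaces. */
static int  iommu_map_page(struct domain *d, unsigned long gfn,
                           unsigned long mfn)
{ (void)d; (void)gfn; (void)mfn; return 0; }

static void iommu_unmap_page(struct domain *d, unsigned long gfn)
{ (void)d; (void)gfn; }

/* Strategy 1 (roughly what Espen's patches do, per the description
 * above): map every writable guest frame once, at domain build time.
 * An assigned device can then DMA anywhere inside the guest, so other
 * guests and the hypervisor are protected, but the guest is not
 * protected from its own devices.  No per-DMA map/unmap cost. */
static void map_whole_guest(struct domain *d, unsigned long nr_frames,
                            unsigned long (*gfn_to_mfn)(struct domain *,
                                                        unsigned long))
{
    for (unsigned long gfn = 0; gfn < nr_frames; gfn++)
        iommu_map_page(d, gfn, gfn_to_mfn(d, gfn));
}

/* Strategy 2 (the kind of per-DMA mapping our OLS '07 measurements
 * looked at): create the IOMMU mapping only for the buffer about to
 * be DMAed and tear it down straight afterwards.  This also protects
 * the guest from its own devices, but every DMA pays for a map, an
 * unmap and an IOTLB flush. */
struct dma_buf { unsigned long gfn, mfn; size_t len; };

static void dma_one_buffer(struct domain *d, struct dma_buf *buf,
                           void (*do_dma)(struct dma_buf *))
{
    iommu_map_page(d, buf->gfn, buf->mfn);
    do_dma(buf);
    iommu_unmap_page(d, buf->gfn);
}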

> I can't say I know much about those other IOMMUs, but as far as I
> know they are quite limited in that they only support a fixed number
> of mappings

The IBM Calgary/CalIOC2 family of IOMMUs supports 4GB address spaces.

> and can not differentiate between different DMA sources (i.e., PCI
> devices).

Calgary/CalIOC2 have a per-bus translation table. In practice most
devices are on their own bus in these systems, so you effectively get
per-device translation tables.
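
In other words, the translation context is selected by bus number
alone, roughly like the sketch below (illustrative only, not the
Linux Calgary driver structures):

#include <stdint.h>

#define MAX_PCI_BUSES 256

/* One translation table per PCI bus; devices behind the same bus
 * share one DMA address space. */
struct iommu_table;

static struct iommu_table *bus_table[MAX_PCI_BUSES];

/* The device/function number is deliberately ignored.  Because these
 * systems mostly put each device behind its own bus, this degenerates
 * to per-device tables in practice. */
static struct iommu_table *table_for_device(uint8_t bus, uint8_t devfn)
{
    (void)devfn;
    return bus_table[bus];
}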

Cheers,
Muli

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

