
Re: [Xen-devel] [Hackathon minutes] PV network improvements

On Mon, 2013-05-20 at 19:31 +0100, Wei Liu wrote:
> On Mon, May 20, 2013 at 03:08:05PM +0100, Stefano Stabellini wrote:
> [...]
> > J) Map the whole physical memory of the machine in dom0
> > If mapping/unmapping or copying slows us down, could we just keep the
> > whole physical memory of the machine mapped in dom0 (with corresponding
> > IOMMU entries)?
> > At that point the frontend could just pass mfn numbers to the backend,
> > and the backend would already have them mapped.
> > From a security perspective it doesn't change anything when running
> > the backend in dom0, because dom0 is already capable of mapping random
> > pages of any guests. QEMU instances do that all the time.

Actually, there are mechanisms in place to remove this privilege from
dom0: specifically, there is an XSM class (terminology?) for
non-migratable domains which effectively equates to exactly this
restriction. Of course you need a stub qemu too.

> > But it would take away one of the benefits of deploying driver domains:
> > we wouldn't be able to run the backends at a lower privilege level.
> > However it might still be worth considering as an option? The backend is
> > still trusted and protected from the frontend, but the frontend wouldn't
> > be protected from the backend.
> > 
> I think Dom0 mapping all machine memory is a good starting point. As for
> the driver domain, can we not have a driver domain map all of its
> target guests' machine memory? What's the security implication here?

It gives the driver domain an enormous amount of privilege which it
doesn't require and which it could use to compromise the integrity of
the system (i.e. to snoop any guest's memory and extract "secrets"). It
reduces our security/isolation story to "effectively equivalent to
KVM", and that isolation is one of the big selling points for Xen. I
don't think we should go down this path either for dom0 or for driver
domains, and I am absolutely positive that there are other approaches
we should be investigating before we even start to consider this one.
George's idea of not flushing at unmap time, with co-operation from the
frontend to not reuse the pages until it has batched up a bigger flush,
seems like an interesting one to look into. By choosing the sizes and
timings correctly it may even be that by the time domU wants to reuse
the page the TLB has already been flushed for some other reason (a
context switch etc.) and the hypervisor can elide the expense.

There are probably mechanisms in the guest kernels which allow us to
hold on to memory but still provide a memory pressure hook so we can
flush immediately instead of OOMing.


Xen-devel mailing list


