
Re: [Xen-devel] [Hackathon minutes] PV network improvements

On Mon, May 20, 2013 at 03:08:05PM +0100, Stefano Stabellini wrote:
> Hi all,
> these are Konrad's and my notes (mostly Konrad's) on possible
> improvements of the PV network protocol, taken at the Hackathon.

Just for completeness, these items are future working items. I'm now
upstreaming my queues to lay a baseline for these items, which include:

1. split event channels support (generally useful)
2. netback global page pool (prerequisite for 1:1 model)
3. kthread + NAPI 1:1 model (prerequisite for multiqueue)

> A) Network bandwidth: multipage rings
> The maximum amount of outstanding data it can have is 896KB (64KB of
> data uses 18 slots out of 256; 256 / 18 = 14, and 14 * 64KB = 896KB).
> This can be expanded by using a multi-page ring. This would benefit
> NFS and bulk data transfers (such as netperf data).

This is in my queue as well. It's a generic change to the xenbus
interface which can benefit not only network but also block devices.
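As a back-of-the-envelope sketch of the arithmetic above, here is how the
outstanding-data limit scales with the number of ring pages. The constants
(256 slots per ring page, up to 18 slots for a 64KB packet) come from the
discussion; the function name is just for illustration:

```python
# Back-of-the-envelope: max outstanding data on a PV network ring.
# Per the thread: 256 slots per ring page, and a 64KB packet can
# consume up to 18 slots.
SLOTS_PER_PAGE = 256
SLOTS_PER_64K_PACKET = 18
PACKET_BYTES = 64 * 1024

def max_outstanding(ring_pages=1):
    # Whole 64KB packets that fit in the ring, times the packet size.
    packets = (ring_pages * SLOTS_PER_PAGE) // SLOTS_PER_64K_PACKET
    return packets * PACKET_BYTES

print(max_outstanding(1) // 1024)  # -> 896  (KB, single-page ring)
print(max_outstanding(4) // 1024)  # -> 3584 (KB, four-page ring)
```

So each additional ring page adds roughly another 896KB of in-flight data,
which is where the bulk-transfer benefit comes from.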

> J) Map the whole physical memory of the machine in dom0
> If mapping/unmapping or copying slows us down, could we just keep the
> whole physical memory of the machine mapped in dom0 (with corresponding
> IOMMU entries)?
> At that point the frontend could just pass mfn numbers to the backend,
> and the backend would already have them mapped.
> From a security perspective it doesn't change anything when running
> the backend in dom0, because dom0 is already capable of mapping random
> pages of any guests. QEMU instances do that all the time.
> But it would take away one of the benefits of deploying driver domains:
> we wouldn't be able to run the backends at a lower privilege level.
> However it might still be worth considering as an option? The backend is
> still trusted and protected from the frontend, but the frontend wouldn't
> be protected from the backend.

I think Dom0 mapping all machine memory is a good starting point. As for
the driver domain, can we not have the driver domain map all of its
target's machine memory? What's the security implication there?


Xen-devel mailing list


