
Re: [Xen-devel] question about memory allocation for driver domain





On 09/02/2015 18:53, Ian Campbell wrote:
On Mon, 2015-02-09 at 16:31 +0800, Julien Grall wrote:
It seems logical to me that destroying/creating domd in a row works fine.
But this use case is too simple :).

Let's imagine we decide to start classical domains (i.e. no 1:1 mapping)
before creating domd (the 1:1 domain). As the free memory may be
sparse, allocating one large RAM region may not work and therefore the
domain allocation fails.

Along the same lines, the host RAM may be split into multiple non-contiguous
banks. In this case, the RAM size of the 1:1 domain cannot be bigger
than the size of a bank. You will never know which bank is used as,
IIRC, the allocator behaviour changes between debug and non-debug builds.
We had the same issue on DOM0 before support for multiple banks was
added. It sounds like you may want multiple-bank support for an
upstream use case.

It seems to me that any use of 1:1 memory for !dom0 needs to be from a
preallocated region which is allocated for this purpose at boot and then
reserved for this specific allocation.

e.g. let's imagine a hypervisor option mem_11_reserve=256M,256M,128M
which would, at boot time, allocate 2x 256M contiguous regions and
1x 128M one. When building a guest, some mechanism (a new hypercall, some
other trickery, etc.) indicates that the guest being built is
supposed to use one of these regions instead of the usual domheap
allocator.

This would allow for a boot time configurable number of 1:1 regions. I
think this would work for the embedded use case since the domains which
have these special properties are well defined in size and number and so
can be allocated up front.
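
Roughly, the boot-time reservation could look something like the sketch
below (just an illustration: the option name, the array bounds and both
helpers are made up, while custom_param(), parse_size_and_unit(),
alloc_domheap_pages() and __initcall() are the existing Xen facilities
being reused):

/*
 * Sketch only: parse "mem_11_reserve=256M,256M,128M" from the hypervisor
 * command line, then carve the regions out of the heap once it is up.
 */
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/mm.h>

#define MAX_11_REGIONS 4

static paddr_t __initdata reserve_11_size[MAX_11_REGIONS];
static unsigned int __initdata nr_reserve_11;

static struct {
    paddr_t start;
    paddr_t size;
    bool_t  used;       /* claimed by a domain at build time */
} direct_map_region[MAX_11_REGIONS];
static unsigned int nr_direct_map_region;

static void __init parse_mem_11_reserve(const char *s)
{
    while ( *s != '\0' && nr_reserve_11 < MAX_11_REGIONS )
    {
        reserve_11_size[nr_reserve_11++] = parse_size_and_unit(s, &s);
        if ( *s == ',' )
            s++;
    }
}
custom_param("mem_11_reserve", parse_mem_11_reserve);

/* Runs after the heap is initialised, so alloc_domheap_pages() works. */
static int __init reserve_11_regions(void)
{
    unsigned int i;

    for ( i = 0; i < nr_reserve_11; i++ )
    {
        paddr_t size = reserve_11_size[i];
        struct page_info *pg =
            alloc_domheap_pages(NULL, get_order_from_bytes(size), 0);

        if ( !pg )
        {
            printk("Failed to reserve %"PRIpaddr" bytes for 1:1 domains\n",
                   size);
            continue;
        }

        direct_map_region[nr_direct_map_region].start = page_to_maddr(pg);
        direct_map_region[nr_direct_map_region].size = size;
        nr_direct_map_region++;
    }

    return 0;
}
__initcall(reserve_11_regions);

Domain creation would then pick a free entry from direct_map_region[]
(via a new domctl flag or hypercall) instead of going through the normal
domheap allocator.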

That seems a fair trade-off for using a 1:1 mapping for a domain. And it doesn't require modifying the allocator.


The next problem is ballooning. When the guest balloons out memory, the
pages will be freed by Xen and can be re-used by another domain.

I think we need to do as we do for 1:1 dom0 here and not hand back the
memory on decrease reservation, but instead punch a hole in the p2m
while keeping the mfn in reserve.

It sounds like a fair trade-off in order to support 1:1 domain mapping.

IOW ballooning is not supported for such domains (we only go as far as
punching the hole to allow for the other use case of ballooning, which is
to make a p2m hole for the Xen backend driver to use for grant maps).

If I'm not mistaken, netback balloons out memory to allocate some pages. But yeah, as you said, extending the DOM0 1:1 concept would avoid such a problem.
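
To make the "punch a hole but keep the mfn in reserve" idea concrete, the
handling on decrease reservation could look roughly like this (a sketch
only: the helper and the per-domain list are hypothetical, while
guest_physmap_remove_page(), mfn_to_page() and page_list_add() exist; the
real change would sit in the XENMEM_decrease_reservation path in
common/memory.c):

/* Sketch: a direct-mapped guest gives back a page. */
static void direct_mapped_remove_page(struct domain *d, unsigned long gfn)
{
    /* gfn == mfn by construction for a 1:1 domain. */
    struct page_info *pg = mfn_to_page(gfn);

    /* Punch the hole in the p2m so the gfn can be used for grant maps. */
    guest_physmap_remove_page(d, gfn, gfn, 0);

    /*
     * But do NOT free the page back to the heap: park it on a per-domain
     * list (hypothetical field) so it can only ever be re-populated into
     * the same guest at the same address, preserving the 1:1 property.
     */
    page_list_add(pg, &d->arch.direct_map_reserved);
}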

The code for the 1:1 mapping in Xen (aside from the allocation) is domain-agnostic. Oleksandr, I think modifying is_domain_direct_mapped (include/asm-arm/domain.h) should be enough here, something along the lines of the sketch below.
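
For reference, I mean something like this in include/asm-arm/domain.h
(the directmap field is hypothetical; today the macro effectively means
"is this dom0?"):

struct arch_domain
{
    /* ... existing fields ... */
    bool_t directmap;   /* RAM mapped 1:1 (gfn == mfn); hypothetical field */
};

/* Today effectively "is this dom0?"; would become a per-domain property. */
#define is_domain_direct_mapped(d)  ((d)->arch.directmap)

The flag would be set at domain creation time, from the same mechanism
that selects one of the reserved regions.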

The last problem, but not the least, is that depending on which backend you
are running in the 1:1 domain (such as blkback), grants won't be mapped 1:1
to the guest, so you will have to use swiotlb in order to use the right
DMA address. For instance, without swiotlb, the guest won't be able to use a
disk partition via blkfront. This is because the backend gives the grant
address directly to the block driver. To solve this, we have to use
swiotlb and set specific DMA callbacks. For now, they are only used for
DOM0.

Not much we can do here except extend the dom0 code to
conditionally enable itself for other domains.

You mean in the guest kernel? Maybe we have to introduce a new feature flag indicating whether the domain is using 1:1 mapping or not?

It would help in Xen code too.
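
For what it's worth, on the Linux side the dom0-only hook in
arch/arm/xen/mm.c is roughly the piece such a flag would gate. A sketch,
assuming a hypothetical XENFEAT_direct_mapped feature bit advertised by
Xen (the other names below are from memory of the current dom0 path and
worth double-checking):

/* Sketch based on the existing xen_mm_init() in arch/arm/xen/mm.c. */
#include <linux/init.h>
#include <xen/xen.h>
#include <xen/features.h>
#include <xen/swiotlb-xen.h>
#include <asm/dma-mapping.h>

static int __init xen_mm_init(void)
{
        if (!xen_domain())
                return 0;

        /*
         * Today only the initial domain (1:1 mapped dom0) installs the
         * Xen swiotlb DMA ops.  With a feature flag, any direct-mapped
         * backend domain could do the same.
         */
        if (!xen_initial_domain() &&
            !xen_feature(XENFEAT_direct_mapped))        /* hypothetical flag */
                return 0;

        xen_swiotlb_init(1, false);
        xen_dma_ops = &xen_swiotlb_dma_ops;

        return 0;
}
arch_initcall(xen_mm_init);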

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

