
Re: [Xen-devel] (v2) Design proposal for RMRR fix

[ BTW, Konrad, could you do a bit of quote trimming when quoting such a
long e-mail?  It takes a non-trivial amount of time to figure out where
you've actually said something. Thanks. :-) ]

On 01/13/2015 04:45 PM, Konrad Rzeszutek Wilk wrote:
>> STEP-1. domain builder
>> say the default layout w/o reserved regions would be:
>>      lowmem:         [0, 0xbfffffff]
>>      mmio hole:      [0xc0000000, 0xffffffff]
>>      highmem:        [0x100000000, 0x140000000]
>> domain builder then queries reserved regions from xenstore, 
>> and tries to avoid conflicts.
> Perhaps an easier way to do this is to use the existing
> mechanism we have - that is, XENMEM_memory_map (which,
> BTW, e820_host uses). If all of this is done in libxl (which
> already does this based on the host E820, though it can
> be modified to query the hypervisor for other 'reserved
> regions'), and hvmloader is modified to use XENMEM_memory_map
> and base its E820 on that (and also QEMU-xen), then we solve
> this problem - and maybe also http://bugs.xenproject.org/xen/bug/28
> (lots of handwaving).

Hmm -- yes, since we have that, that might be a better option.

Having qemu-upstream read XENMEM_memory_map for a domain would avoid
having to pass a massive set of parameters to qemu for RMRRs (and
would allow us to get rid of mmio_hole_size as well).

But I don't think by itself that it will fix
http://bugs.xenproject.org/xen/bug/28, because that's ultimately about
hvmloader *moving* memory around after the domain has been created.
We'd still need to add a way for hvmloader to tell qemu about changes to
the memory map on-the-fly.

Would we need to have hvmloader then update the e820 in Xen as well, so
that future calls to XENMEM_memory_map returned accurate values?

