
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On 27/07/15 17:02, Dario Faggioli wrote:
> On Mon, 2015-07-27 at 16:13 +0100, David Vrabel wrote:
>> On 16/07/15 11:32, Dario Faggioli wrote:
>>>
>>> Anyway, is there anything we can do to fix or workaround things?
>>
>> This thread has gotten a bit long...
>>
> Yep, indeed... :-(
> 
>> For Linux I would like to see:
>>
>> 1. No support for NUMA in PV guests -- if you want new MM features in a
>> guest use HVM.
>>
> Wow... Really? What about all the code we have in libxl and Xen to deal
> exactly with that?  What about making it possible to configure vNUMA for
> Dom0?

I don't think there is any (much?) PV-specific code in Xen/toolstack for
this, right?  It's common between HVM and PV, yes?
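
As far as I can see, the xl interface at least is identical for both
guest types.  A rough sketch of a two-node layout (from memory, so
please check xl.cfg(5) for the exact syntax on your tree):

  vnuma = [ [ "pnode=0", "size=2048", "vcpus=0-1", "vdistances=10,20" ],
            [ "pnode=1", "size=2048", "vcpus=2-3", "vdistances=20,10" ] ]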

I would prefer the effort to go into making a no-dm HVM dom0 work
instead, because that is a better long-term solution for dom0.

That said, if someone does the work for vNUMA in Linux PV guests and it
looks sensible and self-contained, then I would probably merge it.

>> 2. For HVM guests, use the existing hardware interfaces to present NUMA
>> topology.  i.e., CPUID, ACPI tables etc.  This will work for both kernel
>> and userspace and both will see the same topology.
>>
>> This also has the advantage that any hypervisor/toolstack work will also
>> be applicable to other guests (e.g., Windows).
>>
> Yeah, indeed. That's the downside of Juergen's "Linux scheduler
> approach". But the issue is there, even without taking vNUMA into
> account, and I think something like that would really help (only for
> Dom0, and Linux guests, of course).

I disagree.  Whether we're using vNUMA or not, Xen should still ensure
that the guest kernel and userspace see a consistent and correct
topology using the native mechanisms.
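
To make that concrete: this is roughly what a userspace tool does on
native hardware -- walk CPUID leaf 0xb (extended topology enumeration)
-- and it will keep doing exactly the same thing inside a guest, so
whatever topology the kernel is told via ACPI/vNUMA had better match
what this returns.  Illustrative sketch only (GCC's cpuid.h, error
handling trimmed):

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      unsigned int eax, ebx, ecx, edx, level;

      /* Leaf 0 reports the highest supported standard leaf. */
      if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx) || eax < 0xb)
          return 1;

      /* Walk the topology levels (SMT, core, ...) from leaf 0xb. */
      for (level = 0; ; level++) {
          __cpuid_count(0xb, level, eax, ebx, ecx, edx);
          if (!(ebx & 0xffff))   /* no logical CPUs at this level: done */
              break;
          printf("level %u: type %u, %u logical CPUs, x2APIC id %u\n",
                 level, (ecx >> 8) & 0xff, ebx & 0xffff, edx);
      }
      return 0;
  }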

David



 

