
Re: [Xen-devel] [PATCH 10/10] x86/cpuid: Always enable faulting for the control domain



>>> On 22.02.17 at 11:00, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 22/02/17 09:23, Jan Beulich wrote:
>>>>> On 20.02.17 at 12:00, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> The domain builder in libxc no longer depends on leaked CPUID information to
>>> properly construct HVM domains.  Remove the control domain exclusion.
>> Am I missing some intermediate step? As long as there's a raw
>> CPUID invocation in xc_cpuid_x86.c (which is still there in staging
>> and I don't recall this series removing it) it at least _feels_ unsafe.
> 
> Strictly speaking, the domain builder part of this was completed after
> my xsave adjustments.  All the guest-type-dependent information now
> comes from non-cpuid sources in libxc, or Xen ignores the toolstack
> values and recalculates information itself.
> 
> However, until the Intel leaves were complete, dom0 had a hard time
> booting with this change, as there was no toolstack-provided policy and
> no leakage from hardware.

So what, then, are the remaining CPUID uses in libxc needed for at
this point? Could they be removed in a prereq patch, to make clear
that all needed information is now obtained via hypercalls?

>> Also the change here then results in Dom0 observing different
>> behavior between faulting-capable and faulting-incapable hosts.
>> I'm not convinced this is desirable.
> 
> I disagree.  Avoiding the leakage is very desirable moving forwards.
> 
> Other side effects are that it makes PV and PVH dom0 functionally
> identical WRT CPUID, and PV userspace (which, unlike the kernel, tends
> not to be Xen-aware) sees sensible information.

I can see the upsides too, hence the "I'm not convinced" ...
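
For reference, the mechanism at issue is Intel CPUID faulting: bit 31
of MSR_INTEL_PLATFORM_INFO (0xce) advertises support, and setting bit 0
of MSR_INTEL_MISC_FEATURES_ENABLES (0x140) makes CPUID executed at
CPL > 0 fault, letting Xen answer from its curated policy instead. A
minimal sketch in the spirit of Xen's arch/x86/cpu/intel.c, assuming
rdmsr()/wrmsr() helpers (MSR numbers per the Intel SDM):

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_INTEL_PLATFORM_INFO         0x000000ce
    #define _PLATFORM_INFO_CPUID_FAULTING   31

    #define MSR_INTEL_MISC_FEATURES_ENABLES 0x00000140
    #define _MISC_FEATURES_CPUID_FAULTING   0

    /* Assumed MSR accessors. */
    extern uint64_t rdmsr(uint32_t msr);
    extern void wrmsr(uint32_t msr, uint64_t val);

    /*
     * Enable/disable CPUID faulting if the hardware advertises it.
     * While enabled, a CPUID executed at CPL > 0 raises #GP(0), which
     * the hypervisor intercepts and answers from its CPUID policy
     * rather than letting hardware values leak through.
     */
    static bool set_cpuid_faulting(bool enable)
    {
        uint64_t val;

        if ( !(rdmsr(MSR_INTEL_PLATFORM_INFO) &
               (1ULL << _PLATFORM_INFO_CPUID_FAULTING)) )
            return false;

        val = rdmsr(MSR_INTEL_MISC_FEATURES_ENABLES);
        if ( enable )
            val |= (1ULL << _MISC_FEATURES_CPUID_FAULTING);
        else
            val &= ~(1ULL << _MISC_FEATURES_CPUID_FAULTING);
        wrmsr(MSR_INTEL_MISC_FEATURES_ENABLES, val);

        return true;
    }

On faulting-incapable hardware this fails, which is exactly the
behavioral split debated above: dom0 sees policy-controlled values on
faulting-capable hosts and raw hardware leakage otherwise.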

Jan


