
Re: [Xen-devel] [PATCH] x86: correct CPUID output for out of bounds input



On 05/09/16 07:32, Jan Beulich wrote:
>>>> On 02.09.16 at 17:14, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 01/09/16 16:27, Jan Beulich wrote:
>>>>> +        {
>>>>> +            if ( d->arch.x86_vendor == X86_VENDOR_AMD )
>>>>> +            {
>>>>> +                *eax = 0;
>>>>> +                *ebx = 0;
>>>>> +                *ecx = 0;
>>>>> +                *edx = 0;
>>>>> +                return;
>>>>> +            }
>>>>> +            if ( input >> 16 )
>>>>> +                hvm_cpuid(0, &lvl, NULL, NULL, NULL);
>>>> Is this really the right way round?  The AMD method of "reserved always
>>>> as zero" is the more sane default to take.
>>> If anything I'd then say let's _always_ follow the AMD model.
>> It would certainly be better to default to AMD, and special case others
>> on an as-needed basis.
>>
>> Strictly speaking, following the AMD model is compatible with the
>> "Reserved" nature specified for Intel.
>>
>> Let's just go with this.
> Done. But before sending v2, just to be clear: the group check
> which you also didn't like won't go away. That's because if we
> dropped it, we'd hide all CPUID info outside the basic and extended
> groups, in particular (in case we run virtualized ourselves) any
> leaves coming from the lower-level hypervisor (most notably our own
> ones, if it's another Xen underneath).
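
For context, the two vendor behaviours being weighed up above differ only
in what an out-of-range leaf returns: AMD-style hardware reads it as all
zeroes, whereas Intel hardware returns the data of the highest advertised
basic leaf, which is why the quoted hunk re-reads leaf 0 to find that
maximum.  A rough sketch of the two models (not the eventual patch; the
helper names are invented purely for illustration):

#include <stdint.h>

/* Hypothetical helper standing in for the normal per-leaf lookup. */
void read_guest_leaf(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                     uint32_t *ecx, uint32_t *edx);

/* AMD model: an out-of-range leaf simply reads as all zeroes. */
static void oob_leaf_amd_model(uint32_t *eax, uint32_t *ebx,
                               uint32_t *ecx, uint32_t *edx)
{
    *eax = *ebx = *ecx = *edx = 0;
}

/* Intel model: an out-of-range leaf returns the data of the highest
 * advertised basic leaf instead. */
static void oob_leaf_intel_model(uint32_t max_basic_leaf,
                                 uint32_t *eax, uint32_t *ebx,
                                 uint32_t *ecx, uint32_t *edx)
{
    read_guest_leaf(max_basic_leaf, eax, ebx, ecx, edx);
}

Since all-zero output is also a legitimate reading of leaves documented
as "Reserved" on Intel, defaulting to the AMD behaviour everywhere lets
the vendor special case in the quoted hunk disappear.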

Architecturally speaking, it is not ok for any of our guests to be able
to see our hypervisor's leaves (i.e. the leaves of whatever hypervisor
we ourselves are running under).

The current call to cpuid_hypervisor_leaves() will clobber information
from the outer hypervisor anyway (although I note that dom0, running
under Xen under Xen, can observe the outer Xen's leaves if L1 is
configured with both the Xen and Viridian leaves).

I will be fixing this information leakage.  As for these bounds checks,
I would put the checks after the cpuid_hypervisor_leaves() call, and
exclude everything other than the base and extended groups.
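
A minimal sketch of that ordering (not the real Xen code: the
fill_guest_leaf() helper, the switch on the top nibble, and the assumed
return convention of cpuid_hypervisor_leaves() are all illustrative):

#include <stdint.h>

/* Assumed interface: returns nonzero if it recognised and filled a
 * hypervisor leaf, zero otherwise. */
int cpuid_hypervisor_leaves(uint32_t idx, uint32_t sub_idx,
                            uint32_t *eax, uint32_t *ebx,
                            uint32_t *ecx, uint32_t *edx);

/* Hypothetical helper standing in for the normal leaf handling. */
void fill_guest_leaf(uint32_t idx, uint32_t *eax, uint32_t *ebx,
                     uint32_t *ecx, uint32_t *edx);

static void guest_cpuid_sketch(uint32_t input, uint32_t *eax, uint32_t *ebx,
                               uint32_t *ecx, uint32_t *edx)
{
    /* Our own hypervisor leaves take precedence over anything an outer
     * hypervisor might expose in the same range. */
    if ( cpuid_hypervisor_leaves(input, 0, eax, ebx, ecx, edx) )
        return;

    switch ( input >> 28 )
    {
    case 0x0: /* basic group,    0x00000000 - 0x0fffffff */
    case 0x8: /* extended group, 0x80000000 - 0x8fffffff */
        fill_guest_leaf(input, eax, ebx, ecx, edx);
        break;
    default:  /* everything else is reserved from the guest's view */
        *eax = *ebx = *ecx = *edx = 0;
        break;
    }
}

That way the per-group maximum-leaf checks (and the AMD-style zeroing)
can live inside fill_guest_leaf(), after the hypervisor range has
already been dealt with.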

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

