
Re: [Xen-devel] [PATCH] x86: correct CPUID output for out of bounds input



On 01/09/16 13:56, Jan Beulich wrote:
>>>> On 01.09.16 at 13:23, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 24/08/16 16:31, Jan Beulich wrote:
>>> Another place where we should try to behave like real hardware; see
>>> the code comments.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -3358,6 +3358,31 @@ void hvm_cpuid(unsigned int input, unsig
>>>      if ( !edx )
>>>          edx = &dummy;
>>>  
>>> +    if ( input & 0xffff )
>>> +    {
>>> +        /*
>>> +         * Requests beyond the highest supported leaf within a group return
>>> +         * zero on AMD and the highest basic leaf output on others.
>>> +         */
>>> +        unsigned int lvl;
>>> +
>>> +        hvm_cpuid(input & 0xffff0000, &lvl, NULL, NULL, NULL);
>> I have specifically deferred fixing this issue so far, because I don't
>> want to increase the amount of recursion in hvm_cpuid().
>>
>> Also, because of the poor data structure for domain cpuid, this adds one,
>> and possibly two, extra loops over the unordered list.
>>
>>
>> On the way back from Toronto, I started experimenting with my
>> full-policy plans, including a structured information layout so that
>> cpuid.basic.max_leaf can be found directly, and started a guest_cpuid()
>> function intended to replace both pv_cpuid() and hvm_cpuid() in due
>> course.
>>
>> Would you be amenable to leaving this issue as-is for now, until there
>> is a more efficient way of fixing it?
> If you get this ready for 4.8, yes. Otherwise I think the variant here
> is better than nothing until yours arrives.

There is no way it will be done for 4.8.

~Andrew
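
The behaviour the patch comment describes (out-of-range leaves within a
group read as all zeroes on AMD, and as the output of the highest basic
leaf on other vendors) boils down to roughly the following standalone
sketch.  Every name in it (raw_cpuid, cpuid_with_limit, vendor_is_amd) is
hypothetical and only illustrates the idea; none of them are Xen symbols.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy in-range lookup standing in for the real per-domain CPUID data:
 * pretend the highest basic leaf is 0xd and the highest extended leaf is
 * 0x80000008, and return the leaf number in EAX as a marker.
 */
static void raw_cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                      uint32_t *ecx, uint32_t *edx)
{
    if ( leaf == 0 )
        *eax = 0xd;                  /* max basic leaf */
    else if ( leaf == 0x80000000 )
        *eax = 0x80000008;           /* max extended leaf */
    else
        *eax = leaf;                 /* marker, so the fallback is visible */
    *ebx = *ecx = *edx = 0;
}

static void cpuid_with_limit(uint32_t leaf, bool vendor_is_amd,
                             uint32_t *eax, uint32_t *ebx,
                             uint32_t *ecx, uint32_t *edx)
{
    uint32_t max_leaf, dummy;

    /* Leaf 0 of a group is always valid, so only limit-check the rest. */
    if ( leaf & 0xffff )
    {
        /* Leaf 0 of the group reports the group's highest leaf in EAX. */
        raw_cpuid(leaf & 0xffff0000, &max_leaf, &dummy, &dummy, &dummy);

        if ( leaf > max_leaf )
        {
            if ( vendor_is_amd )
            {
                /* AMD: out-of-range leaves read as all zeroes. */
                *eax = *ebx = *ecx = *edx = 0;
                return;
            }

            /* Others: return the output of the highest basic leaf. */
            raw_cpuid(0, &max_leaf, &dummy, &dummy, &dummy);
            leaf = max_leaf;
        }
    }

    raw_cpuid(leaf, eax, ebx, ecx, edx);
}

int main(void)
{
    uint32_t a, b, c, d;

    cpuid_with_limit(0x12, false, &a, &b, &c, &d);
    printf("non-AMD leaf 0x12 -> eax=%#x\n", a);   /* falls back to leaf 0xd */

    cpuid_with_limit(0x12, true, &a, &b, &c, &d);
    printf("AMD     leaf 0x12 -> eax=%#x\n", a);   /* all zeroes */

    return 0;
}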

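As a rough illustration only, the "structured information layout" Andrew
describes could look something like the fragment below, making the maximum
leaf of a group a direct field access rather than a walk over an unordered
list.  Again, these names are purely hypothetical, not the actual Xen code.

/* Hypothetical per-domain CPUID policy with O(1) access to group limits. */
struct cpuid_policy {
    struct {
        uint32_t max_leaf;           /* CPUID[0].EAX */
        /* ... remaining basic leaves, stored by index ... */
    } basic;
    struct {
        uint32_t max_leaf;           /* CPUID[0x80000000].EAX */
        /* ... remaining extended leaves ... */
    } extd;
};

static inline uint32_t basic_max_leaf(const struct cpuid_policy *p)
{
    return p->basic.max_leaf;        /* no list iteration needed */
}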