Re: [Xen-devel] [PATCH 1/2] libx86: Helper for clearing out-of-range CPUID leaves
>>> On 05.06.19 at 13:26, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 05/06/2019 10:45, Jan Beulich wrote:
>>>>> On 04.06.19 at 21:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> @@ -163,6 +170,58 @@ void x86_cpuid_policy_fill_native(struct cpuid_policy *p)
>>> recalculate_synth(p);
>>> }
>>>
>>> +void x86_cpuid_policy_clear_out_of_range_leaves(struct cpuid_policy *p)
>>> +{
>>> + unsigned int i;
>>> +
>>> + zero_leaves(p->basic.raw, p->basic.max_leaf + 1,
>>> + ARRAY_SIZE(p->basic.raw) - 1);
>>> +
>>> + if ( p->basic.max_leaf < 4 )
>>> + memset(p->cache.raw, 0, sizeof(p->cache.raw));
>>> + else
>>> + {
>>> + for ( i = 0; (i < ARRAY_SIZE(p->cache.raw) &&
>>> + p->cache.subleaf[i].type); ++i )
>>> + ;
>>> +
>>> + zero_leaves(p->cache.raw, i + 1,
>>> + ARRAY_SIZE(p->cache.raw) - 1);
>> Do you really want "i + 1" here? Wouldn't it be better to fully zap
>> subleaf i as well, when its type is zero? Same for leaf 0xb then.
>
> This is an awkward corner case which the manual doesn't cover adequately.
>
> "If ECX contains an invalid sub leaf index, EAX/EBX/ECX/EDX return 0.
> Sub-leaf index n+1 is invalid if sub-leaf n returns EAX[4:0] as 0."
>
> which makes the leaf with type 0 valid, rather than invalid.
>
> That said, from a "how it is likely to work in hardware" point of view,
> I highly suspect that a real CPUID instruction doesn't store anything
> for the first leaf with type 0, and relies on the "return all zeroes"
> property.
>
> Therefore, I will follow your suggestion and adjust the testcases
> accordingly.
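>
> Roughly (a sketch only; the scan loop already leaves i pointing at the
> first type-0 subleaf, so the zeroing can simply start there rather than
> at i + 1):
>
>     for ( i = 0; (i < ARRAY_SIZE(p->cache.raw) &&
>                   p->cache.subleaf[i].type); ++i )
>         ;
>
>     zero_leaves(p->cache.raw, i,
>                 ARRAY_SIZE(p->cache.raw) - 1);
>
> and similarly for the leaf 0xb handling.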
>
>>
>>> + }
>>> +
>>> + if ( p->basic.max_leaf < 7 )
>>> + memset(p->feat.raw, 0, sizeof(p->feat.raw));
>>> + else
>>> + zero_leaves(p->feat.raw, p->feat.max_subleaf + 1,
>>> + ARRAY_SIZE(p->feat.raw) - 1);
>>> +
>>> + if ( p->basic.max_leaf < 0xb )
>>> + memset(p->topo.raw, 0, sizeof(p->topo.raw));
>>> + else
>>> + {
>>> + for ( i = 0; (i < ARRAY_SIZE(p->topo.raw) &&
>>> + p->topo.subleaf[i].type); ++i )
>>> + ;
>>> +
>>> + zero_leaves(p->topo.raw, i + 1,
>>> + ARRAY_SIZE(p->topo.raw) - 1);
>>> + }
>>> +
>>> + if ( p->basic.max_leaf < 0xd || !cpuid_policy_xstates(p) )
>>> + memset(p->xstate.raw, 0, sizeof(p->xstate.raw));
>>> + else
>>> + {
>>> + /* First two leaves always valid. Rest depend on xstates. */
>>> + i = max(2, 64 - __builtin_clzll(cpuid_policy_xstates(p)));
>>> +
>>> + zero_leaves(p->xstate.raw, i,
>>> + ARRAY_SIZE(p->xstate.raw) - 1);
>>> + }
>> In x86_cpuid_policy_fill_native() you're using 63 as the loop
>> bound, guaranteeing to ignore bit 63 in case it gains a meaning.
>
> It is currently "Reserved for XCR0 bit vector expansion", which will
> almost certainly be used in a similar manner to the
> secondary_exec_control_enable bit in VT-x.
>
>> Here and in x86_cpuid_copy_to_buffer() you don't. I'm slightly
>> worried that these code sequences will need changing (with no
>> way to easily notice) when CPUID_GUEST_NR_XSTATE gets
>> increased. But I'm not going to insist - for now the code is fine
>> as is (afaict).
>
> I think the code sequences are going to have to change no matter what.
> It is also safe at the moment because of the ARRAY_SIZE() expression
> stopping at bit 62, which was LWP.
>
> If this were to change, it would also have to include a min(63, ...)
> expression, because 64 - clz() is the correct expression for finding the
> first leaf to clobber in the general case.
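>
> As a standalone illustration of the arithmetic (the xstates value here
> is made up for the example):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         /* XCR0 = x87 | SSE | AVX, i.e. bits 0-2 set. */
>         unsigned long long xstates = 0x7ULL;
>
>         /*
>          * 64 - clzll() is (index of the highest set bit) + 1, i.e. the
>          * first xstate subleaf which is out of range.
>          */
>         unsigned int first = 64 - __builtin_clzll(xstates);
>
>         printf("%u\n", first); /* prints 3: subleaves 0-2 stay valid */
>
>         return 0;
>     }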
>
> Would a BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63) be ok in the
> interim? I'd prefer where possible to not build in assumptions based on
> the array size.
Yes, that'd be fine with me. Perhaps, albeit unrelated, I could
talk you into also inserting such a statement at the other place
identified?
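
I.e. something along these lines (a sketch; exact placement at your
discretion):

    /* 64 - clzll() below relies on bit 63 remaining unused. */
    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.raw) > 63);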
With the agreed-upon adjustments made:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
Jan