Re: [Xen-devel] [PATCH] x86/cpuid: fix dom0 crash on skylake machine
On 01/06/16 14:28, Jan Beulich wrote:
>>>> On 01.06.16 at 15:03, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 01/06/16 13:01, Jan Beulich wrote:
>>>>>> I want to adjust the representation of cpuid information in struct
>>>>>> domain. The current loop in domain_cpuid() causes an O(N) overhead for
>>>>>> every query, which is very poor for actions which really should be a
>>>>>> single bit test at a fixed offset.
>>>>>>
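(To illustrate the representation point - a rough standalone sketch with
made-up structure and helper names, not the actual Xen code:)

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Today: each query walks the stored leaves, O(N) per lookup. */
struct stored_leaf {
    uint32_t leaf, subleaf;
    uint32_t eax, ebx, ecx, edx;
};

static const struct stored_leaf *find_leaf(const struct stored_leaf *tbl,
                                           unsigned int nr,
                                           uint32_t leaf, uint32_t subleaf)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( tbl[i].leaf == leaf && tbl[i].subleaf == subleaf )
            return &tbl[i];

    return NULL;
}

/* Wanted: a flat featureset bitmap, so "does the guest have feature X?"
 * becomes a single bit test at a fixed offset. */
static bool guest_has_feature(const uint32_t *featureset, unsigned int feat)
{
    return featureset[feat / 32] & (1u << (feat % 32));
}
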
>>>>>> This needs to be combined with properly splitting the per-domain and
>>>>>> per-vcpu information, which requires knowing the expected vcpu topology
>>>>>> during domain creation.
>>>>>>
>>>>>> On top of that, there needs to be verification logic to check the
>>>>>> correctness of information passed from the toolstack.
>>>>>>
>>>>>> All of these areas are covered in the "known issues" section of the
>>>>>> feature doc, and I do plan to fix them all. However, it isn't a couple
>>>>>> of hours worth of work.
>>>>> All understood, yet not to the point: the original remark was that
>>>>> the XSTATE handling itself could be done better with a far smaller
>>>>> change, at least afaict (without having tried).
>>>> In which case I don't know what you were suggesting.
>>> Make {hvm,pv}_cpuid() invoke themselves recursively to
>>> determine what bits to mask off from CPUID[0xd].EAX.
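(i.e. something along these lines - a sketch with locally-defined
constants and an assumed hvm_cpuid() in/out-parameter form, not actual
patch content:)

#define XSTATE_FP      (1ULL << 0)
#define XSTATE_SSE     (1ULL << 1)
#define XSTATE_YMM     (1ULL << 2)
#define XSTATE_BNDREGS (1ULL << 3)
#define XSTATE_BNDCSR  (1ULL << 4)

#define CPUID1_ECX_AVX (1u << 28)   /* CPUID.1:ECX.AVX */
#define CPUID7_EBX_MPX (1u << 14)   /* CPUID.7(0):EBX.MPX */

/* Re-enter the guest cpuid path to see which features are visible, and
 * derive the permitted xstate components from that. */
static uint64_t guest_xstates(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    uint64_t states = XSTATE_FP | XSTATE_SSE;   /* always present */

    hvm_cpuid(1, &eax, &ebx, &ecx, &edx);
    if ( ecx & CPUID1_ECX_AVX )
        states |= XSTATE_YMM;

    ecx = 0;                                    /* subleaf 0 */
    hvm_cpuid(7, &eax, &ebx, &ecx, &edx);
    if ( ebx & CPUID7_EBX_MPX )
        states |= XSTATE_BNDREGS | XSTATE_BNDCSR;

    /* (leaf 0x80000001 would be consulted similarly, e.g. for AMD LWP.) */

    return states;
}

The leaf 0xd handler would then mask its EAX output with the result.
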
>> So that would work. However, to do this, you need to query leaves 1,
>> 0x80000001 and 7, all of which will hit the O(N) loop in domain_cpuid().
>>
>> Luckily, none of those specific paths further recurse into {hvm,pv}_cpuid().
>>
>> I am unsure which to go with. My gut feeling is that this would be quite
>> a performance hit, but I have no evidence either way. OTOH, it will give
>> the correct answer, rather than an approximation.
> Not least because I believe performance is very nearly irrelevant for
> CPUID leaf 0xD invocations, I think I'd prefer correctness over
> performance (as would basically always be the case). How about
> you?
Right - this is the alternative: doing the calculation in
{hvm,pv}_cpuid(), on top of your cleanup from yesterday.
There is a bugfix on the PV side (pv_featureset[FEATURESET_1c] should be
taken into account even for control/hardware domain accesses), and a
pre-emptive fix on the HVM side to avoid advertising any XSS states, as
we don't support any yet.
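A rough outline of the shape, using names from the featureset series
(pv_featureset, FEATURESET_*, cpufeat_mask(), xfeature_mask, XSTATE_*)
rather than quoting the attached patch:

/* PV leaf 0xd: clip the host's xfeature_mask by the features actually
 * visible to this guest, now including the control/hardware domain. */
static uint64_t pv_guest_xstates(void)
{
    uint64_t states = xfeature_mask;

    if ( !(pv_featureset[FEATURESET_1c] & cpufeat_mask(X86_FEATURE_AVX)) )
        states &= ~XSTATE_YMM;
    if ( !(pv_featureset[FEATURESET_7b0] & cpufeat_mask(X86_FEATURE_MPX)) )
        states &= ~(XSTATE_BNDREGS | XSTATE_BNDCSR);

    return states;
}

The HVM side does the equivalent against hvm_featureset, and in addition
zeroes the XSS mask reported in leaf 0xd subleaf 1 (ecx:edx), since no
XSS-based states are supported yet.
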
Thoughts?
~Andrew
Attachment:
0001-xen-x86-Clip-guests-view-of-xfeature_mask.patch