
Re: [PATCH] x86/cpuid: Clobber CPUID leaves 0x800000{1d..20}


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 7 Apr 2022 10:25:30 +0000
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 07 Apr 2022 10:26:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] x86/cpuid: Clobber CPUID leaves 0x800000{1d..20}

On 07/04/2022 07:26, Jan Beulich wrote:
> On 07.04.2022 03:01, Andrew Cooper wrote:
>> c/s 1a914256dca5 increased the AMD max leaf from 0x8000001c to 0x80000021, but
>> did not adjust anything in the calculate_*_policy() chain.  As a result, on
>> hardware supporting these leaves, we read the real hardware values into the
>> raw policy, then copy into host, and all the way into the PV/HVM default
>> policies.
>>
>> All 4 of these leaves have enable bits (first two by TopoExt, next by SEV,
>> next by PQOS), so any software following the rules is fine and will leave them
>> alone.  However, leaf 0x8000001d takes a subleaf input and at least two
>> userspace utilities have been observed to loop indefinitely under Xen (clearly
>> waiting for eax to report "no more cache levels").
>>
>> Such userspace is buggy, but Xen's behaviour isn't great either.
>>
>> In the short term, clobber all information in these leaves.  This is a giant
>> bodge, but there are complexities with implementing all of these leaves
>> properly.
>>
>> Fixes: 1a914256dca5 ("x86/cpuid: support LFENCE always serialising CPUID bit")
>> Link: https://github.com/QubesOS/qubes-issues/issues/7392
>> Reported-by: fosslinux <fosslinux@aussies.space>
>> Reported-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Thanks.

>
>> It turns out that Intel leaf 4 and AMD leaf 0x8000001d are *almost* identical.
>> They differ by the "complex" bit in edx, and the $X-per-cache fields in the
>> top of eax (Intel is threads-per-cache, AMD is cores-per-cache and lacks the
>> cores-per-package field).
>>
>> As neither vendor implements the other's version, I'm incredibly tempted to
>> reuse p->cache for both, rather than doubling the storage space.  Reading the
>> data out is easy to key on p->extd.topoext.  Writing the data can be done
>> without any further complexity if we simply trust the sending side to have its
>> indices the proper way around.  Particularly, this avoids needing to ensure
>> that p->extd.topoext is out of order and at the head of the stream.  Thoughts?
> Sounds quite reasonable to me. I guess the main risk is for new things
> to appear on either vendor's side in a way breaking the overlaying
> approach. But I guess that's not a significant risk.

Neither vendor is going to change it in ways incompatible with how they
currently expose it, and it's data that Xen doesn't particularly care
about - we never interpret it on behalf of the guest.

When we fix the toolstack side of things to calculate topology properly,
the $foo-per-cache fields need adjusting, but that logic will be fine to
switch ( vendor ) on.  Since writing this, I found AMD's
cores-per-package and it's in the adjacent leaf with a wider field.
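To make the vendor split concrete, a minimal sketch of the kind of
switch ( vendor ) decode I mean; the helper names are hypothetical,
not existing Xen functions, and the eax bit positions follow the
published leaf 4 / leaf 0x8000001d layouts:

```c
#include <assert.h>
#include <stdint.h>

enum vendor { VENDOR_INTEL, VENDOR_AMD };

/*
 * Bits 25:14 of eax hold the sharing count in both layouts
 * (threads-per-cache on Intel leaf 4, cores-per-cache on AMD
 * leaf 0x8000001d), encoded as N-1.
 */
static unsigned int x_per_cache(uint32_t eax)
{
    return ((eax >> 14) & 0xfff) + 1;
}

/*
 * Only Intel leaf 4 carries cores-per-package (eax bits 31:26,
 * encoded as N-1); AMD's equivalent lives in a different leaf,
 * so the AMD case has nothing to decode here.
 */
static unsigned int cores_per_package(enum vendor v, uint32_t eax)
{
    switch ( v )
    {
    case VENDOR_INTEL:
        return ((eax >> 26) & 0x3f) + 1;

    case VENDOR_AMD:
        return 0; /* Not present in this leaf. */
    }

    return 0;
}
```

i.e. only the field decode needs to be vendor-aware; the storage
underneath stays shared.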

> As to ordering dependencies: Are there any in reality? Neither vendor
> implements the other vendor's leaf, so there's only going to be one in
> the stream anyway, and which one it is can be disambiguated by having
> seen leaf 0 alone.

The complexity is what (if anything) we do in
x86_cpuid_copy_from_buffer().  I've done some prototyping, and the
easiest option is to accept both 4 and e1Dd in a latest-takes-precedence
manner, and not create any interlinkage with the topoext bit.
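Roughly what the prototype does, in stripped-down form; the struct and
function names below are simplified stand-ins for the real
x86_cpuid_copy_from_buffer() machinery, not the actual Xen code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_CACHE_SUBLEAVES 4

struct cpuid_leaf { uint32_t a, b, c, d; };

/* Simplified policy: one shared array backs both cache leaves. */
struct policy {
    struct cpuid_leaf cache[MAX_CACHE_SUBLEAVES];
};

/*
 * Sketch of the latest-takes-precedence rule: both Intel leaf 4 and
 * AMD leaf 0x8000001d are accepted and land in the same slot, so
 * whichever appears later in the stream wins.  Deliberately no
 * interlinkage with the topoext bit.
 */
static int copy_leaf_from_buffer(struct policy *p, uint32_t leaf,
                                 uint32_t subleaf,
                                 const struct cpuid_leaf *val)
{
    switch ( leaf )
    {
    case 4:            /* Intel cache descriptors. */
    case 0x8000001d:   /* AMD cache descriptors. */
        if ( subleaf >= MAX_CACHE_SUBLEAVES )
            return -1;
        p->cache[subleaf] = *val;
        return 0;
    }

    return -1;
}
```

Notably, this means a stream containing both leaves is tolerated rather
than rejected, which keeps the deserialise side free of ordering
dependencies.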

I've also got a pile of fixes to the unit tests so we hopefully can't
make mistakes like this again, although that will depend on getting
test-cpuid-policy running in OSSTest, which is a todo-list item that
really needs to get done.

~Andrew

 

