
Re: [PATCH 2/2] x86/cpuid: support LFENCE always serializing CPUID bit


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 14 Apr 2021 14:41:07 +0100
  • Cc: Wei Liu <wl@xxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 14 Apr 2021 13:41:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14/04/2021 14:24, Jan Beulich wrote:
> On 14.04.2021 15:05, Andrew Cooper wrote:
>> On 14/04/2021 13:57, Jan Beulich wrote:
>>> On 14.04.2021 13:04, Roger Pau Monne wrote:
>>>> @@ -264,6 +265,38 @@ struct cpuid_policy
>>>>              };
>>>>              uint32_t nc:8, :4, apic_id_size:4, :16;
>>>>              uint32_t /* d */:32;
>>>> +
>>>> +            uint64_t :64, :64; /* Leaf 0x80000009. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000b. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000c. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000d. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000e. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000000f. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000010. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000011. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000012. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000013. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000014. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000015. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000016. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000017. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000018. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000019 - TLB 1GB Identifiers. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001a - Performance related info. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001b - IBS feature information. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001c. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001d - Cache properties. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001e - Extd APIC/Core/Node IDs. */
>>>> +            uint64_t :64, :64; /* Leaf 0x8000001f - AMD Secure Encryption. */
>>>> +            uint64_t :64, :64; /* Leaf 0x80000020 - Platform QoS. */
>>>> +
>>>> +            /* Leaf 0x80000021 - Extended Feature 2 */
>>>> +            union {
>>>> +                uint32_t e21a;
>>>> +                struct { DECL_BITFIELD(e21a); };
>>>> +            };
>>>> +            uint32_t /* b */:32, /* c */:32, /* d */:32;
>>>>          };
>>>>      } extd;
>>> Due to the effect of this on what guests get to see, I think this
>>> wants to take my "x86/CPUID: shrink max_{,sub}leaf fields according
>>> to actual leaf contents" as a prereq, which in turn may better
>>> remain on top of "x86/CPUID: adjust extended leaves out of range
>>> clearing" (both are neighbors in that over-4-months-old series,
>>> fair parts of which could imo go in irrespective of the unsettled
>>> dispute on xvmalloc() - unfortunately I had made that patch 2 of
>>> the series, not expecting it to be blocked for so long, thereby
>>> presumably signaling to others that the rest of the series was
>>> also blocked).
>> There is no shrinking to be done in this case.  The bit is set across
>> the board on AMD/Hygon hardware, even on older parts.
>>
>> What does need changing however is the logic to trim max_extd_leaf down
>> to what hardware supports, so the bit is visible on Rome/older
>> hardware.  I.e. after this change, all VMs should get 0x80000021 by
>> default on AMD hardware.
> As to this last sentence - not really. Only if we managed to set the
> flag (if it needs setting).

... or we find it already set.  Remember that we don't even offer an
option to let the user avoid this behaviour.
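
(For the archives: the probe-and-set logic is roughly the following
sketch - illustrative only, not the exact in-tree code.  The MSR index
and bit follow AMD's DE_CFG definition; rdmsr_safe()/wrmsr_safe()
(returning 0 on success) and setup_force_cpu_cap() are assumed to be
the usual Xen helpers:)

    #define MSR_AMD64_DE_CFG              0xc0011029
    #define AMD64_DE_CFG_LFENCE_SERIALISE (1ULL << 1)

    static void __init probe_lfence_dispatch(void)
    {
        uint64_t val;

        if ( rdmsr_safe(MSR_AMD64_DE_CFG, val) )
            return; /* MSR not readable - nothing we can do. */

        /* If the bit is clear, try to set it, then re-read to see
         * whether the write actually stuck (it may not, e.g. when
         * virtualised and the MSR is read-only). */
        if ( !(val & AMD64_DE_CFG_LFENCE_SERIALISE) &&
             !wrmsr_safe(MSR_AMD64_DE_CFG,
                         val | AMD64_DE_CFG_LFENCE_SERIALISE) )
            rdmsr_safe(MSR_AMD64_DE_CFG, val);

        if ( val & AMD64_DE_CFG_LFENCE_SERIALISE )
            setup_force_cpu_cap(X86_FEATURE_LFENCE_DISPATCH);
    }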

The only case where we'll boot with lfence not being dispatch
serialising is when we're virtualised under a pre-2018 hypervisor which
has no Spectre knowledge.  There are more important things in life than
to worry about this case.
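
In policy terms, the visibility fix amounts to something like the
following (a sketch against the field names in this patch, with p
being the struct cpuid_policy under construction):

    /* If anything is advertised in leaf 0x80000021 EAX (e.g. the
     * LFENCE-always-serialising bit), make the leaf reachable even on
     * hardware (Rome and older) whose own maximum leaf is lower. */
    if ( p->extd.e21a && p->extd.max_leaf < 0x80000021 )
        p->extd.max_leaf = 0x80000021;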

> It's a two-way thing really: If the flag ends up set, I agree there's
> no need to shrink max_extd_leaf. But if the flag ends up clear, the
> issue we have today (hence my patch) becomes worse: Trailing all-zero
> leaves would result, because we don't properly reduce the maximum from
> what hardware reports to what is actually populated. In any event - I
> think I'd much rather see issues with my patch pointed out, if any.

I don't have a problem with the shrinking change in principle, but it is
definitely not a prereq to this change.

Trailing zeroes aren't going to cause guests to malfunction, even if
we'd ideally be neater about the result.
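
(The shrinking in question boils down to scanning off trailing
all-zero leaves - sketch only, assuming the raw[] view of the extd
union, index i mapping to leaf 0x80000000 + i, and max_leaf already
clamped to the array bounds:)

    static void shrink_max_extd_leaf(struct cpuid_policy *p)
    {
        unsigned int i = p->extd.max_leaf & 0xffff;

        /* Step backwards past leaves with all four registers zero. */
        while ( i > 0 &&
                !(p->extd.raw[i].a | p->extd.raw[i].b |
                  p->extd.raw[i].c | p->extd.raw[i].d) )
            --i;

        p->extd.max_leaf = 0x80000000 | i;
    }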

~Andrew