
Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 5 May 2021 15:53:49 +0100
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 05 May 2021 14:54:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05/05/2021 09:19, Jan Beulich wrote:
> On 04.05.2021 15:58, Andrew Cooper wrote:
>> On 04/05/2021 13:43, Jan Beulich wrote:
>>> I'm also not happy at all to see you use a literal 3 here. We have
>>> a struct for this, after all.
>>>
>>>>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>>>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>>>      }
>>>> -    else
>>>> -        xstates &= ~XSTATE_XSAVES_ONLY;
>>>>  
>>>> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>>>> +    /* Subleafs 2+ */
>>>> +    xstates &= ~XSTATE_FP_SSE;
>>>> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>>      {
>>>> -        uint64_t curr_xstate = 1ul << i;
>>>> -
>>>> -        if ( !(xstates & curr_xstate) )
>>>> -            continue;
>>>> -
>>>> -        p->xstate.comp[i].size   = xstate_sizes[i];
>>>> -        p->xstate.comp[i].offset = xstate_offsets[i];
>>>> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>>>> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
>>>> +        /*
>>>> +         * Pass through size (eax) and offset (ebx) directly.  Visibility of
>>>> +         * attributes in ecx limited by visible features in Da1.
>>>> +         */
>>>> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>>>> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>>>> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
>>> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
>>> honest. Both this and the literal 3 above make it harder to locate
>>> all the places that need changing if a new bit (like xfd) is to be
>>> added. It would be better if grep-ing for an existing field name
>>> (say "xss") would easily turn up all involved places.
>> It's specifically to reduce the number of areas needing editing when a
>> new state is added, and therefore the number of opportunities to screw
>> things up.
>>
>> As said in the commit message, I'm not even convinced that the ecx_bits
>> mask is necessary, as new attributes only come in with new behaviours of
>> new state components.
>>
>> If we choose to skip the ecx masking, then this loop body becomes even
>> simpler.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].
>>
>> Even if Intel do break with tradition, and retrofit new attributes into
>> existing subleafs, leaking them to guests won't cause anything to
>> explode (the bits are still reserved after all), and we can fix anything
>> necessary at that point.
> I don't think this would necessarily go without breakage. What if,
> assuming XFD support is in, an existing component got XFD sensitivity
> added to it?

I think that is exceedingly unlikely to happen.

>  If, like you were suggesting elsewhere, and like I had
> it initially, we used a build-time constant for XFD-affected
> components, we'd break consuming guests. The per-component XFD bit
> (to take it as an example again) also isn't, strictly speaking, tied to
> the general XFD feature flag (but to me it makes sense for us to
> enforce respective consistency). Plus, in general, the moment a flag
> is no longer reserved in the spec, it is not reserved anywhere
> anymore: An aware (newer) guest running on unaware (older) Xen ought
> to still function correctly.

They're still technically reserved, because of the masking of the XFD
bit in the feature leaf.

However, having pondered this for some time, we do need to retain the
attribute masking, because e.g. AMX && !XSAVEC is a legal (if very
unwise) combination to expose, and the align64 bits want to disappear
from the TILE state attributes.
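
To make that concrete, a minimal sketch (the XSTATE_ATTR_* names and the
p->xstate.xsavec field are placeholders/assumptions for illustration, not
identifiers verified against the tree): without XSAVEC a guest can never
observe the compacted format, so the align64 attribute means nothing to
it and wants masking out of each subleaf's ecx:

    /* Hypothetical names for the per-component attribute bits in ecx. */
    #define XSTATE_ATTR_XSS      (1u << 0) /* lives in XSS, not XCR0 */
    #define XSTATE_ATTR_ALIGN64  (1u << 1) /* 64-byte aligned when compacted */

    uint32_t ecx_bits = ~0u;

    if ( !p->xstate.xsavec )        /* e.g. AMX visible, XSAVEC hidden */
        ecx_bits &= ~XSTATE_ATTR_ALIGN64;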

Also, in terms of implementation, the easiest way to do something
plausible here is a dependency chain of XSAVE => XSAVEC (implies align64
masking) => XSAVES (implies xss masking) => CET_*.

XFD would depend on XSAVE, and would imply masking the xfd attribute.
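
Sketching that chain end-to-end (again with assumed names: p->xstate.xsaves,
p->xstate.xfd and the XSTATE_ATTR_* constants from the snippet above are
illustrative only):

    #define XSTATE_ATTR_XFD      (1u << 2) /* hypothetical xfd attribute bit */

    /* In recalculate_xstate(), with xstates and i as in the patch. */
    uint32_t ecx_bits = 0;

    if ( p->xstate.xsavec )         /* XSAVE => XSAVEC */
        ecx_bits |= XSTATE_ATTR_ALIGN64;

    if ( p->xstate.xsaves )         /* XSAVEC => XSAVES (CET_* depends on this) */
        ecx_bits |= XSTATE_ATTR_XSS;

    if ( p->xstate.xfd )            /* XSAVE => XFD */
        ecx_bits |= XSTATE_ATTR_XFD;

    for_each_set_bit ( i, &xstates, 63 )
        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;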

This still leaves the broken corner case of the dynamic compressed size,
but I think I can live with that to avoid the added complexity of trying
to force XSAVEC == XSAVES.

~Andrew
