
Re: [Xen-devel] [PATCH 1/2] x86/cpu/intel: Clear cache self-snoop capability in CPUs with known errata


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 18 Jul 2019 13:07:19 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 18 Jul 2019 13:08:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 1/2] x86/cpu/intel: Clear cache self-snoop capability in CPUs with known errata

On 18.07.2019 14:23, Andrew Cooper wrote:
> On 18/07/2019 13:09, Jan Beulich wrote:
>> --- a/xen/arch/x86/cpu/intel.c
>> +++ b/xen/arch/x86/cpu/intel.c
>> @@ -15,6 +15,32 @@
>>    #include "cpu.h"
>>    
>>    /*
>> + * Processors which have self-snooping capability can handle conflicting
>> + * memory types across CPUs by snooping their own caches. However, there
>> + * exist CPU models in which having conflicting memory types still leads to
>> + * unpredictable behavior, machine check errors, or hangs. Clear this
>> + * feature to prevent its use on machines with known errata.
>> + */
>> +static void __init check_memory_type_self_snoop_errata(void)
>> +{
>> +    switch (boot_cpu_data.x86_model) {
>> +    case 0x0f: /* Merom */
>> +    case 0x16: /* Merom L */
>> +    case 0x17: /* Penryn */
>> +    case 0x1d: /* Dunnington */
>> +    case 0x1e: /* Nehalem */
>> +    case 0x1f: /* Auburndale / Havendale */
>> +    case 0x1a: /* Nehalem EP */
>> +    case 0x2e: /* Nehalem EX */
>> +    case 0x25: /* Westmere */
>> +    case 0x2c: /* Westmere EP */
>> +    case 0x2a: /* SandyBridge */
> 
> It would have been nice if the errata had actually been identified...

Indeed; I hope you don't expect me to go hunt them down. I'm only
cloning a Linux commit here, after all.

>> +            setup_clear_cpu_cap(X86_FEATURE_SS);
> 
> I'm regretting exposing SS to guests at this point.
> 
> As this stands, it will result in a migration compatibility issue,
> because updating Xen will cause a feature to disappear.  If we had a
> default vs full policy split, this would be easy enough to work around
> in a compatible way.  I wonder if there is anything clever we can do in
> the meantime as a stopgap workaround.

Should we perhaps introduce X86_FEATURE_XEN_SELFSNOOP, just like
we do for SMEP and SMAP, such that we can leave the real one alone?
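
Something along these lines, perhaps (a rough sketch only; the synthetic
feature name, its declaration alongside XEN_SMEP / XEN_SMAP, and the exact
hook placement are all assumptions, not an actual implementation):

/*
 * Hypothetical synthetic feature, declared next to X86_FEATURE_XEN_SMEP
 * and X86_FEATURE_XEN_SMAP, and tested by Xen internally in place of the
 * real X86_FEATURE_SS.
 */
static void __init probe_self_snoop(void)
{
    /* Start from what the hardware reports... */
    if (!boot_cpu_has(X86_FEATURE_SS))
        return;

    /* ...but only trust it on models without known errata. */
    switch (boot_cpu_data.x86_model) {
    case 0x0f: case 0x16: case 0x17: case 0x1d: /* Core 2 era */
    case 0x1a: case 0x1e: case 0x1f: case 0x2e: /* Nehalem */
    case 0x25: case 0x2c:                       /* Westmere */
    case 0x2a:                                  /* Sandy Bridge */
        /*
         * Don't rely on self-snooping ourselves, but leave the real SS
         * feature untouched so guests continue to see it.
         */
        return;
    }

    setup_force_cpu_cap(X86_FEATURE_XEN_SELFSNOOP);
}

Guest-visible state would then stay unchanged across an update, and only
Xen's own reliance on self-snooping would be gated on the errata list.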

Jan