
Re: [PATCH] x86/cpuid: Clobber CPUID leaves 0x800000{1d..20}


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 7 Apr 2022 15:00:23 +0000
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 07 Apr 2022 15:00:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] x86/cpuid: Clobber CPUID leaves 0x800000{1d..20}

On 07/04/2022 15:27, Jan Beulich wrote:
> On 07.04.2022 03:01, Andrew Cooper wrote:
>> c/s 1a914256dca5 increased the AMD max leaf from 0x8000001c to 0x80000021, 
>> but
>> did not adjust anything in the calculate_*_policy() chain.  As a result, on
>> hardware supporting these leaves, we read the real hardware values into the
>> raw policy, then copy into host, and all the way into the PV/HVM default
>> policies.
>>
>> All 4 of these leaves have enable bits (first two by TopoExt, next by SEV,
>> next by PQOS), so any software following the rules is fine and will leave 
>> them
>> alone.  However, leaf 0x8000001d takes a subleaf input and at least two
>> userspace utilities have been observed to loop indefinitely under Xen 
>> (clearly
>> waiting for eax to report "no more cache levels").
>>
>> Such userspace is buggy, but Xen's behaviour isn't great either.
> Just another remark, since I stumbled across this again while preparing
> the backports: I'm not convinced such user space is to be called buggy.
> Generic CPUID dumping tools won't normally look for particular features.
> Their knowledge is usually limited to knowing where sub-leaves exist and
> how to determine how many of them there are. Anything beyond that would
> make a supposedly simple tool complicated.

It's basic input sanitisation.

If you have elected to ignore the rules AMD sets out for correctly
interpreting the data, then you get to keep all the pieces when writing an
unbounded

do {
    x = read_untrusted_input();
} while ( x != 0 );

loop.  The only reason zeroing the data here unbreaks userspace is
because it aliases the loop exit condition.
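
To illustrate, a sanitised enumeration of leaf 0x8000001d looks something
like this (a minimal sketch only; the TopoExt check, the subleaf cap and the
GCC/Clang <cpuid.h> helpers are illustrative assumptions, not taken from any
particular tool):

#include <stdio.h>
#include <cpuid.h>                /* GCC/Clang __get_cpuid(), __cpuid_count() */

#define MAX_CACHE_SUBLEAVES 16    /* arbitrary sanity cap on untrusted input */

static void dump_amd_cache_topology(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x8000001d is only defined when TopoExt (0x80000001:ECX[22]) is set. */
    if ( !__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) ||
         !(ecx & (1u << 22)) )
        return;

    /* Stop when EAX[4:0] reports "no more cache levels", and bound the loop anyway. */
    for ( unsigned int subleaf = 0; subleaf < MAX_CACHE_SUBLEAVES; subleaf++ )
    {
        __cpuid_count(0x8000001d, subleaf, eax, ebx, ecx, edx);

        if ( (eax & 0x1f) == 0 )
            break;

        printf("subleaf %u: cache type %u, level %u\n",
               subleaf, eax & 0x1f, (eax >> 5) & 0x7);
    }
}

int main(void)
{
    dump_amd_cache_topology();
    return 0;
}

Zeroed leaves terminate such a loop on the very first check, which is why
the clobbering in this patch happens to unbreak the tools in question.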

~Andrew

 

