
Re: [XEN][PATCH 3/3] x86/hvm: vmx: refactor cache disable mode data


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
  • Date: Thu, 30 Oct 2025 14:26:41 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>, Teddy Astie <teddy.astie@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 30 Oct 2025 12:27:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Jan,

On 30.10.25 13:23, Jan Beulich wrote:
On 30.10.2025 00:54, Grygorii Strashko wrote:
From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>

The Cache Disable mode data is used only by VMX code, so move it from
common HVM structures into VMX specific structures:
- move "uc_lock", "is_in_uc_mode" fields from struct hvm_domain to struct
vmx_domain;
- move "cache_mode" field from struct hvm_vcpu to struct vmx_vcpu.

Hence, the "is_in_uc_mode" field is used directly in mm/shadow/multi.c
_sh_propagate(), introduce the hvm_is_in_uc_mode() macro to avoid direct
access to this field and account for INTEL_VMX configuration.

Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>

Requested-by: Andrew ?

ok
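
For reference, the hvm_is_in_uc_mode() wrapper could be a minimal sketch along
these lines (assuming a CONFIG_INTEL_VMX guard and the field being reachable as
d->arch.hvm.vmx.is_in_uc_mode; exact field naming per the discussion further down):

#ifdef CONFIG_INTEL_VMX
/* Sketch: read the VMX-private UC-mode flag of domain d. */
#define hvm_is_in_uc_mode(d) ((d)->arch.hvm.vmx.is_in_uc_mode)
#else
#define hvm_is_in_uc_mode(d) false
#endif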


--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -583,6 +583,7 @@ static int cf_check vmx_domain_initialise(struct domain *d)
      int rc;

      d->arch.ctxt_switch = &csw;
+    spin_lock_init(&d->arch.hvm.vmx.uc_lock);

I don't think this is the best place; in any event it wants to be separated from
adjacent code by a blank line. I'd prefer if it was put ...

      /*
       * Work around CVE-2018-12207?  The hardware domain is already permitted

... below this CVE workaround.

ok
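
I.e. something like (sketch only, assuming the exec_sp assignment directly
follows that comment):

      /*
       * Work around CVE-2018-12207?  The hardware domain is already permitted
       * ...
       */
      d->arch.hvm.vmx.exec_sp = is_hardware_domain(d) || opt_ept_exec_sp;

+    spin_lock_init(&d->arch.hvm.vmx.uc_lock);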


--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -46,7 +46,9 @@ struct ept_data {
  #define _VMX_DOMAIN_PML_ENABLED    0
  #define VMX_DOMAIN_PML_ENABLED     (1ul << _VMX_DOMAIN_PML_ENABLED)
+
  struct vmx_domain {
+    spinlock_t uc_lock;
      mfn_t apic_access_mfn;
      /* VMX_DOMAIN_* */
      unsigned int status;

Any reason to make this the very first field of the struct? It might better
live adjacent to the other field you move; there's going to be some padding
anyway, afaict.

I've tried to place the fields into holes and checked with pahole.

With the current change it is:
struct vmx_domain {
        spinlock_t                 uc_lock;              /*     0     8 */
        mfn_t                      apic_access_mfn;      /*     8     8 */
        unsigned int               status;               /*    16     4 */
        _Bool                      exec_sp;              /*    20     1 */
        _Bool                      is_in_uc_mode;        /*    21     1 */

        /* size: 24, cachelines: 1, members: 5 */
        /* padding: 2 */
        /* last cacheline: 24 bytes */
};

It seems they can be grouped like below:
struct vmx_domain {
        mfn_t                      apic_access_mfn;      /*     0     8 */
        unsigned int               status;               /*     8     4 */
        _Bool                      exec_sp;              /*    12     1 */
        _Bool                      is_in_uc_mode;        /*    13     1 */

        /* XXX 2 bytes hole, try to pack */

        spinlock_t                 uc_lock;              /*    16     8 */

        /* size: 24, cachelines: 1, members: 5 */
        /* sum members: 22, holes: 1, sum holes: 2 */
        /* last cacheline: 24 bytes */
};
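
Declaration-wise that second layout would be something like (sketch, with the
comments shortened):

struct vmx_domain {
    mfn_t apic_access_mfn;
    /* VMX_DOMAIN_* */
    unsigned int status;
    /* Work around CVE-2018-12207 ... */
    bool exec_sp;
    /* Set when a vCPU is in no-fill mode or MTRR/PAT differ between vCPUs. */
    bool is_in_uc_mode;
    /* Protects is_in_uc_mode. */
    spinlock_t uc_lock;
};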


@@ -56,6 +58,12 @@ struct vmx_domain {
       * around CVE-2018-12207 as appropriate.
       */
      bool exec_sp;
+    /*
+     * If one of vcpus of this domain is in no_fill_mode or
+     * mtrr/pat between vcpus is not the same, set is_in_uc_mode.
+     * Protected by uc_lock.
+     */
+    bool is_in_uc_mode;

Imo, while moving, the is_ prefix could also be dropped. It doesn't convey any
extra information on top of the in_, and I think we typically prefer is_* for
function(-like) predicates. (I.e. in hvm_is_in_uc_mode() I'm fine with the
name.)

ok


@@ -93,6 +101,9 @@ struct pi_blocking_vcpu {
      spinlock_t           *lock;
  };
+#define NORMAL_CACHE_MODE          0
+#define NO_FILL_CACHE_MODE         2

As you necessarily touch all use sites, could we switch to the more common
CACHE_MODE_* naming on this occasion? Also imo these want to live ...

@@ -156,6 +167,9 @@ struct vmx_vcpu {
      uint8_t              lbr_flags;
+
+    /* Which cache mode is this VCPU in (CR0:CD/NW)? */
+    uint8_t              cache_mode;

... right next to the field they belong to.

ok.
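
So the end result would presumably look something like (sketch, using the
suggested CACHE_MODE_* naming and keeping the constants right ahead of the
field they belong to):

+    /* Which cache mode is this VCPU in (CR0:CD/NW)? */
+#define CACHE_MODE_NORMAL    0
+#define CACHE_MODE_NO_FILL   2
+    uint8_t              cache_mode;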

--
Best regards,
-grygorii
