
Re: [PATCH] x86/hvm: Widen condition for is_hvm_pv_evtchn_vcpu()


  • To: Jane Malalane <jane.malalane@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 13 May 2022 17:39:58 +0200
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Fri, 13 May 2022 15:40:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, May 11, 2022 at 04:14:23PM +0100, Jane Malalane wrote:
> Have is_hvm_pv_evtchn_vcpu() return true for vector callbacks for
> evtchn delivery set up on a per-vCPU basis via
> HVMOP_set_evtchn_upcall_vector.
> 
> is_hvm_pv_evtchn_vcpu() returning true is a condition for setting up
> physical IRQ to event channel mappings.

I would add something like:

The naming of the CPUID bit is intentionally generic, only advertising
that upcall support is available.  That's done so that the define name
doesn't get overly long, like
XEN_HVM_CPUID_UPCALL_VECTOR_SUPPORTS_PIRQ or some such.

Guests that don't care about physical interrupts routed over event
channels can just test for the availability of the hypercall directly
(HVMOP_set_evtchn_upcall_vector) without checking the CPUID bit.
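
A minimal sketch of that guest-side decision (the bit value is an
assumption following the existing pattern in the public header, where
XEN_HVM_CPUID_EXT_DEST_ID is bit 5; can_use_percpu_upcall() and its
parameters are hypothetical names, and the hypercall probe is modelled
by a boolean since a real hypercall can't be issued in a sketch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed bit position; check xen/include/public/arch-x86/cpuid.h. */
#define XEN_HVM_CPUID_UPCALL_VECTOR (1u << 6)

/*
 * A guest that routes physical IRQs over event channels must see the
 * CPUID bit before using per-vCPU upcalls; a guest that doesn't care
 * about PIRQs can instead just attempt the hypercall and check that
 * it succeeds (modelled here by hvmop_available).
 */
static bool can_use_percpu_upcall(uint32_t leaf4_eax, bool needs_pirq,
                                  bool hvmop_available)
{
    if ( needs_pirq )
        return leaf4_eax & XEN_HVM_CPUID_UPCALL_VECTOR;

    return hvmop_available;
}
```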

> 
> Signed-off-by: Jane Malalane <jane.malalane@xxxxxxxxxx>
> ---
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
> CC: Wei Liu <wl@xxxxxxx>
> ---
>  xen/arch/x86/include/asm/domain.h   | 8 +++++++-
>  xen/arch/x86/traps.c                | 3 +++
>  xen/include/public/arch-x86/cpuid.h | 2 ++
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/include/asm/domain.h 
> b/xen/arch/x86/include/asm/domain.h
> index 35898d725f..f044e0a492 100644
> --- a/xen/arch/x86/include/asm/domain.h
> +++ b/xen/arch/x86/include/asm/domain.h
> @@ -14,8 +14,14 @@
>  
>  #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
>  
> +/*
> + * Set to true if either the global vector-type callback or per-vCPU
> + * LAPIC vectors are used. Assume all vCPUs will use

I think you should remove LAPIC here.  There's no such thing as 'LAPIC
vectors'; it's just that the old mechanism bypassed the LAPIC EOI.

> + * HVMOP_set_evtchn_upcall_vector as long as the initial vCPU does.
> + */
>  #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
> -        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
> +        ((d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector || \
> +         (d)->vcpu[0]->arch.hvm.evtchn_upcall_vector))
>  #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
>  #define is_domain_direct_mapped(d) ((void)(d), 0)
>  
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 25bffe47d7..2c51faab2c 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1152,6 +1152,9 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, 
> uint32_t leaf,
>          res->a |= XEN_HVM_CPUID_DOMID_PRESENT;
>          res->c = d->domain_id;
>  
> +        /* Per-vCPU event channel upcalls are implemented. */

... are implemented and work correctly with PIRQs routed over event
channels.

> +        res->a |= XEN_HVM_CPUID_UPCALL_VECTOR;
> +
>          break;
>  
>      case 5: /* PV-specific parameters */
> diff --git a/xen/include/public/arch-x86/cpuid.h 
> b/xen/include/public/arch-x86/cpuid.h
> index f2b2b3632c..1760e2c405 100644
> --- a/xen/include/public/arch-x86/cpuid.h
> +++ b/xen/include/public/arch-x86/cpuid.h
> @@ -109,6 +109,8 @@
>   * field from 8 to 15 bits, allowing to target APIC IDs up 32768.
>   */
>  #define XEN_HVM_CPUID_EXT_DEST_ID      (1u << 5)
> +/* Per-vCPU event channel upcalls. */

I would maybe expand the message to:

"Per-vCPU event channel upcalls work correctly with physical IRQs bound
to event channels."
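
For context, after this patch the leaf-4 EAX handling in
cpuid_hypervisor_leaves() advertises the bit unconditionally.  A
hypothetical model of that construction (leaf4_eax() is an invented
name, and both bit values are assumptions following the existing
defines in the public header):

```c
#include <stdint.h>

/* Assumed bit values; see xen/include/public/arch-x86/cpuid.h. */
#define XEN_HVM_CPUID_DOMID_PRESENT  (1u << 4)
#define XEN_HVM_CPUID_UPCALL_VECTOR  (1u << 6)

/* Model of the leaf-4 EAX bits: the domid flag is conditional, while
 * the per-vCPU upcall feature is always advertised. */
static uint32_t leaf4_eax(uint32_t base, int domid_present)
{
    uint32_t a = base;

    if ( domid_present )
        a |= XEN_HVM_CPUID_DOMID_PRESENT;

    a |= XEN_HVM_CPUID_UPCALL_VECTOR;

    return a;
}
```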

Thanks, Roger.



 

