
RE: [PATCH v2 04/26] xen: consolidate CONFIG_VM_EVENT


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: "Penny, Zheng" <penny.zheng@xxxxxxx>
  • Date: Thu, 11 Sep 2025 09:20:49 +0000
  • Cc: "Huang, Ray" <Ray.Huang@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 11 Sep 2025 09:21:02 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 04/26] xen: consolidate CONFIG_VM_EVENT


> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Wednesday, September 10, 2025 10:57 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>; Tamas K Lengyel
> <tamas@xxxxxxxxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>;
> Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>; Petre Pircalabu
> <ppircalabu@xxxxxxxxxxxxxxx>; Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v2 04/26] xen: consolidate CONFIG_VM_EVENT
>
> On 10.09.2025 09:38, Penny Zheng wrote:
>
> > --- a/xen/include/xen/vm_event.h
> > +++ b/xen/include/xen/vm_event.h
> > @@ -50,6 +50,7 @@ struct vm_event_domain
> >      unsigned int last_vcpu_wake_up;
> >  };
> >
> > +#ifdef CONFIG_VM_EVENT
> >  /* Returns whether a ring has been set up */
> >  bool vm_event_check_ring(struct vm_event_domain *ved);
> >
> > @@ -68,6 +69,20 @@ bool vm_event_check_ring(struct vm_event_domain *ved);
> >   */
> >  int __vm_event_claim_slot(struct domain *d, struct vm_event_domain *ved,
> >                            bool allow_sleep);
> > +#else
> > +static inline bool vm_event_check_ring(struct vm_event_domain *ved)
> > +{
> > +    return false;
> > +}
>
> Which call site is in need of this stub? I was first considering
> mem_paging_enabled(), but MEM_PAGING already now depends on VM_EVENT.
>

It is used in hvm.c to check whether the vm_event_share ring has been set
up. That raises the same question as the stub below: do we support the
configuration VM_EVENT=n together with MEM_SHARING=y? I'm not very
familiar with this area and may need help with it.
If that combination is not supported, I suggest making MEM_SHARING depend
on VM_EVENT; then most of the stubs below could be removed. A sketch of
the Kconfig change follows.
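As a minimal sketch, assuming MEM_SHARING keeps its current entry in
xen/arch/x86/Kconfig (the prompt text and surrounding options may differ):

    config MEM_SHARING
        bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
        depends on HVM && VM_EVENT

With such a dependency in place, VM_EVENT=n would force MEM_SHARING=n, so
the vm_event_check_ring() and __vm_event_claim_slot() stubs below should
then have no remaining callers and could be dropped.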

> > +static inline int __vm_event_claim_slot(struct domain *d,
> > +                                        struct vm_event_domain *ved,
> > +                                        bool allow_sleep)
> > +{
> > +    return -EOPNOTSUPP;
> > +}
>
> Sadly this looks to be needed when MEM_SHARING=y and VM_EVENT=n.
>
> > @@ -82,23 +97,28 @@ static inline int vm_event_claim_slot_nosleep(struct domain *d,
> >
> >  void vm_event_cancel_slot(struct domain *d, struct vm_event_domain *ved);
> >
> > +#ifdef CONFIG_VM_EVENT
> >  void vm_event_put_request(struct domain *d, struct vm_event_domain *ved,
> >                            vm_event_request_t *req);
> >
> > -#ifdef CONFIG_VM_EVENT
> >  /* Clean up on domain destruction */
> >  void vm_event_cleanup(struct domain *d);
> >  int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec);
> > +
> > +void vm_event_vcpu_pause(struct vcpu *v);
> >  #else /* !CONFIG_VM_EVENT */
> > +static inline void vm_event_put_request(struct domain *d,
> > +                                        struct vm_event_domain *ved,
> > +                                        vm_event_request_t *req) {}
>
> Same here and ...
>
> >  static inline void vm_event_cleanup(struct domain *d) {}
> >  static inline int vm_event_domctl(struct domain *d,
> >                                    struct xen_domctl_vm_event_op *vec)
> >  {
> >      return -EOPNOTSUPP;
> >  }
> > +static inline void vm_event_vcpu_pause(struct vcpu *v) {};
>
> ... here.
>
> >  #endif /* !CONFIG_VM_EVENT */
> >
> Jan

 

