
Re: [Xen-devel] [PATCH] public/hvm: Export the HVM_PARAM_CALLBACK_VIA ABI in the API



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 24 November 2015 10:29
> To: Paul Durrant; Jan Beulich
> Cc: Keir (Xen.org); Ian Campbell; Tim (Xen.org); Ian Jackson; Xen-devel;
> Stefano Stabellini; Shannon Zhao
> Subject: Re: [Xen-devel] [PATCH] public/hvm: Export the
> HVM_PARAM_CALLBACK_VIA ABI in the API
> 
> On 24/11/15 09:56, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> >> bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> >> Sent: 24 November 2015 07:33
> >> To: Andrew Cooper
> >> Cc: Keir (Xen.org); Ian Campbell; Tim (Xen.org); Ian Jackson; Xen-devel;
> >> Stefano Stabellini; Shannon Zhao
> >> Subject: Re: [Xen-devel] [PATCH] public/hvm: Export the
> >> HVM_PARAM_CALLBACK_VIA ABI in the API
> >>
> >>>>> On 20.11.15 at 19:20, <andrew.cooper3@xxxxxxxxxx> wrote:
> >>> --- a/xen/include/public/hvm/params.h
> >>> +++ b/xen/include/public/hvm/params.h
> >>> @@ -29,18 +29,21 @@
> >>>   * Parameter space for HVMOP_{set,get}_param.
> >>>   */
> >>>
> >>> +#define HVM_PARAM_CALLBACK_IRQ 0
> >>> +
> >>>  /*
> >>>   * How should CPU0 event-channel notifications be delivered?
> >>> - * val[63:56] == 0: val[55:0] is a delivery GSI (Global System Interrupt).
> >>> - * val[63:56] == 1: val[55:0] is a delivery PCI INTx line, as follows:
> >>> - *                  Domain = val[47:32], Bus  = val[31:16],
> >>> - *                  DevFn  = val[15: 8], IntX = val[ 1: 0]
> >>> - * val[63:56] == 2: val[7:0] is a vector number, check for
> >>> - *                  XENFEAT_hvm_callback_vector to know if this delivery
> >>> - *                  method is available.
> >>> + *
> >>>   * If val == 0 then CPU0 event-channel notifications are not delivered.
> >>> + * If val != 0, val[63:56] encodes the type, as follows:
> >>>   */
> >>> -#define HVM_PARAM_CALLBACK_IRQ 0
> >>> +#define HVM_PARAM_CALLBACK_TYPE_GSI      0 /* val[55:0] is a delivery GSI */
> >>> +#define HVM_PARAM_CALLBACK_TYPE_PCI_INTX 1 /* val[55:0] is a delivery PCI INTx line:
> >>> +                                              Domain = val[47:32], Bus  = val[31:16],
> >>> +                                              DevFn  = val[15: 8], IntX = val[ 1: 0] */
> >>> +#define HVM_PARAM_CALLBACK_TYPE_VECTOR   2 /* val[7:0] is a vector number, check for
> >>> +                                              XENFEAT_hvm_callback_vector to know if
> >>> +                                              this delivery method is available. */
> >> Would you not think that while cleaning this up, doing some
> >> re-formatting would also help? I.e. Putting the comments on lines of
> >> their own instead of at the end (even more deeply indented than
> >> they were before)? Also the first two of these last three comments
> >> are missing full stops.
> >>
> >> Also considering
> >>
> >>  * Attempting to use GSI 0 will actually disable callbacks
> >>
> >> in your patch comments, I'd suggest extending the GSI comment to
> >> state that it has to be a non-zero one.
> >>
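[Editorial aside: for reference, a minimal sketch of how a guest might pack the callback-via values described in the hunk above. The helper names are invented for illustration; the bit positions are those documented in the params.h comment, and the packed value would then be written to HVM_PARAM_CALLBACK_IRQ via HVMOP_set_param.]

#include <stdint.h>

#define HVM_PARAM_CALLBACK_TYPE_GSI      0
#define HVM_PARAM_CALLBACK_TYPE_PCI_INTX 1
#define HVM_PARAM_CALLBACK_TYPE_VECTOR   2

/* val[63:56] = 0, val[55:0] = GSI.  GSI 0 disables callbacks, so a real
 * caller must pass a non-zero GSI. */
static inline uint64_t callback_via_gsi(uint32_t gsi)
{
    return ((uint64_t)HVM_PARAM_CALLBACK_TYPE_GSI << 56) | gsi;
}

/* val[63:56] = 1, Domain = val[47:32], Bus = val[31:16],
 * DevFn = val[15:8], IntX = val[1:0]. */
static inline uint64_t callback_via_pci_intx(uint16_t domain, uint8_t bus,
                                             uint8_t devfn, uint8_t intx)
{
    return ((uint64_t)HVM_PARAM_CALLBACK_TYPE_PCI_INTX << 56) |
           ((uint64_t)domain << 32) | ((uint64_t)bus << 16) |
           ((uint64_t)devfn << 8) | (uint64_t)(intx & 3);
}

/* val[63:56] = 2, val[7:0] = vector.  Only valid if the guest has seen
 * XENFEAT_hvm_callback_vector. */
static inline uint64_t callback_via_vector(uint8_t vector)
{
    return ((uint64_t)HVM_PARAM_CALLBACK_TYPE_VECTOR << 56) | vector;
}
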
> > Could we also perhaps deprecate this whole mechanism at this point? I
> > added HVMOP_set_evtchn_upcall_vector quite a while ago now.
> 
> January, so Xen 4.6 is the only release with this hvmop in it.
> 
> The issue with deprecating HVM_PARAM_CALLBACK_IRQ is that DMLite has no
> LAPIC, so can't use HVMOP_set_evtchn_upcall_vector.
> 
> Much as this ABI is horrible and nasty (and really should be
> deprecated), there are no alternatives which cover the common uses.
>

Yes, I realise the lack of LAPIC means that the HVMOP can't be used in that 
case, but I really really think that a new hypercall would be preferable to 
perpetuating the use of the callback_vector type with this param.

  Paul
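
[Editorial aside: for context on the alternative discussed above, a rough sketch of using HVMOP_set_evtchn_upcall_vector. The op number and struct are as they appear in xen/include/public/hvm/hvm_op.h; the hypercall wrapper name is an assumption standing in for whatever the guest's hypercall layer provides. Because the upcall is a per-vCPU LAPIC vector, it cannot serve a DMLite guest that has no LAPIC, which is the limitation Andrew points out.]

#include <stdint.h>

/* Per-vCPU upcall registration (cf. xen/include/public/hvm/hvm_op.h). */
#define HVMOP_set_evtchn_upcall_vector 23

struct xen_hvm_evtchn_upcall_vector {
    uint32_t vcpu;
    uint8_t  vector;
};

/* Assumed to exist in the guest's hypercall layer; name is illustrative. */
extern long hypervisor_hvm_op(unsigned int op, void *arg);

static int set_evtchn_upcall_vector(uint32_t vcpu, uint8_t vector)
{
    struct xen_hvm_evtchn_upcall_vector arg = {
        .vcpu   = vcpu,
        .vector = vector,
    };

    /* The vector is delivered through the local APIC, hence the
     * dependency on the guest actually having one. */
    return hypervisor_hvm_op(HVMOP_set_evtchn_upcall_vector, &arg);
}
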

 
> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

