
Re: [Xen-devel] [PATCH 0/9] Per vcpu vm_event channels



On Fri, 2019-05-31 at 17:25 -0700, Andrew Cooper wrote:
> On 30/05/2019 07:18, Petre Pircalabu wrote:
> > This patchset adds a new mechanism of sending synchronous vm_event
> > requests and handling vm_event responses without using a ring.
> > As each synchronous request pauses the vcpu until the corresponding
> > response is handled, it can be stored in a slotted memory buffer
> > (one per vcpu) shared between the hypervisor and the controlling
> > domain.
> > 
> > The main advantages of this approach are:
> > - the ability to dynamically allocate the necessary memory used to
> > hold the requests/responses (the size of
> > vm_event_request_t/vm_event_response_t can grow unrestricted by the
> > ring's one-page limitation)
> > - the ring's waitqueue logic is unnecessary in this case because the
> > vcpu sending the request is blocked until a response is received.
> > 
> 
> Before I review patches 7-9 for more than stylistic things, can you
> briefly describe what's next?
> 
> AFAICT, this introduces a second interface between Xen and the agent,
> which is limited to synchronous events only, and exclusively uses a
> slotted system per vcpu, with a per-vcpu event channel?

Using a distinct interface was proposed by George in order to allow the
existing vm_event clients to run unmodified.
> 
> What (if any) are the future development plans, and what are the
> plans
> for deprecating the use of the old interface?  (The answers to these
> will affect my review of the new interface).
> 
> ~Andrew
> 
At the moment we're only using sync vm_events, so the "one slot per
vcpu" approach suits us. Also, by dynamically allocating the vm_event
requests/responses, we can increase their size without suffering the
performance drop incurred when using the ring (+waitqueue).
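To make the "one slot per vcpu" idea concrete, here is a rough sketch of
the shared slotted layout, not the actual patch code: all names, the slot
payload size, and the state values are illustrative assumptions. The key
property is that because the vcpu pauses after submitting a synchronous
request, its slot is always free when the next request is made, so no
ring or waitqueue is needed.

```c
/* Illustrative sketch only; names/sizes are assumptions, not patch code. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NR_VCPUS 4

enum slot_state { SLOT_IDLE, SLOT_REQUEST_PENDING, SLOT_RESPONSE_READY };

struct vm_event_slot {
    uint32_t state;     /* enum slot_state */
    uint8_t data[256];  /* request or response payload (size illustrative) */
};

/* Shared buffer: one slot per vcpu, mapped by both Xen and the agent. */
static struct vm_event_slot slots[NR_VCPUS];

/* Hypervisor side: the vcpu writes its request into its own slot. */
static void submit_request(unsigned int vcpu, const void *req, size_t len)
{
    struct vm_event_slot *s = &slots[vcpu];
    assert(s->state == SLOT_IDLE);  /* vcpu was paused, slot must be free */
    memcpy(s->data, req, len);
    s->state = SLOT_REQUEST_PENDING;
    /* real code would notify the agent via the per-vcpu event channel
     * and pause the vcpu here */
}

/* Agent side: consume the request, write the response in place. */
static void handle_request(unsigned int vcpu, const void *rsp, size_t len)
{
    struct vm_event_slot *s = &slots[vcpu];
    assert(s->state == SLOT_REQUEST_PENDING);
    memcpy(s->data, rsp, len);
    s->state = SLOT_RESPONSE_READY;
    /* real code would signal Xen, which then unpauses the vcpu */
}
```

Since requests and responses never coexist for a given vcpu, one buffer
per slot can serve both directions, and growing the payload only means
resizing the shared mapping rather than reworking ring indexing.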
We don't yet have a schedule for deprecating the legacy (ring-based)
interface, but we will adapt the new interface based on the feedback we
receive from other vm_event users.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

