
Re: [Xen-devel] [PATCH] x86/mm: Improve ring management for memory events. Do not lose guest events



At 10:39 -0500 on 16 Jan (1326710344), Andres Lagar-Cavilla wrote:
> This patch is an amalgamation of the work done by Olaf Hering
> <olaf@xxxxxxxxx> and our own.
> 
> It combines logic changes to simplify the memory event API, as well as
> leveraging wait queues to deal with extreme conditions in which too many
> events are generated by a guest vcpu.
> 
> In order to generate a new event, a slot in the ring is claimed. If a
> guest vcpu is generating the event and there is no space, it is put on a
> wait queue. If a foreign vcpu is generating the event and there is no
> space, the vcpu is expected to retry its operation. If an error happens
> later, the function returns the claimed slot via a cancel operation.
> 
> Thus, the API has only four calls: claim slot, cancel claimed slot, put
> request in the ring, consume the response.
> 
> With all these mechanisms, no guest events are lost.
> Our testing includes 1. ballooning down 512 MiB; 2. using mem access
> n2rwx, in which every page access in a four-vCPU guest results in an
> event, with no vCPU pausing, and the four vCPUs touching all RAM. No
> guest events were lost in either case, and qemu-dm had no mapping
> problems.

Applied, thanks.  I made two changes, both suggested by Olaf: 
 - moved the lock init up in mem_event_enable(); and
 - reverted p2m_mem_paging_populate() to return void, as none of the 
   callers had been changed to care about its new return value. 

In general, the callers of p2m_mem_paging_populate() are still a bit of
a mess; that should all be happening behind the p2m interface.  But
that's for another time...

Cheers,

Tim.
