[Xen-devel] [PATCH 01 of 12] xenpaging: correct dropping of pages to avoid full ring buffer
# HG changeset patch
# User Olaf Hering <olaf@xxxxxxxxx>
# Date 1307437209 -7200
# Node ID 6b8446bf4e5fbfa93169ec2509364c0fde74beca
# Parent  c231a26a29327aa3c737170e04c738289be2d309
xenpaging: correct dropping of pages to avoid full ring buffer

Doing a one-way channel from Xen to xenpaging is not possible with the
current ring buffer implementation. xenpaging uses the mem_event ring
buffer, which expects request/response pairs to make progress. The
previous patch, which tried to establish one-way communication from Xen
to xenpaging, stalled the guest once the buffer was filled up with
requests.

Correct page-dropping by taking the slow path and letting
p2m_mem_paging_resume() consume the response from xenpaging. This makes
room for yet another request/response pair and avoids hanging guests.

Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>

diff -r c231a26a2932 -r 6b8446bf4e5f tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c	Mon Jun 06 09:56:08 2011 +0100
+++ b/tools/xenpaging/xenpaging.c	Tue Jun 07 11:00:09 2011 +0200
@@ -653,19 +653,19 @@ int main(int argc, char *argv[])
                     ERROR("Error populating page");
                     goto out;
                 }
+            }

-                /* Prepare the response */
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
-                rsp.vcpu_id = req.vcpu_id;
-                rsp.flags = req.flags;
+            /* Prepare the response */
+            rsp.gfn = req.gfn;
+            rsp.p2mt = req.p2mt;
+            rsp.vcpu_id = req.vcpu_id;
+            rsp.flags = req.flags;

-                rc = xenpaging_resume_page(paging, &rsp, 1);
-                if ( rc != 0 )
-                {
-                    ERROR("Error resuming page");
-                    goto out;
-                }
+            rc = xenpaging_resume_page(paging, &rsp, 1);
+            if ( rc != 0 )
+            {
+                ERROR("Error resuming page");
+                goto out;
             }

             /* Evict a new page to replace the one we just paged in */
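To illustrate the point made in the commit message, below is a minimal,
self-contained sketch of why a one-way channel on a request/response ring
stalls. All names, sizes and structures in it are hypothetical and invented
for illustration only; this is not the real mem_event interface. The idea is
that the producer can only post a new request into a slot that an earlier
response has freed, so a consumer that drops requests without ever posting a
response eventually leaves the producer with no free slot.

/*
 * Simplified model of a fixed-size request/response ring, loosely in the
 * spirit of the mem_event ring used by xenpaging.  All names and sizes
 * here are hypothetical, for illustration only.
 */
#include <stdio.h>

#define RING_SIZE 4   /* hypothetical tiny size for demonstration */

struct model_ring {
    unsigned int req_prod;   /* requests posted by the producer  */
    unsigned int rsp_prod;   /* responses posted by the consumer */
};

/* Producer side: a slot is free only if an old request got a response. */
static int ring_has_free_slot(const struct model_ring *r)
{
    return (r->req_prod - r->rsp_prod) < RING_SIZE;
}

static int post_request(struct model_ring *r, unsigned int gfn)
{
    if ( !ring_has_free_slot(r) )
        return -1;                    /* ring full: producer must wait */
    printf("request  #%u (gfn %#x)\n", r->req_prod, gfn);
    r->req_prod++;
    return 0;
}

/* Consumer side: posting a response is what frees the slot again. */
static void post_response(struct model_ring *r)
{
    printf("response #%u\n", r->rsp_prod);
    r->rsp_prod++;
}

int main(void)
{
    struct model_ring ring = { 0, 0 };
    unsigned int gfn;

    /* One-way use: requests only.  The ring fills after RING_SIZE entries. */
    for ( gfn = 0; gfn < 6; gfn++ )
        if ( post_request(&ring, gfn) < 0 )
            printf("ring full at gfn %#x, producer stalls\n", gfn);

    /* Paired use: every consumed request gets a response, so there is
     * always room for the next request. */
    struct model_ring ring2 = { 0, 0 };
    for ( gfn = 0; gfn < 6; gfn++ )
    {
        post_request(&ring2, gfn);
        post_response(&ring2);
    }
    return 0;
}

Compiled and run, the first loop reports the ring filling up after RING_SIZE
unanswered requests, while the second keeps making progress; the second case
corresponds to what the patch restores by always going through
xenpaging_resume_page() instead of silently dropping the request.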