
Re: [Xen-devel] Root cause of the issue that HVM guest boots slowly with pvops dom0



Keir Fraser wrote:
On 21/01/2010 09:27, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

A pre-mlock()ed memory page for small (sub-page) hypercalls? Protected with
a semaphore: failure to acquire semaphore means take slow path. Have all
hypercallers in libxc launder their data buffers through a new interface
that tries to grab and copy into the pre-allocated buffer.
I'll sort out a trial patch for this myself.

How does the attached patch work for you? It ought to get you the same
speedup as your hack.

The speed should be almost the same, even with the two extra memcpy()s.
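
For reference, the bounce-buffer scheme costs one copy in and one copy out per hypercall, roughly as in the sketch below (illustrative only; hcall_buf_prep/hcall_buf_release and the field names are my assumptions, not the actual patch):

/*
 * Rough sketch of the pre-locked bounce-buffer idea (illustrative names,
 * not the actual patch).  Each thread keeps one page that is mlock()ed
 * once; small hypercall arguments are copied into it before the hypercall
 * and copied back out afterwards, so no lock_pages()/unlock_pages() pair
 * is needed per call.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096

/* Created once with pthread_key_create() during library initialisation. */
static pthread_key_t hcall_buf_pkey;

struct hcall_buf {
    void *buf;      /* one pre-locked page, allocated lazily */
    void *oldbuf;   /* caller's buffer, restored on release  */
};

/* First memcpy: copy the caller's data into the pre-locked page. */
static void *hcall_buf_prep(void *ptr, size_t len)
{
    struct hcall_buf *hb = pthread_getspecific(hcall_buf_pkey);

    if ( hb == NULL )
    {
        if ( (hb = calloc(1, sizeof(*hb))) == NULL )
            return ptr;                      /* fall back to slow path */
        pthread_setspecific(hcall_buf_pkey, hb);
    }

    if ( hb->buf == NULL )
    {
        void *page = NULL;
        if ( posix_memalign(&page, PAGE_SIZE, PAGE_SIZE) != 0 )
            return ptr;                      /* fall back to slow path */
        if ( mlock(page, PAGE_SIZE) != 0 )   /* lock once, reuse forever */
        {
            free(page);
            return ptr;                      /* fall back to slow path */
        }
        hb->buf = page;
    }

    if ( len < PAGE_SIZE && hb->oldbuf == NULL )
    {
        memcpy(hb->buf, ptr, len);           /* copy in */
        hb->oldbuf = ptr;
        return hb->buf;
    }

    return ptr;            /* buffer busy or data too big: slow path */
}

/* Second memcpy: copy the (possibly updated) data back to the caller. */
static void hcall_buf_release(void **ptr, size_t len)
{
    struct hcall_buf *hb = pthread_getspecific(hcall_buf_pkey);

    if ( hb && hb->buf && *ptr == hb->buf )
    {
        memcpy(hb->oldbuf, hb->buf, len);    /* copy out */
        *ptr = hb->oldbuf;
        hb->oldbuf = NULL;
    }
}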

Some comments on your trial patch:
1. The pre-allocated page itself also needs to be locked, e.g.:
diff -r 6b61ef936e69 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c  Fri Jan 22 14:50:30 2010 +0800
+++ b/tools/libxc/xc_private.c  Fri Jan 22 15:32:48 2010 +0800
@@ -188,7 +188,10 @@
          ((hcall_buf = calloc(1, sizeof(*hcall_buf))) != NULL) )
         pthread_setspecific(hcall_buf_pkey, hcall_buf);
     if ( hcall_buf->buf == NULL )
+    {
         hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
+        lock_pages(hcall_buf->buf, PAGE_SIZE);
+    }

     if ( (len < PAGE_SIZE) && hcall_buf && hcall_buf->buf &&
          !hcall_buf->oldbuf )


2. _xc_clean_hcall_buf needs a more careful NULL pointer check (see the sketch after point 3).

3. It only modifies 5 of the 73 hypercall sites that invoke mlock. Could the remaining ones turn out to be bottlenecks later? :)
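
For point 2, something along these lines might do; again only a sketch, assuming the struct layout from the sketch above and that unlock_pages() is the counterpart of the lock_pages() call added in point 1:

/* Hypothetical NULL-safe cleanup for the per-thread hypercall buffer. */
static void _xc_clean_hcall_buf(void *m)
{
    struct hcall_buf *hcall_buf = m;

    if ( hcall_buf == NULL )
        return;

    if ( hcall_buf->buf != NULL )
    {
        unlock_pages(hcall_buf->buf, PAGE_SIZE); /* balance the alloc-time lock_pages() */
        free(hcall_buf->buf);
    }

    free(hcall_buf);
}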

Thanks,
xiaowei

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

