WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Root cause of the issue that HVM guest boots slowly with pvops dom0
From: "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx>
Date: Fri, 22 Jan 2010 16:07:41 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 22 Jan 2010 00:09:13 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C77DE99C.6F98%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: pdsmsx601.ccr.corp.intel.com
References: <C77DE99C.6F98%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.23 (X11/20090817)
Keir Fraser wrote:
On 21/01/2010 09:27, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

A pre-mlock()ed memory page for small (sub-page) hypercalls? Protected with
a semaphore: failure to acquire semaphore means take slow path. Have all
hypercallers in libxc launder their data buffers through a new interface
that tries to grab and copy into the pre-allocated buffer.
I'll sort out a trial patch for this myself.

How does the attached patch work for you? It ought to get you the same
speedup as your hack.
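[For readers following along: the scheme Keir describes above -- a pre-mlock()ed page guarded by a lock, with trylock failure meaning "take the old slow path" -- could be sketched roughly as below. All names here (bounce_init, bounce_grab, bounce_release, BOUNCE_SIZE) are hypothetical and not the actual libxc interface; a pthread mutex stands in for the semaphore.]

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define BOUNCE_SIZE 4096

static pthread_mutex_t bounce_lock = PTHREAD_MUTEX_INITIALIZER;
static void *bounce_buf;

/* One-time setup: allocate a page-sized, page-aligned buffer and
 * mlock() it once, so later hypercalls avoid mlock/munlock churn. */
static int bounce_init(void)
{
    if ( posix_memalign(&bounce_buf, BOUNCE_SIZE, BOUNCE_SIZE) )
        return -1;
    return mlock(bounce_buf, BOUNCE_SIZE);
}

/*
 * Launder a small hypercall argument through the pre-locked buffer.
 * Returns the locked copy, or NULL if the buffer is busy or the data
 * is too large -- in which case the caller falls back to the slow
 * path (mlock() the original buffer, as before).
 */
static void *bounce_grab(const void *data, size_t len)
{
    if ( len > BOUNCE_SIZE )
        return NULL;
    if ( pthread_mutex_trylock(&bounce_lock) != 0 )
        return NULL;                 /* busy: take the slow path */
    memcpy(bounce_buf, data, len);   /* first memcpy: in */
    return bounce_buf;
}

/* Copy any results back out (second memcpy) and release the buffer. */
static void bounce_release(void *buf, void *orig, size_t len)
{
    if ( buf != bounce_buf )
        return;                      /* slow path owned its own buffer */
    memcpy(orig, buf, len);          /* second memcpy: out */
    pthread_mutex_unlock(&bounce_lock);
}
```

The two memcpy()s are the "twice memcpy" cost mentioned below; for sub-page buffers they are far cheaper than an mlock/munlock syscall pair per hypercall.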

The speed should be almost the same, despite the extra pair of memcpy()s.

Some comments on your trial patch:
1. The newly allocated buffer should also be locked:
diff -r 6b61ef936e69 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c  Fri Jan 22 14:50:30 2010 +0800
+++ b/tools/libxc/xc_private.c  Fri Jan 22 15:32:48 2010 +0800
@@ -188,7 +188,10 @@
          ((hcall_buf = calloc(1, sizeof(*hcall_buf))) != NULL) )
         pthread_setspecific(hcall_buf_pkey, hcall_buf);
     if ( hcall_buf->buf == NULL )
+    {
         hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
+        lock_pages(hcall_buf->buf, PAGE_SIZE);
+    }

     if ( (len < PAGE_SIZE) && hcall_buf && hcall_buf->buf &&
          !hcall_buf->oldbuf )


2. _xc_clean_hcall_buf needs a more careful NULL pointer check -- both the per-thread structure and its buffer may be NULL.
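[A defensive cleanup along the lines of comment 2 might look like this. The structure layout is inferred from the hunk in comment 1; the function and type names here are illustrative, not the actual libxc code.]

```c
#include <stdlib.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096

/* Hypothetical mirror of the per-thread state used in the patch. */
struct hcall_buf {
    void *buf;
    void *oldbuf;
};

/* Every pointer may legitimately be NULL here, e.g. if an earlier
 * allocation failed or the thread never issued a hypercall, so check
 * before touching anything. */
static void clean_hcall_buf_sketch(struct hcall_buf *hcall_buf)
{
    if ( hcall_buf == NULL )
        return;
    if ( hcall_buf->buf != NULL )
    {
        munlock(hcall_buf->buf, PAGE_SIZE);
        free(hcall_buf->buf);
    }
    free(hcall_buf);
}
```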

3. It modifies only 5 of the 73 hypercall sites that invoke mlock. Could one of the remaining hypercalls turn out to be the bottleneck later? :)

Thanks,
xiaowei

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel