
[Xen-devel] [PATCH] x86/vmx: enable PML by default



Since the PML series was merged (but disabled by default) we have run extensive
PML tests (live migration, GUI display) and PML has been working fine, so turn
it on by default.

Signed-off-by: Kai Huang <kai.huang@xxxxxxxxxxxxxxx>
Tested-by: Robert Hu <robert.hu@xxxxxxxxx>
Tested-by: Xudong Hao <xudong.hao@xxxxxxxxx>
---

In case you want specific performance data before being convinced to turn PML
on by default, below is the specjbb performance data (measured in a guest that
was in log-dirty mode) that I gathered when I originally posted the PML patch
series to the xen-devel mailing list for review.

======================== specjbb performance ===========================

I measured specjbb performance in the guest while it was in video RAM tracking
mode (the most common case, I think), and while it was in global log-dirty mode
(I modified the xl tool to keep the guest in global log-dirty mode
indefinitely). The numbers show that PML does improve specjbb performance while
the guest is in log-dirty mode, and that the more frequently dirty pages are
queried, the larger the performance gain. So while PML probably cannot speed up
the live migration process directly, it is beneficial for use cases such as
monitoring how quickly a guest dirties its memory.

- video ram tracking:

    WP              PML         
    122805          123887
    120792          123249
    118577          123348
    121856          125195
    121286          122056
    120139          123037

avg 120909          123462      
    
    100%            102.11%    

performance gain:   2.11%                 

- global log-dirty:

    WP              PML
    72862           79511
    73466           81173
    72989           81177
    73138           81777
    72811           80257
    72486           80413

avg 72959           80718
    100%            110.63%

performance gain: 10.63%
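(For anyone who wants to double-check the averages and gains above, they can be
recomputed directly from the raw scores; this is just arithmetic on the numbers
already listed, not new data:)

```python
# Recompute the per-mode averages and PML performance gain from the
# raw specjbb scores listed above.
wp_vram   = [122805, 120792, 118577, 121856, 121286, 120139]
pml_vram  = [123887, 123249, 123348, 125195, 122056, 123037]
wp_dirty  = [72862, 73466, 72989, 73138, 72811, 72486]
pml_dirty = [79511, 81173, 81177, 81777, 80257, 80413]

def gain(wp, pml):
    """Return (avg without PML, avg with PML, gain in percent)."""
    avg_wp  = sum(wp)  / len(wp)
    avg_pml = sum(pml) / len(pml)
    return avg_wp, avg_pml, (avg_pml / avg_wp - 1) * 100

print(gain(wp_vram, pml_vram))    # ~2.11% gain (video ram tracking)
print(gain(wp_dirty, pml_dirty))  # ~10.63% gain (global log-dirty)
```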

Test machine: Broadwell server with 16 CPUs (1.6GHz) + 4GB memory.
Xen hypervisor: latest upstream Xen
dom0 kernel: 3.16.0
guest: 4 vcpus + 1GB memory.
guest os: ubuntu 14.04 with 3.13.0-24-generic kernel.
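(Note for anyone who wants to opt out once this is in: if I recall the command
line handling correctly, PML can still be disabled at boot via the `ept=`
sub-option that the PML series added; please double-check the exact spelling
against docs/misc/xen-command-line.markdown for your tree. A sketch:)

```shell
# Hypothetical /etc/default/grub fragment to turn PML back off after
# this patch makes it default-on; assumes the ept=no-pml sub-option
# documented with the PML series.
GRUB_CMDLINE_XEN_DEFAULT="ept=no-pml"
```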

---
 xen/arch/x86/hvm/vmx/vmcs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 7a7896e..dbf284d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -64,7 +64,7 @@ integer_param("ple_gap", ple_gap);
 static unsigned int __read_mostly ple_window = 4096;
 integer_param("ple_window", ple_window);
 
-static bool_t __read_mostly opt_pml_enabled = 0;
+static bool_t __read_mostly opt_pml_enabled = 1;
 static s8 __read_mostly opt_ept_ad = -1;
 
 /*
-- 
2.5.0


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

