
[Xen-devel] VM memory allocation speed with cs 26056



Hi Keir/Jan,

Recently I got a chance to access a big machine (2TB memory / 160 CPUs) and tested
your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56

Attached is the result.

Test environment old:

      # xm info
      host                   : ovs-3f-9e-04
      release                : 2.6.39-300.17.1.el5uek
      version                : #1 SMP Fri Oct 19 11:30:08 PDT 2012
      machine                : x86_64
      nr_cpus                : 160
      nr_nodes               : 8
      cores_per_socket       : 10
      threads_per_core       : 2
      cpu_mhz                : 2394
      hw_caps                : bfebfbff:2c100800:00000000:00003f40:02bee3ff:00000000:00000001:00000000
      virt_caps              : hvm hvm_directio
      total_memory           : 2097142
      free_memory            : 2040108
      free_cpus              : 0
      xen_major              : 4
      xen_minor              : 1
      xen_extra              : .3OVM
      xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
      xen_scheduler          : credit
      xen_pagesize           : 4096
      platform_params        : virt_start=0xffff800000000000
      xen_changeset          : unavailable
      xen_commandline        : dom0_mem=31390M no-bootscrub
      cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
      cc_compile_by          : mockbuild
      cc_compile_domain      : us.oracle.com
      cc_compile_date        : Fri Oct 19 21:34:08 PDT 2012
      xend_config_format     : 4

      # uname -a
      Linux ovs-3f-9e-04 2.6.39-300.17.1.el5uek #1 SMP Fri Oct 19 11:30:08 PDT 2012 x86_64 x86_64 x86_64 GNU/Linux

      # cat /boot/grub/grub.conf
      ...
      kernel /xen.gz dom0_mem=31390M no-bootscrub dom0_vcpus_pin dom0_max_vcpus=32

Test environment new: old env + cs 26056

Test script: test-vm-memory-allocation.sh (attached)
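
For reference, a minimal sketch of the kind of timing loop the test performs. This is
not the attached script; the config file names (pv.cfg, hvm.cfg), the guest name
(memtest) and the memory-size list are placeholders to adapt to your own guest configs:

      #!/bin/bash
      # Sketch only -- not the attached test-vm-memory-allocation.sh.
      # Assumes two guest config files, pv.cfg and hvm.cfg, each with a
      # "name = ..." line set to memtest and a "memory = ..." line that
      # this loop rewrites before every run.

      GUEST=memtest                                   # assumed guest name in both configs
      SIZES_MB="4096 8192 16384 32768 65536 131072"   # 4G .. 128G

      for cfg in pv.cfg hvm.cfg; do
          for mem in $SIZES_MB; do
              # Set the guest memory size for this run.
              sed -i "s/^memory *=.*/memory = $mem/" "$cfg"

              # Time domain creation.
              t0=$(date +%s.%N)
              xm create "$cfg"
              t1=$(date +%s.%N)
              echo "$cfg,create,${mem}M,$(echo "$t1 - $t0" | bc)"

              # Time domain destruction.
              t0=$(date +%s.%N)
              xm destroy "$GUEST"
              t1=$(date +%s.%N)
              echo "$cfg,destroy,${mem}M,$(echo "$t1 - $t0" | bc)"
          done
      done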

My conclusion from the test:

  - HVM create time is greatly reduced.
  - PVM create time increases dramatically for 4G, 8G, 16G, 32G, 64G and 128G guests.
  - HVM/PVM destroy time is not affected.
  - If most of our customers are using PVM, I think this patch is bad, because most
VMs are configured with less than 128G of memory.
  - If they are using HVM, then this patch is great.

Questions for discussion:

  - Did you get the same result?
  - It seems this result is not ideal for PVM; we may need to improve it.

Please note: I may not have access to the same machine for a while.

Thanks,

Zhigang

Attachment: result.pdf
Description: Adobe PDF document

Attachment: result.csv
Description: Text Data

Attachment: test-vm-memory-allocation.sh
Description: application/shellscript

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

