WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Test results on Unisys ES7000 48x 160gb using xen-unstable c/s 15730 - 4 old issues, 1 new issue
From: "Krysan, Susan" <KRYSANS@xxxxxxxxxx>
Date: Fri, 24 Aug 2007 17:27:31 -0500
Delivery-date: Fri, 24 Aug 2007 15:28:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <EF8D308BE33AF54D8934DF26520252D3069BD5BD@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <EF8D308BE33AF54D8934DF26520252D3069BD48F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <FE7BBCFBB500984A9A7922EBC95F516E27D344@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <EF8D308BE33AF54D8934DF26520252D3069BD4F8@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <EF8D308BE33AF54D8934DF26520252D3069BD54A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <EF8D308BE33AF54D8934DF26520252D3069BD57A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <EF8D308BE33AF54D8934DF26520252D3069BD5BD@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdJ4UYHnZd8QwLHTMSjo0sqxNkrmgE+CxDgAZa+3qABVnT68AFk5dvwAufQgQABArjgkAIyIiNQA7uSpdABgtM2EAIyrZhAAA2mOPADEjFQkAeOTDJwAbZ/6+AETqFeYAMnFGQQ
Thread-topic: Test results on Unisys ES7000 48x 160gb using xen-unstable c/s 15730 - 4 old issues, 1 new issue

Host:  Unisys ES7000/one, x86_64, 48 processors, 160 GB RAM

 

xen-unstable changeset 15730 compiled with max_phys_cpus=64 and booted with dom0_mem=512M numa=on
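For reference, a GRUB boot entry matching this configuration might look like the sketch below. The paths, version strings, and root device are hypothetical; only max_phys_cpus=64 (a compile-time option), dom0_mem=512M, and numa=on come from the report above.

```
title Xen unstable c/s 15730
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=512M numa=on
    module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro
    module /boot/initrd-2.6-xen
```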

 

NEW ISSUE:

  • Bug #1049 - When booting the host with more than 166 GB RAM, the host only sees a maximum of 166 GB

 

OLD ISSUES:

  • Bug #1050 - after c/s 15203, networking does not work unless a default gateway is specified
  • Bug #1051 - after c/s 15203, all xm-test tests fail with XmTestLib.NetConfig.NetworkError: Failed to show vif0.0 aliases: 65280; the workaround is to comment out lines in cleanDom0Aliases in xm-test/lib/XmTestLib/NetConfig.py until the xm-test code is updated to accommodate the new bridge naming scheme
  • Bug #940 - Must specify hpet=disable kernel parameter to get 32-bit SLES10 domVTs to boot; narrowed down to c/s 14436 but the patch originator cannot recreate on his hardware
  • Bug #1037 - Shutdown of large domains takes a long time, during which time dom0 is not interruptible (due to the synchronous tearing down of the memory map)

 

Testing includes running xm-test and also attempting to boot and run programs in the following domUs and domVTs (domains 3 through 9 are run simultaneously):

 

1.  32-processor 64-bit SLES10 domU with 156 GB memory - run kernbench optimal load

2.  32-processor 64-bit SLES10 domVT with 150 GB memory - run kernbench optimal load

3.  4-processor 64-bit SLES10 domU with 16 GB memory - run kernbench optimal load

4.  4-processor 32-bit SLES10 domVT with 2 GB memory (booted with hpet=disable) - run kernbench optimal load

5.  4-processor 32-bit PAE SLES10 domVT with 16 GB memory (booted with hpet=disable) - run kernbench optimal load

6.  4-processor 64-bit SLES10 domVT with 16 GB memory - run kernbench optimal load

7.  1-processor Windows XP domVT with 4 GB memory - run 100% cpu intensive program

8.  1-processor Windows 2003 Server domVT with 4 GB memory - run 100% cpu intensive program

9.  8-processor Windows 2003 Enterprise Edition domVT with 16 GB memory - run 100% cpu intensive program

 

Results:

 

All domains ran successfully.

 

Ran xm-test (DomU) on dom0 with the following results:

 

(xm-test could only be run after commenting out the call to cleanDom0Aliases in xm-test/lib/XmTestLib/NetConfig.py; most of the failures are related to not finding device vif0.0. This was not necessary before c/s 15203.)
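The workaround above can be sketched as follows. This is a hypothetical illustration, not the actual xm-test source: the NetworkError class and cleanDom0Aliases function mimic XmTestLib.NetConfig, and the skip flag is invented here as a tidier alternative to commenting the call out.

```python
# Hypothetical sketch of the workaround described above: guard the dom0
# alias cleanup so it can be skipped on hosts where the bridge naming
# changed after c/s 15203. Names mirror xm-test but are reimplemented
# here for illustration only.

SKIP_DOM0_ALIAS_CLEANUP = True  # assumption: a flag instead of commenting out code


class NetworkError(Exception):
    """Raised when a network setup/teardown step fails (as in xm-test)."""


def cleanDom0Aliases():
    # In xm-test this inspects device vif0.0; with the new bridge naming
    # scheme that device no longer exists, so the step fails as reported.
    raise NetworkError("Failed to show vif0.0 aliases: 65280")


def teardown():
    # The reported workaround is to skip cleanDom0Aliases entirely;
    # a guard flag achieves the same effect as commenting it out.
    if not SKIP_DOM0_ALIAS_CLEANUP:
        cleanDom0Aliases()
    return "ok"
```

With the flag set, teardown completes instead of raising NetworkError, matching the behavior the report describes after editing NetConfig.py.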

 

Xm-test timing summary:

  Run Started : Thu, 23 Aug 2007 16:13:35 -0400

  Run Stopped : Thu, 23 Aug 2007 16:48:29 -0400

Xm-test execution summary:

  PASS:  102

  FAIL:  10

  XPASS: 0

  XFAIL: 3

 

Details:

 

 FAIL: 13_create_multinic_pos

        Unknown reason

 

XFAIL: 02_network_local_ping_pos

        Unknown reason

 

 FAIL: 03_network_local_tcp_pos

        Unknown reason

 

 FAIL: 04_network_local_udp_pos

        Unknown reason

 

XFAIL: 05_network_dom0_ping_pos

        Unknown reason

 

 FAIL: 06_network_dom0_tcp_pos

        Unknown reason

 

 FAIL: 07_network_dom0_udp_pos

        Unknown reason

 

XFAIL: 11_network_domU_ping_pos

        Unknown reason

 

 FAIL: 12_network_domU_tcp_pos

        Unknown reason

 

 FAIL: 13_network_domU_udp_pos

        Unknown reason

 

Thanks,

Sue Krysan

Linux Systems Group

Unisys Corporation

 

 

 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel