WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-devel] Daily Xen Builds

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Daily Xen Builds
From: David F Barrera <dfbp@xxxxxxxxxx>
Date: Wed, 12 Jul 2006 13:02:56 -0500
Delivery-date: Wed, 12 Jul 2006 11:05:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.4 (Windows/20060516)

Daily Xen builds for July 12, 2006, using xen-unstable changeset:

changeset:   10650:a1c2cede77c7
tag:         tip
user:        kfraser@xxxxxxxxxxxxxxxxxxxxx
date:        Mon Jul 10 09:01:49 2006 +0100
summary:     [HVM] Fix "Many lost ticks" warning in ia32e guest

x86_32 (SLES 9 on IBM xSeries 235, 335 and IBM HS20 Blades 8843 41U)

* Builds and boots without problems
* Ran xm-test on all boxes

ISSUES: none

x86_64 (SLES 9 IBM HS20 Blades 8843 41U)

* Builds and boots without problems
* Ran xm-test on all boxes

ISSUES: none

XM-TEST Results:
            Platform | PASS | FAIL | XPASS | XFAIL |
---------------------+------+------+-------+-------+
   hs20.sles9-x86_64 |  106 |    6 |     0 |     3 |
           x235sles9 |  106 |    5 |     0 |     3 |
           x335sles9 |  107 |    5 |     0 |     3 |


--

Regards,

David F Barrera
Linux Technology Center
Systems and Technology Group, IBM

"The wisest men follow their own direction. "
        
                         Euripides

Xm-test execution summary (hs20.sles9-x86_64):
  PASS:  106
  FAIL:  6
  XPASS: 0
  XFAIL: 3


Details:

 FAIL: 02_info_compiledata_pos 
        Unknown reason

XFAIL: 02_network_local_ping_pos 
         ping loopback failed for size 65507. ping eth0 failed for size 65507.

XFAIL: 05_network_dom0_ping_pos 
         Ping to dom0 failed for size 65507.

XFAIL: 11_network_domU_ping_pos 
         Ping failed for size 1 48 64 512 1440 1500 1505 4096 4192 32767 65507.

 FAIL: 12_network_domU_tcp_pos 
         TCP hping2 failed for size 16384 24567 32767 65495.

 FAIL: 13_network_domU_udp_pos 
         UDP hping2 failed for size 32767 65495.

Xm-test execution summary (x235sles9):
  PASS:  106
  FAIL:  5
  XPASS: 0
  XFAIL: 3


Details:

XFAIL: 02_network_local_ping_pos 
         ping loopback failed for size 65507. ping eth0 failed for size 65507.

XFAIL: 05_network_dom0_ping_pos 
         Ping to dom0 failed for size 512 32767 65507.

XFAIL: 11_network_domU_ping_pos 
         Ping failed for size 1 48 64 512 1440 1500 1505 4096 4192 32767 65507.

 FAIL: 12_network_domU_tcp_pos 
         TCP hping2 failed for size 1 48 64 1500 1505 16384 24567 32767 65495.

 FAIL: 13_network_domU_udp_pos 
         UDP hping2 failed for size 32767 65495.

Xm-test execution summary (x335sles9):
  PASS:  107
  FAIL:  5
  XPASS: 0
  XFAIL: 3


Details:

XFAIL: 02_network_local_ping_pos 
         ping loopback failed for size 65507. ping eth0 failed for size 65507.

XFAIL: 05_network_dom0_ping_pos 
         Ping to dom0 failed for size 65507.

XFAIL: 11_network_domU_ping_pos 
         Ping failed for size 1 48 64 512 1440 1500 1505 4096 4192 32767 65507.

 FAIL: 12_network_domU_tcp_pos 
         TCP hping2 failed for size 16384 24567 32767 65495.

 FAIL: 13_network_domU_udp_pos 
         UDP hping2 failed for size 32767 65495.
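A note on the recurring probe sizes: the largest failing sizes in these reports, 65507 for ping and 65495 for the hping2 tests, are not arbitrary. They follow from the 65535-byte IPv4 total-length limit minus the relevant headers; the sketch below (an illustrative calculation, not part of xm-test itself) shows the arithmetic. The UDP test apparently reuses the same size list as the TCP test, which is why it also tops out at 65495.

```python
# Maximum payload sizes implied by the IPv4 16-bit total-length field.
# These constants are standard header sizes, assumed here with no IP options.

IP_MAX = 65535      # largest IPv4 packet (16-bit total-length field)
IP_HDR = 20         # minimal IPv4 header
ICMP_HDR = 8        # ICMP echo header (used by ping)
TCP_HDR = 20        # minimal TCP header (used by hping2 in TCP mode)

max_ping_payload = IP_MAX - IP_HDR - ICMP_HDR   # 65507, the ping test's top size
max_tcp_payload = IP_MAX - IP_HDR - TCP_HDR     # 65495, the hping2 tests' top size

print(max_ping_payload, max_tcp_payload)
```

Failures at only these maximum sizes (as in the XFAIL ping cases above) therefore point at fragmentation/MTU handling rather than basic connectivity.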

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel