[Xen-devel] RE: Biweekly VMX status report. Xen: #19739 & Xen0: #898
One more new P1 bug here:
3. Linux guest boots up very slowly with SDL rendering.
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1478
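For reference, the SDL case above is the mode selected in the guest config file. A minimal sketch of the relevant lines, assuming a standard xm HVM guest config (file name hypothetical):

```
# /etc/xen/linux-hvm.cfg (hypothetical name) -- excerpt
# Render the guest framebuffer in a local SDL window instead of
# exporting it over VNC.
sdl = 1
vnc = 0
```

With these set, the device model displays the guest locally via SDL, which is the path affected by this bug.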
Li, Haicheng wrote:
> Hi all,
>
> This is our test report for the Xen-unstable tree. 2 new P1 bugs were
> found, 3 old bugs were fixed, and 7 old bugs were demoted to P2/P3.
> Due to bug #1476, the test was conducted with raw guest images.
>
> New Bugs (2):
> =====================================================================
> 1. Cannot boot up a guest with a qcow image.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1476
> 2. With sdl=1 set, creating a guest with a VT-d device assigned fails.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1477
>
> Fixed Bugs (3):
> =====================================================================
> 1. Cannot create a qcow file with the qemu-img-xen command.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1452
> 2. Linux guest panics if it has duplicated BDFs assigned through VT-d.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1460
> 3. PCI configuration space header is corrupted after device
> pass-through.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1457
>
> Old P1 Bugs (4):
> =====================================================================
> 1. Stubdom-based guest hangs when hdc is assigned to it.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1373
> 2. [stubdom] The xm save command hangs while saving <Domain-dm>.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1377
> 3. [stubdom] Cannot restore a stubdom-based domain.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1378
> 4. Stubdom-based guest hangs at startup when using a qcow image.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1372
>
> P2 & P3 Bugs (7):
> =====================================================================
> 1. FC6/WinXP guest cannot reboot right after booting up.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1456
> 2. E1000e NIC fails in FC10 guest with MSI.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1458
> 3. T-state control fails.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1451
> 4. Two onboard 82576 NICs assigned to an HVM guest do not work stably
> when using INTx interrupts.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1459
> 5. [VT-d] failed to reassign some PCI-e NICs.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1379
> 6. Failed to install FC10.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1461
> 7. [ACPI] On some x86_64 platforms, `echo mem > /sys/power/state` does
> not trigger Dom0 S3 until Ctrl-C is used to terminate the command.
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1453
>
> Xen Info:
> ============================================================================
> xen-changeset: 19739:4448fae52553
> dom0-changeset: 898:ca12928cdafe
>
> ioemu git:
> commit c9c1a645fcfdba8c4a15a56e29d5ea7b7bcd7aa6
> Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Date: Wed Jun 3 15:47:52 2009 +0100
>
> Test Environment:
> ==========================================================================
> Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
> Hardware : Nehalem-HEDT
>
> Summary Test Report of Last Session
> =====================================================================
> Total Pass Fail NoResult Crash
> =====================================================================
> vtd_ept_vpid 16 16 0 0 0
> ras_ept_vpid 1 1 0 0 0
> control_panel_ept_vpid 10 10 0 0 0
> stubdom_ept_vpid 1 0 1 0 0
> gtest_ept_vpid 16 16 0 0 0
> acpi_ept_vpid 5 4 1 0 0
> device_model_ept_vpid 2 2 0 0 0
> =====================================================================
> vtd_ept_vpid 16 16 0 0 0
> :two_dev_up_xp_nomsi_64_ 1 1 0 0 0
> :two_dev_smp_nomsi_64_g3 1 1 0 0 0
> :two_dev_scp_64_g32e 1 1 0 0 0
> :lm_pcie_smp_64_g32e 1 1 0 0 0
> :lm_pcie_up_64_g32e 1 1 0 0 0
> :two_dev_up_64_g32e 1 1 0 0 0
> :lm_pcie_up_xp_nomsi_64_ 1 1 0 0 0
> :two_dev_up_nomsi_64_g32 1 1 0 0 0
> :two_dev_smp_64_g32e 1 1 0 0 0
> :lm_pci_up_xp_nomsi_64_g 1 1 0 0 0
> :lm_pci_up_nomsi_64_g32e 1 1 0 0 0
> :two_dev_smp_xp_nomsi_64 1 1 0 0 0
> :two_dev_scp_nomsi_64_g3 1 1 0 0 0
> :lm_pcie_smp_xp_nomsi_64 1 1 0 0 0
> :lm_pci_smp_nomsi_64_g32 1 1 0 0 0
> :lm_pci_smp_xp_nomsi_64_ 1 1 0 0 0
> ras_ept_vpid 1 1 0 0 0
> :cpu_online_offline_64_g 1 1 0 0 0
> control_panel_ept_vpid 10 10 0 0 0
> :XEN_1500M_guest_64_g32e 1 1 0 0 0
> :XEN_256M_guest_64_gPAE 1 1 0 0 0
> :XEN_256M_xenu_64_gPAE 1 1 0 0 0
> :XEN_Nevada_xenu_64_g32e 1 1 0 0 0
> :XEN_vmx_vcpu_pin_64_g32 1 1 0 0 0
> :XEN_linux_win_64_g32e 1 1 0 0 0
> :XEN_vmx_2vcpu_64_g32e 1 1 0 0 0
> :XEN_1500M_guest_64_gPAE 1 1 0 0 0
> :XEN_two_winxp_64_g32e 1 1 0 0 0
> :XEN_256M_guest_64_g32e 1 1 0 0 0
> stubdom_ept_vpid 1 0 1 0 0
> :boot_stubdom_no_qcow_64 1 0 1 0 0
> gtest_ept_vpid 16 16 0 0 0
> :boot_up_noacpi_win2k_64 1 1 0 0 0
> :reboot_xp_64_g32e 1 1 0 0 0
> :boot_solaris10u5_64_g32 1 1 0 0 0
> :boot_indiana_64_g32e 1 1 0 0 0
> :boot_smp_acpi_xp_64_g32 1 1 0 0 0
> :boot_up_acpi_64_g32e 1 1 0 0 0
> :boot_base_kernel_64_g32 1 1 0 0 0
> :kb_nightly_64_g32e 1 1 0 0 0
> :boot_nevada_64_g32e 1 1 0 0 0
> :boot_fc9_64_g32e 1 1 0 0 0
> :boot_smp_vista_64_g32e 1 1 0 0 0
> :boot_smp_win2008_64_g32 1 1 0 0 0
> :boot_smp_acpi_win2k3_64 1 1 0 0 0
> :boot_rhel5u1_64_g32e 1 1 0 0 0
> :boot_smp_acpi_win2k_64_ 1 1 0 0 0
> :reboot_fc6_64_g32e 1 1 0 0 0
> acpi_ept_vpid 5 4 1 0 0
> :monitor_c_status_64_g32 1 1 0 0 0
> :check_t_control_64_g32e 1 0 1 0 0
> :hvm_s3_smp_64_g32e 1 1 0 0 0
> :Dom0_S3_64_g32e 1 1 0 0 0
> :monitor_p_status_64_g32 1 1 0 0 0
> device_model_ept_vpid 2 2 0 0 0
> :pv_on_up_64_g32e 1 1 0 0 0
> :pv_on_smp_64_g32e 1 1 0 0 0
> =====================================================================
> Total 51 49 2 0 0
>
>
> -haicheng
-haicheng
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel