
[Xen-devel] [PATCH OSSTEST v8 13/14] Add testing of file backed disk formats



xen-create-image makes this tricky to do since it is rather LVM
centric. Now that we have the ability to install from d-i it is
possible to arrange fairly easily for guests to use something other
than a phy backend over a bare LVM device.

Here we add support to the test script and infra and create a bunch of
new jobs testing the cross product of {xl,libvirt} x {raw,qcow2,vhd}.

A disk format of "raw" means a raw backing file, whereas "none" (the
default) means to continue to use the base LVM device.

The test scripts are modified such that when constructing a domain
whose diskfmt runvar specifies a file backed disk format (i.e. not
"none"):

 - the LVM device is slightly enlarged to account for file format
   headers (1M should be plenty).
 - the LVM device will have an ext3 filesystem created on it instead
   of being used as a phy device for the guest. Reusing the LVM volume
   in this way means we don't need to do more storage management in
   dom0 (i.e. arranging for / to be large enough, or managing a
   special "images" LV)
 - the relevant type of container is created within the filesystem
   using the appropriate tool.
 - New properties Disk{fmt,spec} are added to all $gho, containing
   the format used for the root disk and the xl diskspec to load it.
     - lvm backed guests use a xend/xm compatible spec, everything
       else uses the improved xl syntax which libvirt also supports.
       We won't test non-LVM on xend.
 - New properties Disk{mnt,img} are added to $gho for guests which are
   not using a bare LVM device. These contain the mount point to use
   (configurable via OSSTEST_CONFIG and runvars) and the full path
   (including mount point) to the image itself.
 - When starting or stopping a guest we arrange for the filesystem to
   be (u)mounted.
     - The preparation when starting a guest copes gracefully with
       the disk already being prepared.
     - Hooks are called from guest_create() and guest_destroy() to
       manipulate the disk as needed.
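
The preparation described above can be sketched roughly as follows.
This is a hedged illustration, not the actual TestSupport.pm code: the
lvcreate/mkfs/mount lines are shown as comments since they need a real
volume group and root, and a plain temporary directory stands in for
the mounted LV; $vg, $lv, $mnt and disk_mb are stand-in names.

```shell
# On the real test host (via target_cmd_root) the flow is roughly:
#   lvcreate -L $((disk_mb + 10))M -n $lv $vg   # slack for format headers
#   mkfs.ext3 /dev/$vg/$lv                      # filesystem to hold the image
#   mount /dev/$vg/$lv $mnt
# Here a temporary directory stands in for the mounted LV, and the
# raw-format container is created the same way make_raw does:
disk_mb=4
mnt=$(mktemp -d)
dd if=/dev/zero of="$mnt/disk.raw" bs=1MB count=$disk_mb 2>/dev/null
stat -c %s "$mnt/disk.raw"   # 4000000
```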

Using standalone-generate-dump-flight-runvars a representative set of
runvars is:
+test-amd64-amd64-xl-qcow2                     all_hostflags               arch-amd64,arch-xen-amd64,suite-wheezy,purpose-test
+test-amd64-amd64-xl-qcow2                     arch                        amd64
+test-amd64-amd64-xl-qcow2                     buildjob                    build-amd64
+test-amd64-amd64-xl-qcow2                     debian_arch                 amd64
+test-amd64-amd64-xl-qcow2                     debian_bootloader           pygrub
+test-amd64-amd64-xl-qcow2                     debian_diskfmt              qcow2
+test-amd64-amd64-xl-qcow2                     debian_kernkind             pvops
+test-amd64-amd64-xl-qcow2                     debian_method               netboot
+test-amd64-amd64-xl-qcow2                     debian_suite                wheezy
+test-amd64-amd64-xl-qcow2                     kernbuildjob                build-amd64-pvops
+test-amd64-amd64-xl-qcow2                     kernkind                    pvops
+test-amd64-amd64-xl-qcow2                     toolstack                   xl
+test-amd64-amd64-xl-qcow2                     xenbuildjob                 build-amd64

Compared to test-amd64-amd64-pygrub (which is the most similar job) and
normalising the test name the difference is:
 test-amd64-amd64-SUFFIX                       all_hostflags               arch-amd64,arch-xen-amd64,suite-wheezy,purpose-test
 test-amd64-amd64-SUFFIX                       arch                        amd64
 test-amd64-amd64-SUFFIX                       buildjob                    build-amd64
 test-amd64-amd64-SUFFIX                       debian_arch                 amd64
 test-amd64-amd64-SUFFIX                       debian_bootloader           pygrub
+test-amd64-amd64-SUFFIX                       debian_diskfmt              qcow2
+test-amd64-amd64-SUFFIX                       debian_kernkind             pvops
 test-amd64-amd64-SUFFIX                       debian_method               netboot
 test-amd64-amd64-SUFFIX                       debian_suite                wheezy
 test-amd64-amd64-SUFFIX                       kernbuildjob                build-amd64-pvops
 test-amd64-amd64-SUFFIX                       kernkind                    pvops
 test-amd64-amd64-SUFFIX                       toolstack                   xl
 test-amd64-amd64-SUFFIX                       xenbuildjob                 build-amd64

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
---
v8: Default diskfmt is "none" (was "lvm"), i.e. use the LVM device
    directly. Reword the commit log to reflect this.

v7: Use the right arch for tests, not always amd64 (doesn't work well
    on arm!)
    Defer guest_find_diskimg until the _vg runvar, and thence Lvdev,
    are set up:
        selectguest calls guest_find_lv then guest_find_diskimg, using
        preexisting runvars.

        But prepare_guest calls selectguest before setting disk_lv, so
        Lvdev ends up undefined; after setting disk_lv prepare_guest
        calls guest_find_lv+guest_find_diskimg again and things get
        configured.

        This follows how guest_find_lv only sets Lvdev iff Vg and Lv
        are both set.
    Use {Guest}_suite not {Guest}_dist as runvar to choose version.
    Assume slower dd for raw population, since I was still seeing
    timeouts; assume at worst half the speed I happened to see in a
    local test.
    Refresh the runvars in the commit log and drop the list of flights.
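
The timeout described above can be illustrated with hypothetical
numbers (the 65MB/s figure is the commit's stated worst-case
assumption; the 13000MB image size here is purely illustrative):

```shell
# Timeout for populating a raw image of disk_mb megabytes with dd,
# assuming a worst-case throughput of 65MB/s (half the ~130MB/s seen
# in a local test). Mirrors the ${disk_mb} / 65 expression in make_raw.
disk_mb=13000           # hypothetical 13GB image
echo $((disk_mb / 65))  # 200 seconds
```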
v6: Use bs=1MB (=1000*1000) when creating the raw images instead of
    bs=1M (=1024*1024); this matches the units used by lvcreate's -L
    option and therefore arranges that the image actually fits.
v5: Assume 100MB/s dd from /dev/zero when creating a raw disk image
    Allow 10M of slack on filesystem for raw, qcow and vhd. 1M wasn't
    enough in practice for raw.
v4: new patch
---
 Osstest/TestSupport.pm | 100 ++++++++++++++++++++++++++++++++++++++++++++++++-
 make-flight            |  16 ++++++++
 ts-debian-di-install   |  10 ++---
 ts-guest-start         |   1 -
 4 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 3a7a535..3e09e8a 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -91,7 +91,8 @@ BEGIN {
                       target_var target_var_prefix
                       selectguest prepareguest more_prepareguest_hvm
                       guest_var guest_var_commalist guest_var_boolean
-                      prepareguest_part_lvmdisk prepareguest_part_xencfg
+                      prepareguest_part_lvmdisk prepareguest_part_diskimg
+                      prepareguest_part_xencfg
                       guest_umount_lv guest_await guest_await_dhcp_tcp
                       guest_checkrunning guest_check_ip guest_find_ether
                       guest_find_domid guest_check_up guest_check_up_quick
@@ -99,6 +100,7 @@ BEGIN {
                       guest_await_shutdown guest_await_destroy guest_destroy
                       guest_vncsnapshot_begin guest_vncsnapshot_stash
                      guest_check_remus_ok guest_editconfig
+                      guest_prepare_disk guest_unprepare_disk
                       host_involves_pcipassthrough host_get_pcipassthrough_devs
                       toolstack guest_create
 
@@ -1344,6 +1346,7 @@ sub selectguest ($$) {
     }
     logm("guest: using $gn on $gho->{Host}{Name}");
     guest_find_lv($gho);
+    guest_find_diskimg($gho);
     guest_find_ether($gho);
     guest_find_tcpcheckport($gho);
     dhcp_watch_setup($ho,$gho);
@@ -1359,6 +1362,25 @@ sub guest_find_lv ($) {
         ? '/dev/'.$gho->{Vg}.'/'.$gho->{Lv} : undef;
 }
 
+sub guest_find_diskimg($)
+{
+    my ($gho) = @_;
+
+    return unless $gho->{Lvdev};
+
+    $gho->{Diskfmt} = $r{"$gho->{Guest}_diskfmt"} // "none";
+    $gho->{Diskspec} = "phy:$gho->{Lvdev},xvda,w";
+
+    return if $gho->{Diskfmt} eq "none";
+
+    my $mntroot = get_host_property($gho->{Host}, "DiskImageMount",
+                           $c{DiskImageMount} // "/var/lib/xen/images");
+
+    $gho->{Diskmnt} = "$mntroot/$gho->{Guest}";
+    $gho->{Diskimg} = "$gho->{Diskmnt}/disk.$gho->{Diskfmt}";
+    $gho->{Diskspec} = "format=$gho->{Diskfmt},vdev=xvda,target=$gho->{Diskimg}";
+}
+
 sub guest_find_ether ($) {
     my ($gho) = @_;
     $gho->{Ether}= $r{"$gho->{Guest}_ether"};
@@ -1398,6 +1420,7 @@ sub guest_destroy ($) {
     my ($gho) = @_;
     my $ho = $gho->{Host};
     toolstack($ho)->destroy($gho);
+    guest_unprepare_disk($gho);
 }
 
 sub guest_await_destroy ($$) {
@@ -1409,9 +1432,32 @@ sub guest_await_destroy ($$) {
 sub guest_create ($) {
     my ($gho) = @_;
     my $ho = $gho->{Host};
+    guest_prepare_disk($gho);
     toolstack($ho)->create($gho);
 }
 
+sub guest_prepare_disk ($) {
+    my ($gho) = @_;
+
+    guest_umount_lv($gho->{Host}, $gho);
+
+    return if $gho->{Diskfmt} eq "none";
+
+    target_cmd_root($gho->{Host}, <<END);
+mkdir -p $gho->{Diskmnt}
+mount $gho->{Lvdev} $gho->{Diskmnt};
+END
+}
+
+sub guest_unprepare_disk ($) {
+    my ($gho) = @_;
+    return if $gho->{Diskfmt} eq "none";
+    target_cmd_root($gho->{Host}, <<END);
+umount $gho->{Lvdev} || :
+END
+}
+
+
 
 sub target_choose_vg ($$) {
     my ($ho, $mbneeded) = @_;
@@ -1550,6 +1596,7 @@ sub prepareguest ($$$$$$) {
     }
 
     guest_find_lv($gho);
+    guest_find_diskimg($gho);
     guest_find_ether($gho);
     guest_find_tcpcheckport($gho);
     return $gho;
@@ -1560,7 +1607,56 @@ sub prepareguest_part_lvmdisk ($$$) {
     target_cmd_root($ho, "lvremove -f $gho->{Lvdev} ||:");
     target_cmd_root($ho, "lvcreate -L ${disk_mb}M -n $gho->{Lv} $gho->{Vg}");
     target_cmd_root($ho, "dd if=/dev/zero of=$gho->{Lvdev} count=10");
-}    
+}
+
+sub make_vhd ($$$) {
+    my ($ho, $gho, $disk_mb) = @_;
+    target_cmd_root($ho, "vhd-util create -n $gho->{Rootimg} -s $disk_mb");
+}
+sub make_qcow2 ($$$) {
+    my ($ho, $gho, $disk_mb) = @_;
+    # upstream qemu's version. Seems preferable to qemu-xen-img from qemu-trad.
+    my $qemu_img = "/usr/local/lib/xen/bin/qemu-img";
+    target_cmd_root($ho, "$qemu_img create -f qcow2 $gho->{Rootimg} ${disk_mb}M");
+}
+sub make_raw ($$$) {
+    my ($ho, $gho, $disk_mb) = @_;
+    # In local tests this reported 130MB/s, so calculate a timeout assuming 65MB/s.
+    target_cmd_root($ho, "dd if=/dev/zero of=$gho->{Rootimg} bs=1MB count=${disk_mb}",
+       ${disk_mb} / 65);
+}
+
+sub prepareguest_part_diskimg ($$$) {
+    my ($ho, $gho, $disk_mb) = @_;
+
+    my $diskfmt = $gho->{Diskfmt};
+    # Allow an extra 10 megabytes for image format headers
+    my $disk_overhead = $diskfmt eq "none" ? 0 : 10;
+
+    logm("preparing guest disks in $diskfmt format");
+
+    target_cmd_root($ho, "umount $gho->{Lvdev} ||:");
+
+    prepareguest_part_lvmdisk($ho, $gho, $disk_mb + $disk_overhead);
+
+    if ($diskfmt ne "none") {
+
+       $gho->{Rootimg} = "$gho->{Diskmnt}/disk.$diskfmt";
+       $gho->{Rootcfg} = "format=$diskfmt,vdev=xvda,target=$gho->{Rootimg}";
+
+       target_cmd_root($ho, <<END);
+mkfs.ext3 $gho->{Lvdev}
+mkdir -p $gho->{Diskmnt}
+mount $gho->{Lvdev} $gho->{Diskmnt}
+END
+        no strict qw(refs);
+        &{"make_$diskfmt"}($ho, $gho, $disk_mb);
+
+       target_cmd_root($ho, <<END);
+umount $gho->{Lvdev}
+END
+    }
+}
 
 sub prepareguest_part_xencfg ($$$$$) {
     my ($ho, $gho, $ram_mb, $xopts, $cfgrest) = @_;
diff --git a/make-flight b/make-flight
index 2a132df..a8ed20e 100755
--- a/make-flight
+++ b/make-flight
@@ -395,6 +395,22 @@ do_pv_debian_tests () {
   for xsm in $xsms ; do
     do_pv_debian_test_one libvirt '' libvirt enable_xsm=$xsm
   done
+
+  for ts in xl libvirt ; do
+
+    for fmt in raw vhd qcow2 ; do
+
+      fmt_runvar="debian_diskfmt=$fmt"
+
+      do_pv_debian_test_one $ts-$fmt '-di' $ts  \
+          debian_arch=$dom0arch                 \
+          debian_suite=$guestsuite              \
+          debian_method=netboot                 \
+          debian_bootloader=pygrub              \
+          $fmt_runvar
+
+    done
+  done
 }
 
 test_matrix_do_one () {
diff --git a/ts-debian-di-install b/ts-debian-di-install
index 6fafd6d..34b8e1e 100755
--- a/ts-debian-di-install
+++ b/ts-debian-di-install
@@ -75,9 +75,7 @@ sub prep () {
     $gho= prepareguest($ho, $gn, $guesthost, 22,
                        $disk_mb, 40);
 
-    prepareguest_part_lvmdisk($ho, $gho, $disk_mb);
-
-    target_cmd_root($ho, "umount $gho->{Lvdev} ||:");
+    prepareguest_part_diskimg($ho, $gho, $disk_mb);
 }
 
 sub setup_netinst($$)
@@ -227,14 +225,12 @@ END
        OnPowerOff => "preserve"
     );
 
-    my $root_disk = "'phy:$gho->{Lvdev},xvda,w'";
-
     prepareguest_part_xencfg($ho, $gho, $ram_mb, \%install_xopts, <<END);
 $method_cfg
 extra       = "$cmdline"
 #
 disk        = [
-            $extra_disk $root_disk
+            $extra_disk '$gho->{Diskspec}'
             ]
 END
 
@@ -258,7 +254,7 @@ END
 $blcfg
 #
 disk        = [
-            $root_disk
+            '$gho->{Diskspec}'
             ]
 END
     return;
diff --git a/ts-guest-start b/ts-guest-start
index 1aa9e69..a434720 100755
--- a/ts-guest-start
+++ b/ts-guest-start
@@ -25,7 +25,6 @@ tsreadconfig();
 our ($ho,$gho) = ts_get_host_guest(@ARGV);
 
 sub start () {
-    guest_umount_lv($ho, $gho);
     guest_create($gho);
 }
 
-- 
2.1.4

