
[Xen-devel] [PATCH 5/6] ts-xen-build-prep: mkfs a new /home/osstest, don't resize2fs



Online resize is 40x slower than mkfs.  It also appears that the
backgrounded resize2fs can starve build tasks of I/O bandwidth.

So instead, use mkfs to make a new filesystem for /home/osstest.
We use rsync to copy in the old contents.

For convenience of (a) review and (b) possible reversion, we keep
(for now) the lvextend machinery.  So we create a new one-extent LV
for that machinery to work on.
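The LV creation rune in the patch is a probe-then-create pattern
(`lvdisplay $lv || lvcreate ...`), so rerunning it is harmless.  A
minimal sketch of that pattern, using a shell variable in place of the
real LVM commands (which need root and an actual volume group):

```shell
# "check || create" idempotency: the create step runs only when the
# probe fails, so running the whole rune twice changes nothing.
state=absent                          # stands in for the LV's existence
probe()  { [ "$state" = present ]; }  # stands in for `lvdisplay $lv`
create() { state=present; }           # stands in for `lvcreate -l 1 -n $lvleaf $vg`
probe || create                       # first run: creates
probe || create                       # second run: no-op
echo "$state"                         # prints: present
```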

But we do not call resize2fs when we extend it, because at that point
the LV does not yet have a filesystem on it.  We make the filesystem
later.

We move the ccache_setup call until after this is done, because it is
pointless to put things in the to-be-removed /home/osstest when they
could instead be put in the new filesystem after it has been set up.

We take care to make the rune somewhat idempotent: if it previously
completed successfully, we detect this and do not run it again.  But
if it failed part-way, things may be in a mess; running it again is
unlikely to help and may make things worse.
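The success check in the rune parses `mount` output into a list of
mount points and exits early if /home/osstest is already among them.
The sed pipeline can be checked against a sample `mount` line (the
device name and options here are illustrative):

```shell
# Extract the mount-point field from a line of `mount` output:
# strip everything up to " on ", then everything from the next space on.
echo '/dev/mapper/vg-osstest_home on /home/osstest type ext3 (rw)' \
  | sed -e 's/^[^ ].* on //; s/ .*//'
# prints: /home/osstest
```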

I have tested this on rice-weevil and the whole new target command
(including rsync, mkfs, mount etc.) takes 126s.

Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
---
 ts-xen-build-prep |   46 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index d600285..ab1346d 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -30,7 +30,7 @@ exit 0 if $ho->{SharedReady};
 
 our ($vg,$lv);
 
-our $lvleaf = 'root';
+our $lvleaf = 'osstest_home';
 our $pe_size;
 our $extended_pes = 0;
 
@@ -83,6 +83,11 @@ sub vginfo () {
     return @vginfo;
 }
 
+sub lvcreate () {
+    target_cmd_output_root($ho,
+                          "lvdisplay $lv || lvcreate -l 1 -n $lvleaf $vg");
+}
+
 sub lvextend1 ($$$) {
     my ($what, $max_more_gb)  = @_;
 
@@ -157,8 +162,39 @@ sub lvextend1 ($$$) {
 
     my $timeout = 2000 + int($pe_size * 0.000003 * $more_pe);
     logm("$what: ${pe_size}k x $more_pe (timeout=$timeout)");
-    my $cmd = "resize2fs $lv";
-    target_cmd_root($ho, $cmd, $timeout);
+}
+
+sub replace_home () {
+    my $dir = '/home/osstest';
+    my $mapper = lv_dev_mapper($vg,$lvleaf);
+    my ($fstype,@opts) = qw(ext3 -m 0 -O sparse_super);
+    target_cmd_root($ho, <<END, 1000);
+        set -ex
+       if mount | sed -e 's/^[^ ].* on //; s/ .*//' | grep -F '$dir'; then
+           exit 0
+       fi
+       mkfs -t $fstype @opts $lv
+        mount $lv /mnt
+       rsync -aHx --numeric-ids $dir/. /mnt/.
+       rm -rf $dir
+       mkdir -m 2700 $dir
+       echo '$mapper $dir $fstype defaults 0 0' >>/etc/fstab
+       umount /mnt
+       mount $dir
+END
+
+        # for convenience, here is a small scriptlet to undo this:
+        <<'END';
+#!/bin/sh
+set -ex
+cd /home
+rm -rf osstest.new
+rsync -aH --numeric-ids osstest/. osstest.new
+umount /home/osstest
+rmdir osstest
+mv osstest.new osstest
+lvremove -f /dev/`uname -n`/osstest_home
+END
 }
 
 sub prep () {
@@ -202,10 +238,12 @@ sub ccache_setup () {
 
 if (!$ho->{Flags}{'no-reinstall'}) {
     determine_vg_lv();
+    lvcreate();
     lvextend_stage1();
     prep();
-    ccache_setup();
     lvextend_stage2();
+    replace_home();
+    ccache_setup();
 }
 $mjobdb->jobdb_resource_shared_mark_ready
    ($ho->{Ident}, $ho->{Name}, "build-".$ho->{Suite}."-".$r{arch});
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

