>>> On 3/16/2010 at 5:24 PM, in message
>>> <201003162224.02105.joost@xxxxxxxxxxxx>, "J. Roeleveld"
>>> <joost@xxxxxxxxxxxx> wrote:
> On Tuesday 16 March 2010 03:50:18 Ky Srinivasan wrote:
>> >>> On 3/14/2010 at 9:49 AM, in message
>> >>> <f4527be1003140649p6d9cced6u7d1fde07897ae70c@xxxxxxxxxxxxxx>,
>> >>> Andrew Lyon <andrew.lyon@xxxxxxxxx> wrote:
>> > On Fri, Mar 12, 2010 at 10:41 AM, J. Roeleveld <joost@xxxxxxxxxxxx> wrote:
>> >> On Tuesday 09 March 2010 20:56:11 Ky Srinivasan wrote:
>> >>> The attached patch supports dynamic resizing of vbds.
>> >>>
>> >>> Signed-off-by: K. Y. Srinivasan <ksrinivasan@xxxxxxxxxx>
>> >>
>> >> Thank you for this.
>> >>
>> >> The patch applied successfully against the gentoo-xen kernel
>> >> (2.6.29-xen-r4).
>> >>
>> >> I will test the patch on my system during the next week and provide
>> >> feedback.
>>
>> Thanks. Looking forward to your feedback.
>>
>> K. Y
>
> Ok, finally got time to test it.
> I have not seen any major crashes, but my domU and filesystem did end up in
> an unusable state.
>
> I also noticed that the change entries in the logs didn't show up until I
> "touched" the drive.
> E.g.: "ls <mount point>"
This is by design. The change made to the device on the host is propagated to
the device in the guest in the context of the dedicated host thread that
services I/O requests for the device in question. So if there is any activity
on the device, the size information (if it has changed) will be propagated to
the guest.
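For example, any read against the device should be enough to trigger the
update. A minimal sketch (the device and mount point names are taken from your
transcript below):
--
domU: ls /data/homes                              # any activity on the mounted filesystem, or
domU: dd if=/dev/sdb of=/dev/null bs=512 count=1  # a single read of the raw device
domU: cat /sys/block/sdb/size                     # should now show the new capacity, in 512-byte sectors
--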
>
> When trying to do an online resize, "resize2fs" refused, saying the
> filesystem was already using the full space:
> --
> storage ~ # resize2fs /dev/sdb1
> resize2fs 1.41.9 (22-Aug-2009)
> The filesystem is already 104857600 blocks long. Nothing to do!
> --
>
> This was then 'resolved' by umount/mount of the filesystem:
> --
> storage ~ # umount /data/homes/
> storage ~ # mount /data/homes/
> storage ~ # resize2fs /dev/sdb1
> resize2fs 1.41.9 (22-Aug-2009)
> Filesystem at /dev/sdb1 is mounted on /data/homes; on-line resizing required
> old desc_blocks = 25, new_desc_blocks = 29
> Performing an on-line resize of /dev/sdb1 to 117964800 (4k) blocks.
> --
>
> These actions were taken in the domU.
>
> The patch informs the domU about the new size, but the new size is not
> cascaded through all the layers.
You are right. This patch only sets the capacity of the device in the guest so
that it correctly tracks the capacity of the corresponding device on the host
side, and nothing more. So in your example, I suspect that if on the host side
you had mounted the LVM device and subsequently resized it, you would have seen
the same behavior you saw here (with the patch): you may have to umount/mount
to see the change in size, and this patch is not going to address that.
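If you want to verify this, the equivalent host-side experiment would be
something along these lines (a sketch; the mount point is illustrative):
--
dom0: mount /dev/vg/foo /mnt/test
dom0: lvresize -L+10G /dev/vg/foo
dom0: resize2fs /dev/vg/foo   # check whether this, too, needs an umount/mount first
--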
Even with this restriction, I would submit it is a step in the right direction.
In environments where the dom0 and domU administrators are different, this
patch significantly simplifies the coordination required to dynamically
provision storage: the actions to be performed on the dom0 side are independent
of those on the domU side, and the domU administrator can carry out the domU
steps whenever it is most convenient. Furthermore, with applications that do
not cache the metadata, or that can be forced to re-read the metadata without
having to umount/mount the device, we can have a truly dynamic environment.
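In other words, the two sides can proceed independently. A sketch of the
decoupled workflow, reusing the device names from your example:
--
dom0: lvresize -L+10G /dev/vg/foo             # at any time, with no domU coordination
domU: cat /sys/block/sdb/size                 # later, whenever convenient: the new capacity is visible
domU: umount /data/homes; mount /data/homes   # only needed as long as the FS caches the old size
domU: resize2fs /dev/sdb1
--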
>
> I'm not familiar enough with the kernel internals to point to where the
> missing part is.
>
> My ideal situation would allow the following to work without additional
> steps:
>
> dom0: lvresize -L+10G /dev/vg/foo
> domU: resize2fs /dev/sdb1
>
> (with "/dev/vg/foo" exported to domU as "/dev/sdb1")
If this worked on native physical hardware, then I would think it would work
here as well.
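(For comparison, the closest native analog I can think of is growing a
SAN/iSCSI LUN and then rescanning it from the host; a sketch, with an
illustrative device name:
--
echo 1 > /sys/block/sdb/device/rescan   # ask the SCSI layer to re-read the capacity
resize2fs /dev/sdb1
--
There, too, the filesystem only grows once resize2fs is run.)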
>
> Right now, I need to do the following:
> dom0: lvresize -L+10G /dev/vg/foo
> domU: ls /mnt/sdb1
> domU: umount /mnt/sdb1
> domU: mount /mnt/sdb1
> domU: resize2fs /dev/sdb1
>
This is a restriction imposed by the FS.
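One way to check which layer is still reporting the old size (a diagnostic
sketch; assumes the ext2/3 tools):
--
domU: blockdev --getsize64 /dev/sdb1              # the block layer's view of the capacity
domU: tune2fs -l /dev/sdb1 | grep 'Block count'   # the filesystem's own view
--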
> During the 2nd attempt, when I tried to umount the filesystem after
> increasing it again, the domU ended up at 100% I/O wait.
Could you give the exact steps to reproduce this problem?
> The logs themselves do not, however, show any useful information.
>
> I waited for about 30 minutes and saw no change to this situation.
>
> I am afraid that for now I will revert to not having this patch applied and
> use the 'current' method of increasing the filesystem sizes.
Thank you for taking the time to test. Hopefully, I will be able to fix the
problem you are seeing.
Regards,
K. Y
>
> Please let me know if there is any further testing I can help with.
>
> --
> Joost Roeleveld
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel