[Xen-users] Live migration downtime longer when mem < maxmem
Hi there,
I am experiencing a weird (at least to me) behavior in live migration.
I have two VMs, one with 1024M maxmem and the other with 512M maxmem. When I set a VM's mem equal to its maxmem, I get around 3s of downtime. When I reduce mem to half of maxmem, I get around 20s of downtime! I tested with both VMs.
Just a note: both the VMs and the physical machines are idle. The network isn't dedicated, but the tests were run consecutively and repeated several times. I'm also working on figuring out why the downtime is longer than the few ms described in the Xen papers; I suspect it has to do with the switch...
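In case it matters, here is roughly how I run and measure a test. This is only a sketch: the domain name `dom1`, the destination host, and the ping-based estimate are placeholders for my actual setup, not an exact transcript.

```shell
#!/bin/sh
# Sketch of one test run (dom1 / desthost are placeholders):
#   xm mem-set dom1 512              # balloon the guest down (maxmem stays 1024M)
#   xm migrate --live dom1 desthost  # live-migrate it
# Meanwhile, from a third machine:
#   ping -i 0.1 <guest-ip>
# and afterwards estimate downtime from the largest gap in the
# icmp_seq numbers of the replies that came back.

# downtime_from_seqs INTERVAL SEQ...
# Prints (largest run of missing replies) * INTERVAL, in seconds.
downtime_from_seqs() {
  interval=$1; shift
  prev=''
  max_gap=0
  for seq in "$@"; do
    if [ -n "$prev" ]; then
      gap=$((seq - prev - 1))   # replies missing between two received ones
      if [ "$gap" -gt "$max_gap" ]; then
        max_gap=$gap
      fi
    fi
    prev=$seq
  done
  awk -v g="$max_gap" -v i="$interval" 'BEGIN { printf "%.1f\n", g * i }'
}

# Example: replies 4..33 were lost at a 0.1s interval -> ~3.0s downtime.
downtime_from_seqs 0.1 1 2 3 34 35
```

It's crude (resolution is the ping interval, and ARP/switch behavior can stretch the gap), but it's repeatable enough to show the 3s vs 20s difference.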
But in the meantime, why on earth should a VM's migration take longer when I reduce its mem?
I appreciate any ideas, theories, or even a trivial explanation that would make me look silly :)
Thanks.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users