
Re: [Xen-devel] VM Migration on a NUMA server?

On Mon, Aug 3, 2015 at 6:10 PM Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
On Sat, 2015-08-01 at 06:21 +0000, Kun Cheng wrote:
> I've looked into those settings and vcpu affinities on Xen wiki page.
> However what I'm trying to argue about is memory should be migrated
> when vcpus are moved to another node.
I still struggle a bit to understand what you mean. It's exactly what I
said (or, at least, what I tried to say) that memory is *not* moved right
now.
OK, I get it: memory is not moved right now. What I was discussing is the necessity of doing that. Given the locality principle (programs tend to access data they have accessed recently) and the potential memory access latency (memory pages stay on node A while the vCPUs are moved to node B), I suppose moving memory pages along with the vCPUs is necessary. If it can be achieved, we should be able to improve VM execution performance, I suppose (though it will probably introduce overhead, as moving memory is really, really annoying...). And if my words still confuse you, please feel free to tell me which part misleads you :)

There's no mean to do that, and putting one together is something really
difficult. There probably would be benefits to having it in place, but
only if it's implemented in the proper way, and applied with the proper
policy, all of which should be thought out, discussed and implemented.

What do you mean by saying "There's no mean to do that"? Do you mean the underlying implementation (e.g. functions or other machinery) has no, or only incomplete, support for moving memory pages (I have seldom explored the mechanisms that deal with memory)? Or do you mean it's just too tricky and difficult to achieve such a goal?
> But setting a vcpu's affinity only seems to allow vcpu migration
> between different nodes. As my NUMA server is on the way I cannot
> verify that.
Well, that is correct, memory is not moved. The local migration trick,
which I described in my previous email, actually works "only" because
migrating a guest basically means re-creating it from scratch (although,
of course, the "new" guest resumes from where the "old" one was when
migration started), and hence re-allocating all of its memory, rather than
moving it.

Yes, 'local migration' cannot be seen as an actual 'memory migration'. But from the VM user's point of view, their VM gets scheduled to another node and the relevant memory also 'appears' there, so strictly speaking it's a fake 'move'.

I'm also thinking about the plan B I mentioned. Basically, that means following the VM migration procedure to learn how to move a page. I think the two share something in common, as moving a page amounts to allocating a new one and copying the old data into it (I'm still exploring how to let the vCPU access the new pages; remapping?), and at the same time dirty pages etc. need to be dealt with. Am I correct?

> Anyhow, moving memory really draws my interest now.
Glad to hear that. Feel free to give it a try, and report here what you
find out. :-)

I'm evaluating the feasibility now, and I hope (if I'm going to do it) that it (or at least phase 1; maybe it should be split into multiple stages to reduce the difficulty and complexity) can be completed in 6-8 months.


<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
Xen-devel mailing list


