
Re: [Xen-devel] Re: [PATCH] EPT: Only sync pcpus on which a domain's vcpus might be running



On Fri, Sep 18, 2009 at 2:21 PM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> How bad is it if you don't flush at all? 40s is still not great and maybe we
> could batch the INVEPT invocations? That might also greatly reduce the cost
> of the current naïve flush-all implementation, to a point where it is
> actually acceptable.

In theory, the memory handed back by the balloon driver shouldn't be
touched by the OS.  As far as the guest giving up the pages is
concerned, I think it would be OK if its accesses to that gfn space
didn't fail right away; however, we can't give the memory to another
guest until we know for sure that the first guest can't possibly
access it anymore.  I think we should be able to modify the balloon
driver to batch some number of updates -- say, 1024 at a time.  Paul,
any thoughts on this?
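To make the batching idea concrete, here's a minimal sketch.  Assume a
hypothetical gfn_batch structure in the balloon driver that queues gfns
and issues one hypercall (and hence one EPT sync) per 1024 pages instead
of one per page; BATCH_SIZE, queue_gfn(), and flush_gfn_batch() are
illustrative names, not existing Xen interfaces:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: accumulate gfns to release and hand them to the
 * hypervisor in batches, so one INVEPT covers many p2m updates. */
#define BATCH_SIZE 1024

typedef unsigned long gfn_t;

struct gfn_batch {
    gfn_t gfns[BATCH_SIZE];
    size_t count;
    unsigned long flushes;   /* hypercalls issued (for illustration) */
};

/* Issue one hypothetical hypercall covering every queued gfn. */
static void flush_gfn_batch(struct gfn_batch *b)
{
    if (b->count == 0)
        return;
    /* ... single hypercall releasing b->gfns[0 .. b->count-1],
     * followed by one EPT sync instead of one per page ... */
    b->flushes++;
    b->count = 0;
}

static void queue_gfn(struct gfn_batch *b, gfn_t gfn)
{
    b->gfns[b->count++] = gfn;
    if (b->count == BATCH_SIZE)
        flush_gfn_batch(b);
}
```

The caller would invoke flush_gfn_batch() once more at the end of a
ballooning pass to commit any partial batch.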

The other thing is the level of indirection -- we have to add a
parameter to set_p2m_entry() that says, "Don't sync this right away",
and then add another function that says, "Now commit all the changes I
just made".  That may take some thought to do correctly.

I think avoiding flush-all is still a good idea, as we have other
things like populate-on-demand and zero-page sweep doing lots of
modifications of the p2m table during boot that can't be batched this
way.  For example, the zero sweep *must* remove the page from the p2m
before doing a final scan to make sure it's still zero.  I suppose we
could scan a list of pages, remove them all from the p2m (taking the
flush-all), and then scan them again... but it seems like now we're
starting to get a lot more complicated than just keeping a mask or two
around.

Thoughts?

I think starting with a "flush-on-switch-to" would be good; it should
be fairly straightforward to make that flush happen only when:
* the domain has run on that cpu before, and
* the domain has had p2m changes since the last time it ran
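The two conditions above could be tracked with a pair of per-domain
cpu masks.  A minimal sketch, with NR_CPUS, the bool-array "masks", and
all function names as simplified stand-ins for Xen's real cpumask API:
a p2m change marks every cpu the domain has run on as stale, and
switching a vcpu onto a cpu flushes only if that cpu is marked:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NR_CPUS 8

struct domain {
    bool ran_on[NR_CPUS];      /* cpus this domain has run on */
    bool needs_flush[NR_CPUS]; /* cpus that may hold stale EPT entries */
    unsigned long flushes;     /* INVEPTs issued (for illustration) */
};

/* Called whenever the domain's p2m is modified. */
static void p2m_changed(struct domain *d)
{
    /* every cpu the domain has run on may now be stale */
    memcpy(d->needs_flush, d->ran_on, sizeof(d->needs_flush));
}

/* Called when one of the domain's vcpus is scheduled onto @cpu. */
static void vcpu_switch_to(struct domain *d, int cpu)
{
    if (d->needs_flush[cpu]) {
        /* ... INVEPT on this cpu only ... */
        d->flushes++;
        d->needs_flush[cpu] = false;
    }
    d->ran_on[cpu] = true;
}
```

A cpu the domain has never run on is never flushed, and a stale cpu is
flushed at most once per batch of p2m changes.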

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
