
[Xen-devel] Re: Interaction between Xen and XFS: stray RW mappings



Nick Piggin wrote:
> Yeah, it would be possible. The easiest way would just be to shoot down
> all lazy vmaps: you're doing the global IPIs anyway, and those are the
> expensive part, so you may as well purge the rest of your lazy mappings
> at the same time.
>   

Sure.
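
Roughly, I'd imagine the purge looking something like this (a sketch
only: the struct, list and helper names are all invented, and
flush_tlb_kernel_range() is the only real kernel interface used):

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/tlbflush.h>

/* Invented bookkeeping for lazily deferred vmap ranges. */
struct lazy_vmap {
	struct list_head list;
	unsigned long va_start, va_end;
};

static LIST_HEAD(lazy_vmap_list);
static DEFINE_SPINLOCK(lazy_vmap_lock);

/* Sketch: real code would also return the VA range to the allocator. */
static void __free_lazy_vmap(struct lazy_vmap *lv)
{
	kfree(lv);
}

static void purge_all_lazy_vmaps(void)
{
	struct lazy_vmap *lv, *tmp;
	unsigned long start = ULONG_MAX, end = 0;

	spin_lock(&lazy_vmap_lock);
	list_for_each_entry_safe(lv, tmp, &lazy_vmap_list, list) {
		start = min(start, lv->va_start);
		end = max(end, lv->va_end);
		list_del(&lv->list);
		__free_lazy_vmap(lv);
	}
	spin_unlock(&lazy_vmap_lock);

	/* One global flush (IPIs to every CPU) covers all purged ranges. */
	if (start < end)
		flush_tlb_kernel_range(start, end);
}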

> If it is sufficiently rare, then it could be the simplest thing to do.
>   

Yes.  If there were some way to tell whether a particular page is still
covered by a lazy mapping, that would help, since then we could quickly
decide whether we need to do the whole shootdown.  I would expect the
population of lazily mapped pages in the whole freepage pool to be
pretty small, but if the allocator tends to return the most recently
freed pages, you might hit them fairly regularly (shoving them onto the
other end of the freelist might be useful).
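
If such a check did exist, the caller could look something like this
(page_has_lazy_vmap() is invented for illustration, and
make_page_readonly() stands in for the existing Xen pagetable helper):

#include <linux/mm.h>

extern bool page_has_lazy_vmap(struct page *page);	/* invented */
extern void make_page_readonly(struct page *page);	/* stand-in */
extern void purge_all_lazy_vmaps(void);			/* sketch above */

static void xen_prepare_pagetable_page(struct page *page)
{
	/* Only pay for the global IPI shootdown when it's needed. */
	if (page_has_lazy_vmap(page))
		purge_all_lazy_vmaps();

	make_page_readonly(page);
}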

> OK, I see. Even though it is technically safe where we are using it
> (nothing writes through the mappings after the page is freed), a
> corrupted guest could use the same window to do bad things with the
> pagetables?
>   

That's right.  The hypervisor doesn't trust the guests, so it prevents
them from ever getting into a state where they can do bad things (here,
retaining a writable alias to a page that's in use as a pagetable).

> For Xen -- shouldn't be a big deal. We can have a single Linux mm API
> to call, and we can do the right thing WRT vmap/kmap. Should I try to
> merge my current lazy vmap patches, which replace the XFS stuff, so we
> can implement such an API and fix your XFS issue?

Sounds good.
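
If the mm API ends up as a single "flush all lazy aliases" entry point,
the Xen side becomes trivial.  A sketch, with vm_unmap_aliases() as my
guess at a name, and xen_alloc_pt()/make_page_readonly() as illustrative
stand-ins:

#include <linux/mm.h>

extern void purge_all_lazy_vmaps(void);			/* first sketch */
extern void make_page_readonly(struct page *page);	/* stand-in */

/* Guessed name for the single mm entry point. */
void vm_unmap_aliases(void)
{
	/* Flush every lazily deferred kernel mapping; ideally a
	 * cheap no-op when nothing is pending. */
	purge_all_lazy_vmaps();
}

/* The Xen pagetable path would then reduce to: */
static void xen_alloc_pt(struct page *page)
{
	vm_unmap_aliases();		/* kill any stray RW aliases */
	make_page_readonly(page);
}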

>  That's not going to
> happen for at least a release cycle or two though, so in the meantime
> maybe an ifdef for that XFS vmap batching code would help?
>   

For now I've proposed a patch that simply eagerly vunmaps everything
when CONFIG_XEN is set.  It certainly works, but I don't have a good
feel for how much of a performance hit it imposes on XFS.  A slightly
subtler change would be to check at runtime whether we're actually
running under Xen before taking the eager path, so that the performance
burden falls only on actual Xen users (I presume xfs+xen is a fairly
rare combination, or this problem would have turned up earlier; perhaps
the old xenified kernels have some other workaround for it).
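
Concretely, the interim workaround amounts to something like this
(xfs_free_vmap() and queue_deferred_vunmap() are illustrative stand-ins
for XFS's batching logic, and running_on_xen() for whatever runtime Xen
predicate is available):

#include <linux/vmalloc.h>

extern bool running_on_xen(void);		/* assumed predicate */

/* Stand-in for XFS's deferred, batched vunmap path. */
extern void queue_deferred_vunmap(void *addr);

static void xfs_free_vmap(void *addr)
{
#ifdef CONFIG_XEN
	if (running_on_xen()) {
		/* Eager path: no stale RW alias survives the free. */
		vunmap(addr);
		return;
	}
#endif
	/* Normal path: defer the unmap and batch the TLB flushes. */
	queue_deferred_vunmap(addr);
}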

    J
