
Re: [Xen-devel] [PATCH] x86: fix paging_log_dirty_op to work with paging guests



On Mon, Dec 17, 2018 at 09:47:30AM -0700, Jan Beulich wrote:
> >>> On 17.12.18 at 16:42, <roger.pau@xxxxxxxxxx> wrote:
> > On Fri, Dec 14, 2018 at 04:52:00AM -0700, Jan Beulich wrote:
> >> >>> On 14.12.18 at 12:45, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Fri, Dec 14, 2018 at 03:45:21AM -0700, Jan Beulich wrote:
> >> >> >>> On 14.12.18 at 11:03, <roger.pau@xxxxxxxxxx> wrote:
> >> >> > I expect the interdomain locking resulting from a paging caller
> >> >> > domain to be restricted to the caller domain's p2m lock, due to
> >> >> > the use of the copy to/from helpers.
> >> >> > 
> >> >> > Maybe the less intrusive change would be to just allow locking the
> >> >> > caller's p2m lock (and only that lock) regardless of the subject
> >> >> > domain's lock level?
> >> >> 
> >> >> With caller != subject, and with the lock level then being higher
> >> >> than all "normal" ones, this might be an option. But from the
> >> >> very beginning we should keep the transitive aspect here in
> >> >> mind: If Dom0 controls a PVH domain which controls a HVM one,
> >> >> the Dom0 p2m lock may also need allowing to nest inside the PVH
> >> >> DomU's one, so it'll be a total of two extra new lock levels even
> >> >> if we restrict this to the p2m locks.
> >> > 
> >> > I'm not sure I follow here. So far we have spoken about a subject and
> >> > a caller domain (the subject being the target of the operation). In the
> >> > above scenario I see a relation between Dom0 and the PVH domain, and
> >> > between the PVH domain and the HVM one, but not a relation that
> >> > encompasses the three domains in terms of mm locking.
> >> > 
> >> > Dom0 (caller) mm lock could be nested inside the PVH DomU (subject) mm
> >> > locks, and the PVH DomU (caller) locks could be nested inside the HVM
> >> > domain (subject) locks, but I don't see an operation where Dom0 mm
> >> > lock could be nested inside of a PVH DomU lock that's already nested
> >> > inside of the HVM DomU lock.
> >> 
> >> Well, if we're _sure_ this can't happen, then of course we also
> >> don't need to deal with it.
> > 
> > So I've got a patch that adds a positive offset to the lock priority
> > when the lock owner is the current domain. This allows locking other
> > domains' (subject) mm locks and then taking the current domain's mm locks.
> > 
> > There's one slight issue with the page-sharing lock, which will always
> > be locked as belonging to a subject domain, because there's no clear
> > owner of that lock. AFAICT some page-sharing routines are even run
> > from the idle thread.
> > 
> > I'm attaching the patch below; note it keeps the same lock order as
> > before, but allows taking the caller's locks _only_ after having taken
> > some of the subject's mm locks. Note this doesn't allow interleaved mm
> > locking between the subject and the caller.
> > 
> > The nice part about this patch is that changes are mostly confined to
> > mm-locks.h, there's only an extra change to xen/arch/x86/mm/p2m-pod.c.
> 
> Yes, this looks quite reasonable at the first glance. One thing I
> don't understand is why you compare domain IDs instead of
> struct domain pointers.

This patch started as something else, where I planned to store the
domid, and I just reused that. Comparing domain pointers is indeed
better.

> The other thing is that perhaps it would
> be a good idea to factor out the biasing construct into a macro or
> inline function.

Sure.

While there I might also add a pre-patch to convert __check_lock_level
and __set_lock_level into functions instead of macros.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

