
Re: [Xen-devel] [PATCH v3 04/15] argo: init, destroy and soft-reset, with enable command line opt



On Mon, Jan 14, 2019 at 9:32 AM Christopher Clark
<christopher.w.clark@xxxxxxxxx> wrote:
>
> On Fri, Jan 11, 2019 at 1:27 AM Roger Pau Monné <royger@xxxxxxxxxxx> wrote:
> > On Fri, Jan 11, 2019 at 7:04 AM Christopher Clark
> > <christopher.w.clark@xxxxxxxxx> wrote:
> > >
> > > On Thu, Jan 10, 2019 at 2:19 AM Roger Pau Monné <royger@xxxxxxxxxxx> 
> > > wrote:
> > > >
> > > >  On Mon, Jan 7, 2019 at 8:44 AM Christopher Clark
> > > > <christopher.w.clark@xxxxxxxxx> wrote:
> > > > > +/*
> > > > > + * Locking is organized as follows:
> > > > > + *
> > > > > + * Terminology: R(<lock>) means taking a read lock on the specified 
> > > > > lock;
> > > > > + *              W(<lock>) means taking a write lock on it.
> > > > > + *
> > > > > + * L1 : The global lock: argo_lock
> > > > > + * Protects the argo elements of all struct domain *d in the system.
> > > > > + * It does not protect any of the elements of d->argo, only their
> > > > > + * addresses.
> > > > > + *
> > > > > + * By extension since the destruction of a domain with a non-NULL
> > > > > + * d->argo will need to free the d->argo pointer, holding W(L1)
> > > > > + * guarantees that no domain pointers that argo is interested in
> > > > > + * become invalid whilst this lock is held.
> > > > > + */
> > > > > +
> > > > > +static DEFINE_RWLOCK(argo_lock); /* L1 */
> >
> > I'm wondering why a global argo_lock shared with all domains is used
> > to protect d->argo, instead of using a per-domain lock (d->argo_lock
> > for example). This global argo_lock shared between all domains is
> > going to introduce contention with no specific benefit AFAICT.
>
> The granular locking structure is motivated by:
> 1) Performance isolation / DoS avoidance
> 2) Teardown of state across multiple domains on domain destroy
> 3) Performance via concurrent operation of rings
>
> Using the single global lock avoids the need for sequencing the
> acquisition of multiple individual per-domain locks (and lower level
> data structure locks) to prevent deadlock: taking W(L1) grants access
> to all and taking R(L1) ensures that teardown of any domain will not
> interfere with any Argo hypercall operation. It supports using the
> granular locks across domains without complicated or fragile lock
> acquisition logic.

I'm not sure such a global lock is needed. Isn't it going to introduce
a non-trivial amount of contention?

Please bear with me: if instead of the global lock a d->argo_lock is
used, is there any use-case where you would need to lock multiple
d->argo_locks in sequence? I'm not sure I see which use-case would
require this, since I expect you take the d->argo_lock, perform the
needed operations on that domain's argo data/rings and move on to the
next one.

> I've written a document about the locking to add to the tree with the
> series, and a copy is at github here:
>
> https://github.com/dozylynx/xen/blob/0cb95385eba696ecf4856075a524c5e528e60455/docs/misc/argo-locking.md

Thanks. It would have been better to send the contents of the document
to the list, so inline comments can be added. It's hard to comment on
the document now since it's only on github AFAICT.

As a general comment, I think the "Hierarchical Locking Model and
Protocol" is too complex. The fact that there are, for example,
multiple interchangeable lock sequences for writing to the rings does
not make for a good locking scheme, because it is hard to reason
about.

For example, you can write to a ring by simply write-locking L1, or
by read-locking L1 and L2 and then locking the ring lock (L3). I
would instead state that writing to a ring _always_ requires L3 to be
locked, regardless of the other locks. This makes it easier to reason
about locking correctness, and about which locks protect which data
in the argo structure, if indeed so many different locks are
required.

There are also several claims that finer-grained locking provides
better performance, in order to justify the need for such locks.
Without any evidence of such a performance benefit, it's hard to be
convinced that so many locks are needed.

> > I would recommend an initial implementation that uses a single
> > per-domain lock (ie: d->argo_lock) to protect the whole contents of
> > d->argo, and then go adding more fine grained locking as required,
> > providing evidence that such finer-grained locking is actually
> > improving performance (or is required for some other reason). IMO, the
> > current locking scheme is overly complicated, and it's very hard for
> > me to reason about its correctness.
>
> I've now implemented some macros to describe and document locking
> requirements in the code, and simplify ASSERTing the correct status.
> The majority of functions have a single annotation at entry indicating
> and verifying their locking status. They are described in the doc.
>
> >
> > > > > +void
> > > > > +argo_soft_reset(struct domain *d)
> > > > > +{
> > > > > +    write_lock(&argo_lock);
> > > > > +
> > > > > +    argo_dprintk("soft reset d=%d d->argo=%p\n", d->domain_id, 
> > > > > d->argo);
> > > > > +
> > > > > +    if ( d->argo )
> > > > > +    {
> > > > > +        domain_rings_remove_all(d);
> > > > > +        partner_rings_remove(d);
> > > > > +        wildcard_rings_pending_remove(d);
> > > > > +
> > > > > +        if ( !opt_argo_enabled )
> > > > > +        {
> > > > > +            xfree(d->argo);
> > > > > +            d->argo = NULL;
> > > >
> > > > Can opt_argo_enabled change during runtime?
> > >
> > > Not at the moment, no. It should be made changeable
> > > later, but keeping it fixed assists with derisking this for
> > > release consideration.
> >
> > Then if d->argo is set opt_argo_enabled must be true, and thus this
> > condition is never true?
>
> Yes; ack, have removed this logic.
>
> I'm holding off on posting v4 of the series but my latest tree, with the
> new macros applied, is on github at:
>
> branch:
> https://github.com/dozylynx/xen/tree/staging-argo-2019-01-13
> main file within that branch:
> https://github.com/dozylynx/xen/blob/staging-argo-2019-01-13/xen/common/argo.c

I'm finding the patch series somewhat difficult to review. For
example, patch 4 introduces a full-blown argo_domain structure with
all the sub-structures and fields, not all of which, I think, are
actually used in that patch. It would be easier to review if a
simpler set of operations were introduced first, such as starting by
only allowing single-domain rings and then introducing multicast
rings.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

