Re: [Xen-devel] [PATCH v2 1/2] x86/mem-sharing: Bulk mem-sharing entire domains



>>> On 09.10.15 at 19:55, <tamas@xxxxxxxxxxxxx> wrote:
> On Fri, Oct 9, 2015 at 1:51 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 08.10.15 at 22:57, <tamas@xxxxxxxxxxxxx> wrote:
>> > --- a/xen/arch/x86/mm/mem_sharing.c
>> > +++ b/xen/arch/x86/mm/mem_sharing.c
>> > @@ -1293,6 +1293,37 @@ int relinquish_shared_pages(struct domain *d)
>> >      return rc;
>> >  }
>> >
>> > +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
>> > +                      struct mem_sharing_op_bulk_share *bulk)
>> > +{
>> > +    int rc = 0;
>> > +    shr_handle_t sh, ch;
>> > +
>> > +    while( bulk->start <= max )
>> > +    {
>> > +        if ( mem_sharing_nominate_page(d, bulk->start, 0, &sh) != 0 )
>> > +            goto next;
>> > +
>> > +        if ( mem_sharing_nominate_page(cd, bulk->start, 0, &ch) != 0 )
>> > +            goto next;
>> > +
>> > +        if ( !mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch) )
>> > +            ++(bulk->shared);
>>
>> Pointless parentheses.
>>
>>
> Pointless but harmless and I like this style better.

Well, it's code you're the maintainer for, so you know, but generally
I think parentheses should be used to clarify things, not to make
code harder to read (the effect is minimal here, but extended to a
more complex expression this may well matter).
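
Just to illustrate (not part of the patch): "->" binds more tightly than
prefix "++", so the two statements below are identical, and imo the second
reads more naturally:

struct bulk { unsigned long shared; };

static void example(struct bulk *b)
{
    ++(b->shared);   /* style used in the patch */
    ++b->shared;     /* equivalent, without the extra parentheses */
}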

>> > +        ++(bulk->start);
>> > +
>> > +        /* Check for continuation if it's not the last iteration. */
>> > +        if ( bulk->start < max && hypercall_preempt_check() )
>> > +        {
>> > +            rc = 1;
>> > +            break;
>>
>> This could simply be a return statement, avoiding the need for a
>> local variable (the type of which would need to be changed, see
>> below).
> 
> It's reused in the caller to indicate where the mso copyback happens and rc
> is of type int in the caller.

But you're actually abusing the int nature of rc in the caller to store
a boolean value. I'd really like to see this be done consistently -
either use another variable (or, as suggested, no intermediate
variable in the caller at all), or (also taking into consideration Andrew's
comments) propagate an actual -E value from here (along with not
losing errors).
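
Something along these lines is what I have in mind (untested sketch only;
treating -ENOMEM as fatal while skipping pages which simply can't be
shared is just one way of not losing errors, and -ERESTART is merely a
convention for "please set up a continuation"):

static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
                      struct mem_sharing_op_bulk_share *bulk)
{
    int rc;
    shr_handle_t sh, ch;

    while ( bulk->start <= max )
    {
        /*
         * Individual pages may legitimately be unsharable - skip those,
         * but don't swallow a genuine out-of-memory condition.
         */
        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
        if ( rc == -ENOMEM )
            return rc;
        if ( !rc )
        {
            rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
            if ( rc == -ENOMEM )
                return rc;
            if ( !rc &&
                 !mem_sharing_share_pages(d, bulk->start, sh, cd,
                                          bulk->start, ch) )
                ++bulk->shared;
        }

        ++bulk->start;

        /* Check for continuation if it's not the last iteration. */
        if ( bulk->start < max && hypercall_preempt_check() )
            return -ERESTART;
    }

    return 0;
}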

>> > +            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
>> > +            if ( rc )
>> > +            {
>> > +                rcu_unlock_domain(cd);
>> > +                goto out;
>> > +            }
>> > +
>> > +            if ( !mem_sharing_enabled(cd) )
>> > +            {
>> > +                rcu_unlock_domain(cd);
>> > +                rc = -EINVAL;
>> > +                goto out;
>> > +            }
>> > +
>> > +            max_sgfn = domain_get_maximum_gpfn(d);
>> > +            max_cgfn = domain_get_maximum_gpfn(cd);
>> > +
>> > +            if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
>>
>> Are both domains required to be paused for this operation? If so,
>> shouldn't you enforce this? If not, by the time you reach the if()
>> the values are stale.
> 
> It's expected that the user holds an exclusive tool-side lock on the domains
> before issuing this hypercall and that the domains are paused already.

As said - in that case, please enforce this (by bailing out if the domains aren't paused).
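
Something as simple as this, right after the XSM check, would do
(sketch only; pause_count being what domain_pause() and friends
maintain):

            /* Only proceed if the tool stack has both domains paused. */
            if ( !atomic_read(&d->pause_count) ||
                 !atomic_read(&cd->pause_count) )
            {
                rcu_unlock_domain(cd);
                rc = -EINVAL;
                goto out;
            }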

>> > +            {
>> > +                rcu_unlock_domain(cd);
>> > +                rc = -EINVAL;
>> > +                goto out;
>> > +            }
>> > +
>> > +            rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
>> > +            if ( rc )
>>
>> With its boolean nature the use of "rc" here is rather confusing;
>> I'd suggest avoiding the use of an intermediate variable in this case.
>>
> 
> I don't see where the confusion is - rc indicates there is work left to do
> and a hypercall continuation needs to be set up. I could move this block into
> bulk_share itself.

Not sure that would help. The confusion, just to clarify, results from
storing two different kinds of values (-E error indicator vs boolean) in
the same variable.
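
I.e. with bulk_share() returning proper -E values the caller could simply
do (sketch, untested; "arg" being the guest handle mem_sharing_memop()
was invoked with, and the partial progress in mso.u.bulk copied back
before the continuation):

            rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
            if ( rc == -ERESTART )
            {
                if ( __copy_field_to_guest(arg, &mso, u.bulk.start) )
                    rc = -EFAULT;
                else
                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                       "lh", XENMEM_sharing_op,
                                                       arg);
            }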

>> > --- a/xen/include/public/memory.h
>> > +++ b/xen/include/public/memory.h
>> > @@ -447,6 +447,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
>> >  #define XENMEM_sharing_op_debug_gref        5
>> >  #define XENMEM_sharing_op_add_physmap       6
>> >  #define XENMEM_sharing_op_audit             7
>> > +#define XENMEM_sharing_op_bulk_share        8
>> >
>> >  #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
>> >  #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
>> > @@ -482,7 +483,13 @@ struct xen_mem_sharing_op {
>> >              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>> >              uint64_aligned_t client_handle; /* IN: handle to the client page */
>> >              domid_t  client_domain; /* IN: the client domain id */
>> > -        } share;
>> > +        } share;
>> > +        struct mem_sharing_op_bulk_share {   /* OP_BULK_SHARE */
>>
>> Is the _share suffix really useful here? Even more, if you dropped
>> it and renamed "shared" below to "done", the same structure could
>> be used for a hypothetical bulk-unshare operation.
> 
> I don't really have a use-case for that at the moment and having it simply
> as "bulk" is not specific enough IMHO.

I tend to disagree - together with OP_BULK_SHARE the structure
name doesn't need to be mode-specific; as said, it can easily serve
for all kinds of bulk operations.
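
I.e. something like this (sketch), which a future OP_BULK_UNSHARE
could then use unchanged:

        struct mem_sharing_op_bulk {         /* OP_BULK_* */
            uint64_aligned_t start;          /* IN: start gfn (normally 0) */
            uint64_aligned_t done;           /* OUT: gfns processed so far */
            domid_t client_domain;           /* IN: the client domain id */
        } bulk;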

>> > +            domid_t client_domain;           /* IN: the client domain id */
>> > +            uint64_aligned_t start;          /* IN: start gfn (normally 0) */
>>
>> Marking this as just IN implies that the value doesn't change across
>> the operation.
> 
> In my book IN means it's used strictly to pass input, and its value
> may or may not be the same afterwards.

I think our annotations are pretty consistent about marking
modified fields as IN/OUT. (Otoh we don't normally modify fields
when their value is of no relevance to the caller.)
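
I.e. for the field in question simply (sketch):

            uint64_aligned_t start;          /* IN/OUT: start gfn, updated on continuation */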

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel