Re: [Xen-devel] [PATCH v5 4/9] xen: introduce XEN_DOMCTL_devour

>>> On 11.12.14 at 14:45, <vkuznets@xxxxxxxxxx> wrote:
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -1177,6 +1177,39 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>      }
>      break;
>  
> +    case XEN_DOMCTL_devour:
> +    {
> +        struct domain *recipient_dom;
> +
> +        if ( !d->recipient )
> +        {
> +            recipient_dom = get_domain_by_id(op->u.devour.recipient);
> +            if ( recipient_dom == NULL )
> +            {
> +                ret = -ESRCH;
> +                break;
> +            }
> +
> +            if ( recipient_dom->tot_pages != 0 )
> +            {
> +                put_domain(recipient_dom);
> +                ret = -EINVAL;
> +                break;
> +            }
> +            /*
> +             * Make sure no allocation/remapping is ongoing and set is_dying
> +             * flag to prevent such actions in future.
> +             */
> +            spin_lock(&d->page_alloc_lock);
> +            d->is_dying = DOMDYING_locked;
> +            d->recipient = recipient_dom;

Is d == recipient_dom a valid case (not leading to any issues)?
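If it isn't meant to be, an explicit rejection may be warranted - just a
sketch, reusing the error handling already present above:

    /* Disallow a domain naming itself as recipient. */
    if ( recipient_dom == d )
    {
        put_domain(recipient_dom);
        ret = -EINVAL;
        break;
    }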

> +            smp_wmb(); /* make sure recipient was set before domain_kill() */

The spin_unlock() guarantees ordering already.
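I.e. with the barrier dropped the critical section would simply be

    spin_lock(&d->page_alloc_lock);
    d->is_dying = DOMDYING_locked;
    /* spin_unlock() has release semantics; no explicit barrier needed. */
    d->recipient = recipient_dom;
    spin_unlock(&d->page_alloc_lock);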

> +            spin_unlock(&d->page_alloc_lock);
> +        }
> +        ret = domain_kill(d);

Do you really mean to simply kill the domain when d->recipient was
already set on entry?

Also I would strongly suggest having this fall through into the
XEN_DOMCTL_destroydomain case - just look there for what code
you're missing right now. Of course the continuation then
shouldn't go through the whole if() above anymore, i.e. you may
want to permit the operation to succeed when d->recipient ==
recipient_dom.
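Roughly (untested, and assuming the destroydomain case keeps its current
shape, including the continuation handling you're currently missing):

    case XEN_DOMCTL_devour:
        if ( d->recipient != recipient_dom )
        {
            /* ... set up d->recipient as above ... */
        }
        /* fall through */
    case XEN_DOMCTL_destroydomain:
        ret = domain_kill(d);
        if ( ret == -ERESTART )
            ret = hypercall_create_continuation(
                __HYPERVISOR_domctl, "h", u_domctl);
        break;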

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1707,6 +1707,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>  {
>      struct domain *d = page_get_owner(pg);
>      unsigned int i;
> +    unsigned long mfn, gmfn;

Please add these variables in the scope you need them in.
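I.e. something like

    else
    {
        /* Declared here, where they are first needed. */
        unsigned long mfn = page_to_mfn(pg);
        unsigned long gmfn = mfn_to_gmfn(d, mfn);
        ...
    }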

> @@ -1764,13 +1765,32 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>              scrub = 1;
>          }
>  
> -        if ( unlikely(scrub) )
> -            for ( i = 0; i < (1 << order); i++ )
> -                scrub_one_page(&pg[i]);
> +        if ( !d || !d->recipient || d->recipient->is_dying )
> +        {
> +            if ( unlikely(scrub) )
> +                for ( i = 0; i < (1 << order); i++ )
> +                    scrub_one_page(&pg[i]);
>  
> -        free_heap_pages(pg, order);
> +            free_heap_pages(pg, order);
> +        }
> +        else
> +        {
> +            mfn = page_to_mfn(pg);
> +            gmfn = mfn_to_gmfn(d, mfn);
> +
> +            page_set_owner(pg, NULL);

This needs to be done more than once when order > 0, or else
you may trigger an ASSERT() in assign_pages().
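I.e., mirroring the scrub loop above:

    for ( i = 0; i < (1 << order); i++ )
        page_set_owner(&pg[i], NULL);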

> +            if ( assign_pages(d->recipient, pg, order, 0) )
> +                /* assign_pages reports the error by itself */
> +                goto out;
> +
> +            if ( guest_physmap_add_page(d->recipient, gmfn, mfn, order) )
> +                printk(XENLOG_G_INFO
> +                       "Failed to add MFN %lx (GFN %lx) to Dom%d's physmap\n",
> +                       mfn, gmfn, d->recipient->domain_id);

And that's it? I understand that you can't propagate the error, but
wouldn't it be better to crash the recipient domain instead of leaving it
in an inconsistent state?

Also the message would better be more specific - e.g.

"Failed to re-add Dom%d's GFN %lx (MFN %lx) to Dom%d\n"

> +        }
>      }
>  
> +out:

You could do well without this label (in particular I think what you add
as the else path would better move into the if() case several lines up, in
which case the use of a label may then indeed be warranted), but if you
really need one, please make sure it is indented by at least one space.
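I.e., if one does remain, and assuming the function's existing
drop_dom_ref tail stays as is:

 out:
    if ( drop_dom_ref )
        put_domain(d);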

Jan
