
Re: [Xen-devel] [PATCH 3 of 4 RFC] xl/remus: Control network buffering in remus callbacks



Shriram Rajagopalan writes ("Re: [Xen-devel] [PATCH 3 of 4 RFC] xl/remus: 
Control network buffering in remus callbacks"):
> On Wed, Aug 7, 2013 at 11:38 AM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> 
> wrote:

>     > +    usleep(dss->remus_ctx->interval * 1000);
> 
>     This is still pretty bad, surely ?
...
> I was thinking of updating this in a separate patch. But I guess since this
> line is changing, might as well do it in this patch.

I'm happy with it in a separate patch.  Doing it in a separate patch
probably makes things clearer.  This patch does have a TODO about it.
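
For context, the reason a bare usleep() there is considered bad is that it
stalls the whole process for the length of the checkpoint interval instead of
handing the wait back to the event loop.  A small standalone illustration of
that difference, using plain poll(2) rather than libxl's own timer machinery
(interval_ms and checkpoint_due() are invented for the example):

    /* Illustration only, not libxl code.  Shows the difference between a
     * blocking delay and a wait handed to an event loop; interval_ms stands
     * in for remus_ctx->interval and checkpoint_due() for the real callback. */
    #define _DEFAULT_SOURCE
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    static void checkpoint_due(void)
    {
        printf("checkpoint interval elapsed\n");
    }

    int main(void)
    {
        int interval_ms = 200;              /* hypothetical checkpoint interval */

        /* Blocking variant: nothing else in the process can make progress
         * while we sleep, which is why a bare usleep() in a suspend callback
         * is undesirable. */
        usleep(interval_ms * 1000);
        checkpoint_due();

        /* Event-loop variant: the timeout is given to poll(), so other file
         * descriptors (stdin here, as a stand-in) can still be serviced while
         * we wait for the interval to elapse. */
        struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };
        int r = poll(fds, 1, interval_ms);
        if (r == 0)
            checkpoint_due();               /* timer fired first */
        else if (r > 0)
            printf("other work became ready before the interval elapsed\n");

        return 0;
    }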

>     > +    rtnl_link_put(ifb);
>     > +    return remus_ctx;
>     > +
>     > + end:
>     > +    if (ifb) rtnl_link_put(ifb);
>     > +    if (qdisc_cache) nl_cache_free(qdisc_cache);
>     > +    if (nlsock) nl_close(nlsock);
> 
>     I think this can perhaps leak remus_ctx and qdisc.
...
> There is no issue of leaking qdisc, because it simply obtains references from
> qdisc_cache, which is freed if things fail.

Perhaps this is worth a comment ?
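
Something like the sketch below is one possible shape for that comment and
the surrounding cleanup.  It reuses the names from the quoted hunk (nlsock,
qdisc_cache, ifb), but the function name, parameters and control flow are
guesses for illustration, not the patch itself; it builds against libnl3's
route library.

    /* Rough sketch of the setup/error path under discussion; only the
     * variable names come from the quoted hunk, the rest is invented. */
    #include <netlink/netlink.h>
    #include <netlink/socket.h>
    #include <netlink/cache.h>
    #include <netlink/route/link.h>
    #include <netlink/route/qdisc.h>

    static int setup_netbuf(const char *ifb_name,
                            struct nl_sock **out_sock,
                            struct nl_cache **out_cache)
    {
        struct nl_sock *nlsock = NULL;
        struct nl_cache *qdisc_cache = NULL;
        struct rtnl_link *ifb = NULL;

        nlsock = nl_socket_alloc();
        if (!nlsock) goto end;
        if (nl_connect(nlsock, NETLINK_ROUTE) < 0) goto end;
        if (rtnl_qdisc_alloc_cache(nlsock, &qdisc_cache) < 0) goto end;
        if (rtnl_link_get_kernel(nlsock, 0, ifb_name, &ifb) < 0) goto end;

        /* ... locate the plug qdisc attached to ifb ...
         * The qdisc object would only be a reference obtained from
         * qdisc_cache; it is not a separate allocation, so freeing
         * qdisc_cache on the error path below releases it too.  (This is
         * roughly the comment suggested in the review.) */

        rtnl_link_put(ifb);
        *out_sock = nlsock;          /* kept for later buffer/release calls */
        *out_cache = qdisc_cache;
        return 0;

     end:
        if (ifb) rtnl_link_put(ifb);
        if (qdisc_cache) nl_cache_free(qdisc_cache);
        if (nlsock) { nl_close(nlsock); nl_socket_free(nlsock); }
        return -1;
    }

    int main(void)
    {
        struct nl_sock *sock;
        struct nl_cache *cache;

        if (setup_netbuf("ifb0", &sock, &cache) < 0)   /* "ifb0" is made up */
            return 1;

        nl_cache_free(cache);
        nl_close(sock);
        nl_socket_free(sock);
        return 0;
    }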

> WRT remus_ctx, I had intentionally skipped the free call.
> It's allocated in the GC context [the same one used by domain_suspend, etc.],
> and if this call fails, then the parent call libxl_domain_remus_start will
> fail.

Sorry, yes.

> I assumed that when libxl_free_all was called at this point, remus_ctx
> would also be freed. Am I missing something in the overall control flow
> that would cause this leak?

No, I was confused.
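
The ownership argument is the usual arena pattern: everything allocated from
the gc is released in one sweep when the gc is torn down, so individual error
paths never free the objects themselves.  A toy standalone illustration of
that pattern (not libxl's actual gc implementation; all names below are
invented):

    /* Toy illustration of "allocate from the gc, free everything at the
     * end"; this is not libxl's gc, just the shape of the ownership
     * argument being made about remus_ctx. */
    #include <stdlib.h>

    struct gc { void *ptrs[64]; int n; };

    static void *gc_zalloc(struct gc *gc, size_t sz)
    {
        if (gc->n == 64) return NULL;       /* toy limit */
        void *p = calloc(1, sz);
        if (p) gc->ptrs[gc->n++] = p;
        return p;
    }

    static void gc_free_all(struct gc *gc)
    {
        /* Everything handed out by gc_zalloc is released here, so callers
         * never free individual objects on their error paths. */
        for (int i = 0; i < gc->n; i++) free(gc->ptrs[i]);
        gc->n = 0;
    }

    struct remus_ctx { int interval; };

    static struct remus_ctx *remus_setup(struct gc *gc, int fail)
    {
        struct remus_ctx *ctx = gc_zalloc(gc, sizeof(*ctx));
        if (!ctx) return NULL;
        ctx->interval = 200;
        if (fail)
            return NULL;   /* no free(ctx): the gc still owns it */
        return ctx;
    }

    int main(void)
    {
        struct gc gc = { .n = 0 };
        struct remus_ctx *ctx = remus_setup(&gc, /* fail= */ 1);
        (void)ctx;
        gc_free_all(&gc);  /* ctx is reclaimed here even on the failure path */
        return 0;
    }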

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

