
Re: [Xen-devel] [PATCH v15 3/7] remus netbuffer: implement remus network buffering for nic devices



Hongyang Yang writes ("Re: [PATCH v15 3/7] remus netbuffer: implement remus 
network buffering for nic devices"):
> On 07/10/2014 07:15 AM, Ian Jackson wrote:
> > It occurs to me to ask: Is it wise to use such a bare name for the
> > variable IFB ?  I would suggest that it might be better to use a
> > variable with XEN or REMUS or something in it.  Otherwise perhaps
> > someone somewhere might think IFB stood for something else and would
> > set it to something irrelevant (or your setting of it might break
> > their usage...)
> 
> Maybe REMUS_IFB?

Sounds good to me.
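
For illustration, the sort of thing I mean - this helper is hypothetical,
not something in the patch; it just shows the namespaced variable being
handed to the script using the usual libxl name/value env convention:

/* Hypothetical sketch only: hand the ifb device to the hotplug script in a
 * clearly namespaced variable, so a bare "IFB" in the environment cannot
 * clash with anyone else's usage.  The env layout (name/value pairs,
 * NULL-terminated) follows the usual libxl hotplug convention. */
static const char **netbuf_script_env(libxl__gc *gc, const char *ifb)
{
    const char **env = libxl__calloc(gc, 3, sizeof(*env));

    env[0] = "REMUS_IFB";   /* variable name, namespaced to Remus */
    env[1] = ifb;           /* e.g. "ifb0" */
    env[2] = NULL;
    return env;
}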

> >> +struct libxl__remus_netbuf_state {
> >> +    libxl__ao *ao;
> >> +    uint32_t domid;
> >> +    const char *netbufscript;
> >
> > Why is there a copy of netbufscript here ?
> 
> We use the netbuf state in the device concrete layer, so we made a
> copy of it.

What I mean is, why does the concrete layer not have access to the
remus_state ?  Is there something wrong with passing it the
remus_state ?
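
To make the alternative concrete, here is the shape I have in mind - only
a sketch; the rs back-pointer, and the assumption that libxl__remus_state
carries netbufscript, are mine rather than the patch's:

/* Sketch only: instead of duplicating netbufscript, keep a pointer to the
 * owning libxl__remus_state and read the script path through it. */
struct libxl__remus_netbuf_state {
    libxl__ao *ao;
    uint32_t domid;
    libxl__remus_state *rs;   /* back-pointer to the generic remus state */
    /* ... netlink handles, ifb name, etc. ... */
};

static const char *netbuf_script(const struct libxl__remus_netbuf_state *ns)
{
    /* assumes libxl__remus_state carries the user-supplied netbufscript */
    return ns->rs->netbufscript;
}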

> >> +/*----- init() and destroy() -----*/
> >> +
> >> +static int nic_init(const libxl__remus_device_ops *self,
> >> +                    libxl__remus_state *rs)
> >> +{
> >
> > This function seems to leak the things it allocates, on error.
> > The `out' section should probably call destroy.
> 
> If this fails, we will call teardown, and in teardown nic_destroy
> will be called.

OIC.  I think this needs to be documented in a comment in
libxl__remus_device_ops.  Normally one wouldn't expect to have to call
destroy() if init() fails.
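
Something along these lines in the ops struct would do - the wording and
the exact hook signatures below are just a sketch, not the patch as posted:

struct libxl__remus_device_ops {
    /*
     * init() allocates the per-device-type resources.  NB: even if init()
     * fails, the caller still goes through teardown, which calls
     * destroy(), so destroy() must cope with a partially initialised
     * state and must not assume init() succeeded.
     */
    int (*init)(const struct libxl__remus_device_ops *self,
                libxl__remus_state *rs);
    void (*destroy)(const struct libxl__remus_device_ops *self,
                    libxl__remus_state *rs);
    /* ... other hooks elided ... */
};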

> > Last time I commented on the error handling in this patch.  I see that
> > you have added an nl_geterror in some places.
> >
> > When I make a comment like that, you need to apply it wherever
> > relevant in the code, and to the whole patch series.
> 
> Yes, actually I've checked the whole patch for this and corrected
> them.  For this special case, nl_socket_alloc() doesn't return an
> error number (it returns NULL if the allocation failed), so we can't
> use nl_geterror; I think this error message should be OK.

Err, OK.
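
For the record, the two libnl error styles look like this (a standalone
sketch, not patch code): nl_socket_alloc() can only report failure by
returning NULL, whereas calls such as nl_connect() return a negative code
that nl_geterror() can decode.

#include <stdio.h>
#include <netlink/netlink.h>
#include <netlink/socket.h>
#include <netlink/errno.h>

static struct nl_sock *open_rtnl_socket(void)
{
    struct nl_sock *sk = nl_socket_alloc();
    int ret;

    if (!sk) {                       /* no error code available here */
        fprintf(stderr, "cannot allocate nl socket\n");
        return NULL;
    }

    ret = nl_connect(sk, NETLINK_ROUTE);
    if (ret < 0) {                   /* here libnl does give a code */
        fprintf(stderr, "cannot connect: %s\n", nl_geterror(ret));
        nl_socket_free(sk);
        return NULL;
    }

    return sk;
}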

> >> +static void nic_destroy(const libxl__remus_device_ops *self,
> >> +                        libxl__remus_state *rs)
> > ...
> >> +    /* free qdisc cache */
> >> +    if (ns->qdisc_cache) {
> >> +        nl_cache_clear(ns->qdisc_cache);
> >> +        nl_cache_free(ns->qdisc_cache);
> >
> > Can these functions fail ?  If so, what does it mean and what should
> > the program do ?
> 
> No, they both return void.

Oh, good.
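
For reference, both cache teardown calls are void in libnl-3, so the
destroy path can free unconditionally once the pointer is known to be
non-NULL; a minimal sketch:

#include <netlink/cache.h>

/* Sketch of the qdisc-cache teardown: neither call can fail. */
static void free_qdisc_cache(struct nl_cache **cachep)
{
    if (*cachep) {
        nl_cache_clear(*cachep);   /* void: drops the cached objects */
        nl_cache_free(*cachep);    /* void: releases the cache itself */
        *cachep = NULL;
    }
}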

> >> +static int remus_netbuf_op(libxl__remus_device_nic *remus_nic,
> >> +                           libxl__remus_netbuf_state *netbuf_state,
> >> +                           int buffer_op)
> >> +{
> >> +    int rc;
> >> +
> >> +    STATE_AO_GC(netbuf_state->ao);
> >> +
> >> +    if (buffer_op == tc_buffer_start)
> >> +        rc = rtnl_qdisc_plug_buffer(remus_nic->qdisc);
> >> +    else
> >> +        rc = rtnl_qdisc_plug_release_one(remus_nic->qdisc);
> >
> > Error handling again.
> 
> I can't quite follow you; what's wrong with the error handling in this
> function, apart from the name "rc"?

In fact reading it again I was confused by the fact that this function
doesn't actually return rc.  Sorry about that - but I guess this shows
why using the conventional names etc. is important.
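
For what it's worth, the shape I would expect is something like this - a
sketch only, with the field names and tc_buffer_start taken from the
quoted hunk, and LOG/ERROR_FAIL being the usual libxl macros:

static int remus_netbuf_op(libxl__remus_device_nic *remus_nic,
                           libxl__remus_netbuf_state *netbuf_state,
                           int buffer_op)
{
    int rc, ret;

    STATE_AO_GC(netbuf_state->ao);

    /* keep the libnl return value separate from the libxl-style rc */
    if (buffer_op == tc_buffer_start)
        ret = rtnl_qdisc_plug_buffer(remus_nic->qdisc);
    else
        ret = rtnl_qdisc_plug_release_one(remus_nic->qdisc);

    if (ret) {
        LOG(ERROR, "Remus: cannot do netbuf op %s on %s: %s",
            buffer_op == tc_buffer_start ?
                "start_new_epoch" : "release_prev_epoch",
            remus_nic->ifb, nl_geterror(ret));
        rc = ERROR_FAIL;
        goto out;
    }

    rc = 0;

out:
    return rc;   /* and actually return it */
}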

> > How does rtnl_link_get_ifindex report its error code ?  Or can it fail
> > only one way ?
> 
> It returns the interface index, or 0 if not set.  No error code will be returned.

Oh.  The error handling in this rtnl facility you're using does seem
rather poor - but of course this isn't your fault.  Thanks for
clarifying.
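
Given that, the only check available is against 0.  A small standalone
sketch of the lookup:

#include <stdio.h>
#include <netlink/cache.h>
#include <netlink/route/link.h>

/* rtnl_link_get_ifindex() reports no error code: it returns the interface
 * index, or 0 if none is set, so 0 is the only failure indication. */
static int lookup_ifindex(struct nl_cache *link_cache, const char *ifname)
{
    struct rtnl_link *link = rtnl_link_get_by_name(link_cache, ifname);
    int ifindex;

    if (!link) {
        fprintf(stderr, "no such link: %s\n", ifname);
        return -1;
    }

    ifindex = rtnl_link_get_ifindex(link);
    rtnl_link_put(link);             /* drop the reference taken by lookup */

    return ifindex ? ifindex : -1;   /* 0 means "not set", i.e. failure */
}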

Thanks for your comprehensive reply.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

