
Re: [Xen-devel] [PATCH RESEND 09/12] libxc: numa-sched: enable getting/specifying per-vcpu node-affinity



On mar, 2013-11-12 at 11:01 -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Nov 05, 2013 at 03:35:42PM +0100, Dario Faggioli wrote:
> > by providing the proper get/set interfaces and wiring them
> > to the new domctl-s from the previous commit.
> 
> s/previous commit/<title of the commit>/
> 
Ok.

> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > +int xc_vcpu_setnodeaffinity(xc_interface *xch,
> > +                            uint32_t domid,
> > +                            int vcpu,
> > +                            xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret = -1;
> 
> Ewww.. Could we just use regular -Exx for new xc_* calls?
> 
Mmm... As George said already, that is the most common behavior in
libxc... If we want something different, we should really write it down
somewhere (or has that happened already and I'm missing it?).

Also, yes, this does return -1 in cases where I don't have an error code
from the call that is actually failing, and yes, I fully rely on that
call to set errno properly.

For the main operation implemented here, I just return the output of
do_domctl(), which, again, may be -1 or some error code, but that
really looks like what every other function there does and, TBH, it
also seems the sanest thing to do to me.
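
FWIW, a minimal sketch of the convention I mean (with the hypercall
buffer allocation and domctl setup elided, so not the actual patch):

 /*
  * Usual libxc error convention: return -1 and let the failing
  * call (here, do_domctl()) set errno, rather than returning a
  * -Exx value directly.
  */
 int xc_vcpu_setnodeaffinity(xc_interface *xch, uint32_t domid,
                             int vcpu, xc_nodemap_t nodemap)
 {
     DECLARE_DOMCTL;
     int ret = -1;                  /* pessimistic default */

     /* ... hypercall buffer allocation and domctl setup elided ... */

     ret = do_domctl(xch, &domctl); /* -1 on failure, errno set */

     return ret;
 }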

> > +int xc_vcpu_getnodeaffinity(xc_interface *xch,
> > +                            uint32_t domid,
> > +                            int vcpu,
> > +                            xc_nodemap_t nodemap)
> > +{
> > +    DECLARE_DOMCTL;
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    int ret = -1;
> > +    int nodesize;
> > +
> > +    nodesize = xc_get_nodemap_size(xch);
> > +    if (!nodesize)
> > +    {
> > +        PERROR("Could not get number of nodes");
> > +        goto out;
> > +    }
> > +
> > +    local = xc_hypercall_buffer_alloc(xch, local, nodesize);
> > +    if (local == NULL)
> > +    {
> > +        PERROR("Could not allocate memory for getvcpunodeaffinity domctl 
> > hypercall");
> > +        goto out;
> > +    }
> > +
> > +    domctl.cmd = XEN_DOMCTL_getvcpunodeaffinity;
> > +    domctl.domain = (domid_t)domid;
> > +    domctl.u.vcpuaffinity.vcpu = vcpu;
> > +
> > +    set_xen_guest_handle(domctl.u.vcpuaffinity.map.bitmap, local);
> > +    domctl.u.vcpuaffinity.map.nr_bits = nodesize * 8;
> 
> Could the 8 be replaced by a sizeof?
>
I guess it could... What did you have in mind, in particular?

Personally, I find the names 'bitmap' and 'nr_bits' expressive enough
to feel comfortable with the 8... To the point that I think the
following would be even less readable:

 domctl.u.vcpuaffinity.map.nr_bits = nodesize *
        sizeof(domctl.u.vcpuaffinity.map.bitmap);

But, of course, I'd do it that way if a maintainer asks for that.
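
If the concern is just the magic number, I guess CHAR_BIT (from
<limits.h>) would read better than any sizeof; again, just a sketch of
one option, not something the patch currently does:

 #include <limits.h>

 /* nodesize is in bytes, so this makes the bits-per-byte explicit */
 domctl.u.vcpuaffinity.map.nr_bits = nodesize * CHAR_BIT;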

> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index 8cf3f3b..208fa2c 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -551,6 +551,25 @@ int xc_domain_node_getaffinity(xc_interface *xch,
> >                                 uint32_t domind,
> >                                 xc_nodemap_t nodemap);
> >  
> > +/**
> > + * These functions set and retrieve the NUMA node-affinity
> > + * of a specific vcpu.
> > + *
> > + * @parm xch a handle to an open hypervisor interface.
> > + * @parm domid the domain id one is interested in.
> > + * @parm vcpu the vcpu one wants to set/get the affinity of.
> > + * @parm nodemap the map of the affine nodes.
> > + * @return 0 on success, -1 on failure.
> 
> and something in errno?
>
Right. I'll add a mention of that in the comment.
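
Something along these lines, I'd say (just a sketch of the reworded
comment):

 /**
  * These functions set and retrieve the NUMA node-affinity
  * of a specific vcpu.
  *
  * @parm xch a handle to an open hypervisor interface.
  * @parm domid the domain id one is interested in.
  * @parm vcpu the vcpu one wants to set/get the affinity of.
  * @parm nodemap the map of the affine nodes.
  * @return 0 on success, -1 on failure (with errno set
  *         appropriately).
  */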

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
