
Re: [Xen-devel] [PATCH v3 10/14] libxc: get and set soft and hard affinity



On mar, 2013-11-19 at 17:08 +0000, Ian Campbell wrote:
> On Mon, 2013-11-18 at 19:18 +0100, Dario Faggioli wrote:
> 
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> 
> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
Cool.

> There are a few preexisting issues with the setaffinity function, but
> this just duplicates them into the new cpumap, so I don't see any point
> in holding up the series for them. Perhaps you could put them on your
> todo list?
> 
I certainly can.

> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > index f9ae4bf..bddf4e0 100644
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -192,44 +192,52 @@ int xc_domain_node_getaffinity(xc_interface *xch,
> >  int xc_vcpu_setaffinity(xc_interface *xch,
> >                          uint32_t domid,
> >                          int vcpu,
> > -                        xc_cpumap_t cpumap)
> > +                        xc_cpumap_t cpumap,
> > +                        uint32_t flags,
> > +                        xc_cpumap_t ecpumap_out)
> >  {
> >      DECLARE_DOMCTL;
> > -    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, cpumap_local);
> > +    DECLARE_HYPERCALL_BUFFER(uint8_t, ecpumap_local);
> >      int ret = -1;
> >      int cpusize;
> >  
> >      cpusize = xc_get_cpumap_size(xch);
> > -    if (!cpusize)
> > +    if ( !cpusize )
> >      {
> >          PERROR("Could not get number of cpus");
> > -        goto out;
> > +        return -1;;
> 
> Double ";;"?
> 
Ouch... I'll have to respin this series (and will do that shortly), so
I'll have a chance to fix this.

> >      }
> >  
> > -    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
> > -    if ( local == NULL )
> > +    cpumap_local = xc_hypercall_buffer_alloc(xch, cpumap_local, cpusize);
> > +    ecpumap_local = xc_hypercall_buffer_alloc(xch, ecpumap_local, cpusize);
> > +    if ( cpumap_local == NULL || ecpumap_local == NULL )
> >      {
> > -        PERROR("Could not allocate memory for setvcpuaffinity domctl 
> > hypercall");
> > +        PERROR("Could not allocate hcall buffers for 
> > DOMCTL_setvcpuaffinity");
> >          goto out;
> >      }
> >  
> >      domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
> >      domctl.domain = (domid_t)domid;
> >      domctl.u.vcpuaffinity.vcpu = vcpu;
> > -    /* Soft affinity is there, but not used anywhere for now, so... */
> > -    domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
> > -
> > -    memcpy(local, cpumap, cpusize);
> > -
> > -    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
> > +    domctl.u.vcpuaffinity.flags = flags;
> >  
> > +    memcpy(cpumap_local, cpumap, cpusize);
> 
> This risks running off the end of the supplied cpumap, if it is smaller
> than cpusize.
> 
I see. Added to my todo list. :-)
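
FWIW, what I have in mind for it is something along these lines (just a
sketch: the explicit source-size argument is hypothetical, as nothing in
the current libxc API tells us how big the caller's map actually is):

#include <stdint.h>
#include <string.h>

/*
 * Hypothetical helper: copy a caller supplied cpumap of src_sz bytes
 * into a hypercall buffer of dst_sz bytes without reading past the end
 * of the source; whatever is left of the destination is zeroed.
 */
static void copy_cpumap_safe(uint8_t *dst, size_t dst_sz,
                             const uint8_t *src, size_t src_sz)
{
    size_t n = src_sz < dst_sz ? src_sz : dst_sz;

    memcpy(dst, src, n);            /* never over-read the source map */
    memset(dst + n, 0, dst_sz - n); /* clear the rest of the buffer   */
}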

> But more importantly why is this not using the hypercall buffer bounce
> mechanism?
> 
I happen to have investigated that (with the aim of doing the switch).

So, AFAICT, declaring a bounce buffer requires its size to be known at
declaration time, is that correct?

OTOH, it looks like a plain hypercall buffer does not need the size
until you actually xc_hypercall_buffer_alloc() it.

In this case, I don't have the size of the buffer until I call
xc_get_cpumap_size(), and that's why I gave up on turning this (and the
new one I'm introducing) into bounce buffers.

If I'm missing or misunderstanding something, and it actually is
possible to do the conversion (if so, how?), I'd be glad to do it.
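
To make the contrast concrete, this is roughly what the two patterns
look like side by side. Only a sketch, based on my reading of the
macros in libxc's internal headers, with the actual domctl left out:

#include <string.h>
#include "xc_private.h" /* DECLARE_HYPERCALL_BUFFER/BOUNCE and friends */

/*
 * Plain hypercall buffer: the size is needed only at allocation time,
 * so it can be computed first. This is what the patch does today.
 */
static int plain_buffer_sketch(xc_interface *xch, const uint8_t *cpumap)
{
    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
    int cpusize = xc_get_cpumap_size(xch);

    if ( cpusize <= 0 )
        return -1;

    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
    if ( local == NULL )
        return -1;

    /* NB: assumes the caller's map is >= cpusize bytes (your point above) */
    memcpy(local, cpumap, cpusize);
    /* ... fill in and issue the domctl here ... */

    xc_hypercall_buffer_free(xch, local);
    return 0;
}

/*
 * Bounce buffer: the size goes into the declaration itself, which is
 * exactly what bites in xc_vcpu_setaffinity(), where cpusize is not
 * known yet at that point (here it only works because it comes in as
 * a parameter).
 */
static int bounce_buffer_sketch(xc_interface *xch, uint8_t *cpumap,
                                int cpusize)
{
    DECLARE_HYPERCALL_BOUNCE(cpumap, cpusize, XC_HYPERCALL_BUFFER_BOUNCE_IN);

    if ( xc_hypercall_bounce_pre(xch, cpumap) )
        return -1;

    /* ... fill in and issue the domctl here ... */

    xc_hypercall_bounce_post(xch, cpumap);
    return 0;
}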

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

