
Re: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node writable



On 18.01.2021 12:55, Juergen Gross wrote:
> Make /cpupool/<id>/sched-gran in hypfs writable. This will enable per
> cpupool selectable scheduling granularity.
> 
> Writing this node is allowed only with no cpu assigned to the cpupool.
> Allowed are values "cpu", "core" and "socket".
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with two small adjustment requests:

> @@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
>      {
>          if ( strcmp(sg_name[i].name, str) == 0 )
>          {
> -            opt_sched_granularity = sg_name[i].mode;
> +            *mode = sg_name[i].mode;
>              return 0;
>          }
>      }
>  
>      return -EINVAL;
>  }
> +
> +static int __init sched_select_granularity(const char *str)
> +{
> +    return sched_gran_get(str, &opt_sched_granularity);
> +}
>  custom_param("sched-gran", sched_select_granularity);
> +#elif CONFIG_HYPFS

Missing defined() - this wants to be "#elif defined(CONFIG_HYPFS)".

> @@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
>      return strlen(gran) + 1;
>  }
>  
> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
> +                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
> +                              unsigned int ulen)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    struct cpupool *cpupool;
> +    enum sched_gran gran;
> +    unsigned int sched_gran = 0;
> +    char name[SCHED_GRAN_NAME_LEN];
> +    int ret = 0;
> +
> +    if ( ulen > SCHED_GRAN_NAME_LEN )
> +        return -ENOSPC;
> +
> +    if ( copy_from_guest(name, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
> +        sched_gran = sched_gran_get(name, &gran) ?
> +                     0 : cpupool_check_granularity(gran);
> +    if ( sched_gran == 0 )
> +        return -EINVAL;
> +
> +    data = hypfs_get_dyndata();
> +    cpupool = data->data;
> +    ASSERT(cpupool);
> +
> +    if ( !cpumask_empty(cpupool->cpu_valid) )
> +        ret = -EBUSY;
> +    else
> +    {
> +        cpupool->gran = gran;
> +        cpupool->sched_gran = sched_gran;
> +    }

I think this could do with a comment clarifying what lock this
is protected by, as the function itself has no sign of any
locking, not even an assertion that a certain lock is being held.
If you were to suggest some text, this as well as the minor issue
above could likely be taken care of while committing.

Jan



 

