
Re: [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode



On 24.03.20 14:34, Sergey Dyasli wrote:
Hi Juergen,

I've noticed there is no documentation about how vcpu-pin is supposed to work
with core scheduling enabled. I did some experiments and observed the following
inconsistencies:

   1. xl vcpu-pin 5 0 0
      Windows 10 (64-bit) (1)              5     0    0   -b-    1644.0  0 / all
      Windows 10 (64-bit) (1)              5     1    1   -b-    1650.1  0 / all
                                                      ^                  ^
      CPU 1 doesn't match the reported hard-affinity of 0. Should this command
      set the hard-affinity of vCPU 1 to 1? Or should it be 0-1 for both vCPUs
      instead?


   2. xl vcpu-pin 5 0 1
      libxl: error: libxl_sched.c:62:libxl__set_vcpuaffinity: Domain 5:Setting vcpu affinity: Invalid argument
      This is expected but perhaps needs documenting somewhere?


   3. xl vcpu-pin 5 0 1-2
      Windows 10 (64-bit) (1)              5     0    2   -b-    1646.7  1-2 / all
      Windows 10 (64-bit) (1)              5     1    3   -b-    1651.6  1-2 / all
                                                      ^                  ^^^
      Here is a CPU / affinity mismatch again, but the more interesting fact
      is that setting 1-2 is allowed at all; I'd expect the CPU to never be
      set to 1 with such a setting.

Please let me know what you think about the above cases.

I think all of the effects can be explained by the way pinning with core
scheduling is implemented. This does not mean that the information
presented to the user shouldn't be adapted.

Basically, pinning any vcpu will just affect the "master" vcpu of its
virtual core (sibling 0). Xen will happily accept any setting as long as
at least one "master" cpu of a physical core is in the resulting set of
cpus.

All vcpus of a virtual core share the same pinnings.

I think this explains all of the above scenarios.
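
To make that a bit more concrete, here is a tiny sketch (not the real
hypervisor code, just an illustration assuming an SMT-2 topology where the
even-numbered cpus are the "master" siblings of their cores):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SIBLINGS_PER_CORE 2   /* assumption for this sketch: SMT-2 */

/* A cpu is a "master" cpu if it is sibling 0 of its physical core. */
static bool is_master_cpu(unsigned int cpu)
{
    return cpu % SIBLINGS_PER_CORE == 0;
}

/* A pinning request is accepted as long as at least one "master" cpu is
 * in the requested mask -- which is why "xl vcpu-pin 5 0 1-2" succeeds
 * (cpu 2 is a master cpu) while "xl vcpu-pin 5 0 1" fails with EINVAL. */
static bool pinning_valid(uint64_t cpumask)
{
    for (unsigned int cpu = 0; cpu < 64; cpu++)
        if ((cpumask & (1ULL << cpu)) && is_master_cpu(cpu))
            return true;
    return false;
}

int main(void)
{
    printf("1-2 -> %s\n", pinning_valid(0x6) ? "accepted" : "EINVAL");
    printf("1   -> %s\n", pinning_valid(0x2) ? "accepted" : "EINVAL");
    return 0;
}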

IMO there are the following possibilities for reporting those pinnings
to the user:

1. As today, documenting the output.
   Not very nice IMO, but the least effort.

2. Just print one line for each virtual cpu/core/socket, like:
   Windows 10 (64-bit) (1)    5     0-1   0-1   -b-    1646.7  0-1 / all
   This has the disadvantage of dropping the per-vcpu time in favor of
   per-vcore time; OTOH this reflects reality.

3. Print the effective pinnings:
   Windows 10 (64-bit) (1)    5     0     0     -b-    1646.7  0   / all
   Windows 10 (64-bit) (1)    5     1     1     -b-    1646.7  1   / all
   Should be rather easy to do.
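
For option 3 the effective per-vcpu affinity could be derived from the
mask of the "master" vcpu by shifting each master cpu to the sibling's
position within its core. Again only a rough sketch under the SMT-2
assumption from above, not a proposed implementation:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SIBLINGS_PER_CORE 2   /* same assumption as in the sketch above */

/* Effective hard affinity of sibling "sib" of a virtual core whose master
 * vcpu is pinned to "master_mask": each master cpu in the mask maps to the
 * corresponding sibling cpu of the same physical core. */
static uint64_t effective_affinity(uint64_t master_mask, unsigned int sib)
{
    uint64_t mask = 0;

    for (unsigned int cpu = 0; cpu < 64; cpu += SIBLINGS_PER_CORE)
        if (master_mask & (1ULL << cpu))
            mask |= 1ULL << (cpu + sib);
    return mask;
}

int main(void)
{
    /* "xl vcpu-pin 5 0 0" from case 1: vcpu 0 -> cpu 0, vcpu 1 -> cpu 1 */
    printf("%#" PRIx64 " %#" PRIx64 "\n",
           effective_affinity(0x1, 0), effective_affinity(0x1, 1));
    return 0;
}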

Thoughts?


Juergen
