
Re: [Xen-devel] [PATCH] SEDF: avoid gathering vCPU-s on pCPU0



>>> On 04.03.13 at 07:48, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx> wrote:
> On 01.03.2013 16:35, Jan Beulich wrote:
>> The introduction of vcpu_force_reschedule() in 14320:215b799fa181 was
>> incompatible with the SEDF scheduler: Any vCPU using
>> VCPUOP_stop_periodic_timer (e.g. any vCPU of halfway modern PV Linux
>> guests) ends up on pCPU0 after that call. Obviously, running all PV
>> guests' (and in particular Dom0's) vCPU-s on pCPU0 causes problems for
>> those guests sooner rather than later.
>>
>> So the main thing that was clearly wrong (and bogus from the beginning)
>> was the use of cpumask_first() in sedf_pick_cpu(). It is being replaced
>> by a construct that prefers to put the vCPU back on the pCPU it got
>> launched on.
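
For illustration, such a pick_cpu could look along the following lines.
This is a sketch only, not the committed change: it assumes the Xen
4.2-era names v->cpu_affinity and cpu_online_map, approximates "the pCPU
it got launched on" by v->processor, and leaves out cpupool handling.

static int sedf_pick_cpu(const struct scheduler *ops, struct vcpu *v)
{
    cpumask_t online_affinity;

    /* pCPUs the vCPU is allowed on, restricted to those currently online. */
    cpumask_and(&online_affinity, v->cpu_affinity, &cpu_online_map);

    /* Prefer keeping the vCPU where it already is ... */
    if ( cpumask_test_cpu(v->processor, &online_affinity) )
        return v->processor;

    /* ... and only fall back to the first eligible pCPU otherwise. */
    return cpumask_first(&online_affinity);
}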
>>
>> However, there's one more glitch: when temporarily reducing the affinity
>> of a vCPU and then widening it again to a set that includes the pCPU the
>> vCPU was last running on, the generic scheduler code would not force a
>> migration of that vCPU, and hence it would stay on that pCPU forever.
>> Since that can again create a load imbalance, the SEDF scheduler wants a
>> migration to happen even if it appears unnecessary.
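
To make the glitch concrete, the generic affinity update in schedule.c's
vcpu_set_affinity() behaves roughly as sketched below (simplified; the
locking helpers and surrounding code are assumed here, not copied from the
tree): the migration flag is only set when the current pCPU drops out of
the new mask, so a merely widened mask leaves the vCPU where it is.

    vcpu_schedule_lock_irq(v);

    cpumask_copy(v->cpu_affinity, affinity);

    /*
     * Only force a migration if the pCPU the vCPU last ran on is no
     * longer part of the new mask.  When the mask is widened again to
     * include v->processor, this test is false and the vCPU stays put;
     * that is exactly the case SEDF wants handled, hence its wish to
     * set the flag unconditionally.
     */
    if ( !cpumask_test_cpu(v->processor, v->cpu_affinity) )
        set_bit(_VPF_migrating, &v->pause_flags);

    vcpu_schedule_unlock_irq(v);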
>>
>> Of course, an alternative to checking for SEDF explicitly in
>> vcpu_set_affinity() would be to introduce a flags field in struct
>> scheduler, and have SEDF set an "always-migrate-on-affinity-change"
>> flag.
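
Sketched out, that alternative could look roughly like the fragments
below. The flag name SCHED_F_FORCE_MIGRATE and the flags member are made
up for illustration, and VCPU2OP() is assumed to be schedule.c's per-vCPU
scheduler lookup.

/* Hypothetical flag in struct scheduler (made-up name): */
#define SCHED_F_FORCE_MIGRATE (1U << 0) /* always migrate on affinity change */

/* sched_sedf.c would set it in its struct scheduler instance: */
    .flags = SCHED_F_FORCE_MIGRATE,

/* and vcpu_set_affinity() would test the flag instead of the scheduler: */
    if ( (VCPU2OP(v)->flags & SCHED_F_FORCE_MIGRATE) ||
         !cpumask_test_cpu(v->processor, v->cpu_affinity) )
        set_bit(_VPF_migrating, &v->pause_flags);

That keeps schedule.c free of knowledge about any particular scheduler
while still letting SEDF request the forced migration.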
> 
> Or something like this? I don't like the test for sedf in schedule.c.

As said - to me a flag would seem more appropriate than another
indirect call. But for now I'll commit as is - feel free to submit an
incremental patch.

Jan




 

