
Re: [Xen-devel] [PATCH v2 2/2] sched_credit2.c: runqueue_per_core code



On 03/18/2015 03:59 PM, Jan Beulich wrote:
>>>> On 18.03.15 at 16:26, <george.dunlap@xxxxxxxxxxxxx> wrote:
>> In both cases there's a slight risk in using system_state to determine
>> whether to rely on cpu_data[] or not, because there's actually a window
>> for each processor after system_state == SYS_STATE_smp_boot where
>> cpu_data[] is *not* initialized, but it's not obvious from looking at
>> the data itself.  If the callback mechanisms ever change order with the
>> cpu_data[] initializations in the future, we risk a situation where
>> credit2 silently regresses to using a single massive runqueue.
> 
> But such an ordering change is extremely unlikely, since the
> CPU_STARTING notification specifically tells you that the given
> CPU is now ready for normal use, which includes it having its
> topology related data set up.

I didn't mean so much that CPU_STARTING might change meaning, but that
someone looking only at schedule.c might think that it would make sense
to call alloc_pdata() somewhere else, before cpu_data[] had been
populated.  Even if they look at init_pcpu() in credit2, they're
unlikely to know that cpu_to_socket() isn't valid until after
boot_secondary() has happened; and if they make the change and boot it,
everything will seem to work perfectly for credit2 -- except that there
will be a single global runqueue.
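
Just to make the failure mode concrete, here is a rough sketch -- not the
actual credit2 code; my_init_pcpu() and my_runqueue_map[] are made-up
names, while cpu_to_socket() and cpu_data[] are the real interfaces under
discussion -- of the kind of per-cpu init that goes wrong:

    /* Illustrative only; assumes the normal Xen build environment. */
    static unsigned int my_runqueue_map[NR_CPUS];

    static void my_init_pcpu(unsigned int cpu)
    {
        /*
         * Only meaningful once boot_secondary() has filled in
         * cpu_data[cpu]; called earlier, this returns the same
         * (stale) value for every cpu.
         */
        unsigned int socket = cpu_to_socket(cpu);

        /* Every cpu silently ends up in the same runqueue. */
        my_runqueue_map[cpu] = socket;
    }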

If you, Dario, and I don't happen to catch it -- or don't happen to
remember the exact details of the constraints -- nobody may notice it
until a year later.

This is the point of having ASSERTs -- so that we don't have to worry
about remembering and catching all these crazy constraints.
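
Something along these lines would turn a silent regression into an
immediate, visible failure.  It's a sketch only -- the exact predicate is
the open question, and cpu_topology_is_initialised() is a hypothetical
helper standing in for "cpu_data[cpu] has been populated":

    static void my_init_pcpu(unsigned int cpu)
    {
        /*
         * Catch at development time anyone calling this before the
         * cpu's topology data is set up, instead of silently
         * collapsing everything into one runqueue.
         */
        ASSERT(cpu_topology_is_initialised(cpu));   /* hypothetical check */

        /* ... runqueue selection based on cpu_to_socket(cpu) ... */
    }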

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel