
Re: [Xen-devel] [PATCH 2/2] xen/sched: don't disable scheduler on cpus during suspend


  • To: Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Sat, 16 Mar 2019 14:14:42 +0100
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  • Delivery-date: Sat, 16 Mar 2019 13:15:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 16/03/2019 04:01, Dario Faggioli wrote:
> On Thu, 2019-03-14 at 10:59 +0100, Juergen Gross wrote:
>> Today there is special handling in cpu_disable_scheduler() for suspend
>> by forcing all vcpus to the boot cpu. In fact there is no need for that
>> as during resume the vcpus are put on the correct cpus again.
>>
> So, basically, you're saying that moving the vcpus to BSP / CPU0,
> before suspending is not necessary, right?

Right.

> I thought it was, because going into suspend involves things like
> cpu_schedule_down(), which does, among other things,
> SCHED_OP(free_pdata) for freeing the scheduler's data structs, and
> hence we did not want vcpus "around".

Suspend is done from a tasklet with all domains "frozen", so the only
active vcpus in the system are the idle ones, meaning that no vcpu is
in any runqueue.
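
To make that ordering concrete, here is a rough, self-contained model of
the suspend flow (plain C, compilable on its own; the function names
merely mirror the real Xen ones, the bodies are stubs made up for
illustration):

/* Rough, self-contained model of the suspend ordering described above
 * (not Xen source; the stubs only mirror the real function names). */
#include <stdio.h>

static void freeze_domains(void)       { puts("all domains paused"); }
static void disable_nonboot_cpus(void) { puts("APs offlined; only idle vcpus were runnable"); }
static void enable_nonboot_cpus(void)  { puts("APs brought back up (some may fail)"); }
static void thaw_domains(void)         { puts("restore_vcpu_affinity(), then unpause"); }

int main(void)
{
    freeze_domains();          /* no non-idle vcpu is runnable from here on */
    disable_nonboot_cpus();    /* cpu_schedule_down() can free pdata safely */
    /* ... machine sleeps and resumes ... */
    enable_nonboot_cpus();
    thaw_domains();
    return 0;
}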

> 
>> So we can just omit the call of cpu_disable_scheduler() when offlining
>> a cpu due to suspend and on resuming we can omit taking the schedule
>> lock for selecting the new processor.
>>
> Well, in theory, we should hold the lock when using
> 'cpumask_scratch_cpu(cpu)'. Not sure this is an actual problem in this
> case, but still...

I think there is no problem as there is no scheduling activity.

>> In restore_vcpu_affinity() we should be careful when applying affinity
>> as the cpu might not have come back to life.
>>
> But restore_vcpu_affinity() is done, in a loop, for all vcpus of all
> domains, in thaw_domains(), which in turn comes after
> enable_nonboot_cpus(), which I think is supposed to bring every CPU
> back up.... What am I missing?

There is no guarantee that all cpus will be up again after suspend. Some
might fail to come up.

> 
>> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
>> index e89bb67e71..7b5ce18426 100644
>> --- a/xen/common/cpupool.c
>> +++ b/xen/common/cpupool.c
>> @@ -313,7 +313,7 @@ static long cpupool_unassign_cpu_helper(void *info)
>>       * cpu_disable_scheduler(), and at the bottom of this function.
>>       */
>>      rcu_read_lock(&domlist_read_lock);
>> -    ret = cpu_disable_scheduler(cpu);
>> +    ret = (system_state == SYS_STATE_suspend) ? 0 : cpu_disable_scheduler(cpu);
>>
> Mmm... How can this function be called with state = STATE_suspend ?

Meh, my bad. I masked the wrong call of cpu_disable_scheduler(). It
should have been the one in __cpu_disable().
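
Just to illustrate where the check would go (a sketch only, not the
actual follow-up patch; cpu_disable_sketch() and the stubs around it are
invented for this example):

/* Sketch only (not the real Xen __cpu_disable()): shows where the
 * suspend check belongs; everything here is a stub for illustration. */
#include <stdio.h>

static enum { SYS_STATE_active, SYS_STATE_suspend } system_state;

static int cpu_disable_scheduler(unsigned int cpu)
{
    printf("moving vcpus away from cpu %u\n", cpu);
    return 0;
}

static void cpu_disable_sketch(unsigned int cpu)
{
    /* During suspend all domains are frozen, so no vcpu needs moving. */
    if ( system_state != SYS_STATE_suspend )
        cpu_disable_scheduler(cpu);
}

int main(void)
{
    system_state = SYS_STATE_suspend;
    cpu_disable_sketch(1);     /* prints nothing: the call is skipped */
    return 0;
}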

> 
>> @@ -735,31 +708,36 @@ void restore_vcpu_affinity(struct domain *d)
>>  
>>          ASSERT(!vcpu_runnable(v));
>>  
>> -        lock = vcpu_schedule_lock_irq(v);
>> -
>> -        if ( v->affinity_broken )
>> -        {
>> -            sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
>> -            v->affinity_broken = 0;
>> -
>> -        }
>> -
>>          /*
>> -         * During suspend (in cpu_disable_scheduler()), we moved every vCPU
>> -         * to BSP (which, as of now, is pCPU 0), as a temporary measure to
>> -         * allow the nonboot processors to have their data structure freed
>> -         * and go to sleep. But nothing guarantees that the BSP is a valid
>> -         * pCPU for a particular domain.
>> +         * Re-assign the initial processor as after resume we have no
>> +         * guarantee the old processor has come back to life again.
>>           *
>>           * Therefore, here, before actually unpausing the domains, we should
>>           * set v->processor of each of their vCPUs to something that will
>>           * make sense for the scheduler of the cpupool in which they are in.
>>           */
>>          cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>> -                    cpupool_domain_cpumask(v->domain));
>> -        v->processor = cpumask_any(cpumask_scratch_cpu(cpu));
>> +                    cpupool_domain_cpumask(d));
>> +        if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
>> +        {
>> +            if ( v->affinity_broken )
>> +            {
>> +                sched_set_affinity(v, v->cpu_hard_affinity_saved, NULL);
>> +                v->affinity_broken = 0;
>> +                cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>> +                            cpupool_domain_cpumask(d));
>> +            }
>>  
> I'm not sure I follow this; I'll think deeper, but, in the meanwhile...
> 
>> -        spin_unlock_irq(lock);
>> +            if ( cpumask_empty(cpumask_scratch_cpu(cpu)) )
>> +            {
>> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>> +                sched_set_affinity(v, &cpumask_all, NULL);
>> +                cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
>> +                            cpupool_domain_cpumask(d));
>>
> ... what does it mean we're breaking the affinity while trying to
> restore it?
> 
> When/where is this vcpu getting its real affinity back in place then?

This happens only in case a cpu didn't come up again after suspend.
There is no way to restore the real affinity in that case.
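
To make the fallback order concrete, here is a stand-alone model of the
logic in the hunk above (plain C with cpumasks modeled as 64-bit words;
pick_affinity() is an illustrative helper invented for this sketch, not
a Xen function):

/* Self-contained sketch of the affinity fallback in the quoted hunk:
 * try the current hard affinity first, then the saved one, and only if
 * both miss every online cpu of the pool break affinity to "all". */
#include <stdint.h>
#include <stdio.h>

static uint64_t pick_affinity(uint64_t hard, uint64_t saved_hard,
                              int affinity_broken, uint64_t pool_online)
{
    uint64_t scratch = hard & pool_online;

    if ( !scratch && affinity_broken )
        scratch = saved_hard & pool_online;  /* restore pre-suspend affinity */

    if ( !scratch )                          /* cpu never came back up */
        scratch = ~0ULL & pool_online;       /* break affinity: allow all */

    return scratch;
}

int main(void)
{
    /* vcpu pinned to cpu 3, which failed to come up; cpus 0-2 online. */
    uint64_t mask = pick_affinity(1ULL << 3, 1ULL << 3, 1, 0x7);
    printf("resulting mask: %#llx\n", (unsigned long long)mask);  /* 0x7 */
    return 0;
}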


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

