
Re: [PATCH v3 4/6] xen/cpupool: Create different cpupools at boot time


  • To: Julien Grall <julien@xxxxxxx>
  • From: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Date: Wed, 23 Mar 2022 13:58:10 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>
  • Delivery-date: Wed, 23 Mar 2022 13:58:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v3 4/6] xen/cpupool: Create different cpupools at boot time


> On 22 Mar 2022, at 14:01, Julien Grall <julien@xxxxxxx> wrote:
> 
> Hi,
> 
> On 22/03/2022 09:52, Luca Fancellu wrote:
>>>>> 
>>>>> Can you document why this is necessary on x86 but not on other 
>>>>> architectures?
>>>> Hi Julien,
>>>> I received a warning from Juergen here:
>>>> https://patchwork.kernel.org/comment/24740762/ that at least on x86 there
>>>> could be some problems if cpu0 is not in cpupool0. I tested it on Arm and
>>>> it was working fine; I didn't find any restriction.
>>> 
>>> What exactly did you test on Arm?
>>> 
>> I have tested starting/stopping some guests, moving CPUs between cpupools,
>> creating/destroying cpupools, and shutting down Dom0.
>> [ from your last mail ]
>>>>> 
>>>>> If dom0 must run on core0 and core0 is a little core, then you cannot
>>>>> build a system where dom0 runs on big cores.
>>>>> If the limitation is not there, you can build such a configuration
>>>>> without any dependency on the boot core type.
>>>> This might not be completely clear, so let me rephrase:
>>>> In the current system:
>>>> - dom0 must run on cpupool-0
>>> 
>>> I don't think we need this restriction. In fact, with this series it will
>>> become more of a problem because the cpupool ID will be based on how we
>>> parse the Device-Tree.
>>> 
>>> So for dom0, we need to specify explicitly the cpupool to be used.
>>> 
>>>> - cpupool-0 must contain the boot core
>>>> - consequence: dom0 must run on the boot core
>>>> If the boot core is little, you cannot build a system where dom0 runs
>>>> only on the big cores.
>>>> Removing the second limitation (which is not required on Arm) makes this
>>>> possible.
>>> 
>>> IMHO removing the second restriction is a lot more risky than removing the 
>>> first one.
>> I see your point. My concern about moving Dom0 to another cpupool, different
>> from cpupool0, is that it gives the opportunity to destroy cpupool0 (we
>> can't let that happen) or to remove every CPU from cpupool0.
> 
> From my understanding a cpupool can only be destroyed when there are no more
> CPUs in the pool. Given that cpu0 has to be in pool0, this should prevent
> the pool from being destroyed.
> 
> Now, it is quite possible that we don't have a check to prevent CPU0 from
> being removed from cpupool0. If so, then I would argue we should add the
> check; otherwise it is pointless to prevent cpu0 from initially being added
> to a pool other than pool0 when it can be moved afterwards.
> 

Hi Julien,

I've done a test on FVP; the first finding is that cpu0 can be removed from
Pool-0, there is no check.
Afterwards I created another pool and assigned a CPU to it, then called
xl cpupool-destroy: the tool removes every CPU from the pool before
destroying it.

Do you think the check that prevents CPU0 from being removed from Pool-0
should be done in the tools or in Xen?
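
For comparison, a tools-side guard could be a few lines run before the request
is issued to Xen; below is a minimal sketch with hypothetical names, not the
actual xl code. The weakness of a tools-only check is that other toolstacks,
or anything issuing the sysctl directly, would bypass it, which to me is an
argument for enforcing the invariant in Xen:

#include <stdio.h>

/* Hypothetical helper, sketch only: refuse to remove cpu0 from the
 * cpupool with id 0 before asking Xen to do the unassign. */
static int check_cpu_removal(unsigned int poolid, unsigned int cpu)
{
    if ( poolid == 0 && cpu == 0 )
    {
        fprintf(stderr, "cpu0 must remain in the cpupool with id 0\n");
        return -1; /* caller would abort the cpupool-cpu-remove */
    }

    return 0;
}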

On the Xen side instead, with a change like this it could be possible to
protect cpu0:

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index a6da4970506a..703005839dd6 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -585,6 +585,12 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     if ( !cpu_online(cpu) )
         return -EINVAL;
 
+    if ( !c->cpupool_id && !cpu )
+    {
+        debugtrace_printk("Cpu0 must be in pool with id 0.\n");
+        return -EINVAL;
+    }
+
     master_cpu = sched_get_resource_cpu(cpu);
     ret = cpupool_unassign_cpu_start(c, master_cpu);
     if ( ret )

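If I'm reading the unassign path correctly, with this check in place an
"xl cpupool-cpu-remove Pool-0 0" would fail with EINVAL, while removing any
other CPU from Pool-0 (or CPUs from other pools) would still work; and since
moving a CPU between pools goes through an unassign first, cpu0 could then
never leave Pool-0.
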
Cheers,
Luca


> Cheers,
> 
> -- 
> Julien Grall


 

