
Re: [xen-4.12-testing test] 169199: regressions - FAIL


  • To: Julien Grall <julien@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 8 Apr 2022 17:26:15 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "osstest service owner" <osstest-admin@xxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, "Dario Faggioli" <dfaggioli@xxxxxxxx>
  • Delivery-date: Fri, 08 Apr 2022 15:26:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Apr 08, 2022 at 12:24:27PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 08/04/2022 12:16, Roger Pau Monné wrote:
> > On Fri, Apr 08, 2022 at 12:08:02PM +0100, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 08/04/2022 12:01, Roger Pau Monné wrote:
> > > > > > I could add a suitable dom0_max_vcpus parameter to osstest.
> > > > > > XenServer uses 16 for example.
> > > > > 
> > > > > I'm afraid a fixed number won't do, especially since IIRC there
> > > > > are systems with just a few cores in the pool (and you don't
> > > > > want to over-commit by default).
> > > > 
> > > > But this won't over-commit; it would just assign dom0 16 vCPUs at
> > > > most. If the system has fewer than 16 vCPUs, that's what would be
> > > > assigned to dom0.
> > > 
> > > AFAICT, this is not the case on Arm. If you ask for 16 vCPUs, then
> > > you will get that number even if there are 8 pCPUs.
> > > 
> > > In fact, the documentation of dom0_max_vcpus suggests that the number
> > > of vCPUs can be more than the number of pCPUs.
> > 
> > It was my understanding that you could only achieve that by using the
> > min-max nomenclature, so in order to force 16 vCPUs always you would
> > have to use:
> > 
> > dom0_max_vcpus=16-16
> > 
> > Otherwise the usage of '_max_' in the option name is pointless, and it
> > should instead be dom0_vcpus.
> > 
> > Anyway, I could use:
> > 
> > dom0_max_vcpus=1-16
> > 
> > Which is unambiguous and should get us at least 1 vCPU, or 16 vCPUs at
> > most.
> 
> Unfortunately, Arm doesn't support the min-max nomenclature.

Hm, can we update the command line documentation then?

There's no mention that the min-max nomenclature is only available on
x86. I assume it's not possible to share the logic here so that both
Arm and x86 parse the option in the same way?

> > 
> > But given Jan's suggestion we might want to go for something more
> > complex?
> 
> I think we already have some knowledge about each HW (i.e. grub vs uboot) in
> Osstest. So I think it would be fine to extend the knowledge and add the
> number of CPUs.

I don't think we need to store this information anywhere. Since we
first install plain Debian and then install Xen, we can always fetch
the number of physical CPUs while running plain Linux and use that to
calculate the amount to give to dom0?

Jan suggested using ceil(sqrt(nr_cpus)).
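For reference, that heuristic would boil down to something like the
following (a hypothetical sketch, not actual osstest code; the nr_cpus
input is assumed to come from e.g. `nproc` on the plain Debian install):

```python
import math

# Jan's suggested heuristic: give dom0 ceil(sqrt(nr_cpus)) vCPUs,
# where nr_cpus is the physical CPU count of the host.
def dom0_vcpus(nr_cpus: int) -> int:
    return math.ceil(math.sqrt(nr_cpus))

# An 8-pCPU host would get 3 dom0 vCPUs; a 64-pCPU host would get 8.
print(dom0_vcpus(8), dom0_vcpus(64))
```

That keeps small boxes at 1-2 vCPUs while capping big hosts well below
their pCPU count, avoiding the over-commit concern above.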

Thanks, Roger.
