
Re: [xen-4.12-testing test] 169199: regressions - FAIL


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 8 Apr 2022 11:25:28 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, osstest service owner <osstest-admin@xxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>
  • Delivery-date: Fri, 08 Apr 2022 09:25:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 08.04.2022 10:09, Roger Pau Monné wrote:
> On Fri, Apr 08, 2022 at 09:01:11AM +0200, Jan Beulich wrote:
>> On 07.04.2022 10:45, osstest service owner wrote:
>>> flight 169199 xen-4.12-testing real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/169199/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>  test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail 
>>> REGR. vs. 168480
>>
>> While the subsequent flight passed, I thought I'd still look into
>> the logs here since the earlier flight had failed too. The state of
>> the machine when the debug keys were issued is somewhat odd (and
>> similar to the earlier failure's): 11 of the 56 CPUs try to
>> acquire (apparently) Dom0's event lock, from evtchn_move_pirqs().
>> All other CPUs are idle. The test failed because the sole guest
>> didn't reboot in time. Whether the failure is actually connected to
>> this apparent lock contention is unclear, though.
>>
>> One can further see that all of the roughly 70 ECS_PIRQ ports are
>> bound to vCPU 0 (which makes me wonder about the lack of balancing
>> inside Dom0 itself, but that's unrelated). This means that all
>> other vCPU-s have nothing at all to do in evtchn_move_pirqs().
>> Since this moving of pIRQ-s is an optimization (the value of which
>> has been questioned in the past, iirc), I wonder whether we
>> shouldn't add a check to the function for the list being empty
>> prior to actually acquiring the lock. I guess I'll make a patch and
>> post it as RFC.
> 
> Seems good to me.
> 
> I think a better model would be to migrate the PIRQs when they fire,
> or even better when the EOI is performed, so that Xen doesn't
> pointlessly migrate PIRQs for vCPUs that aren't running.

Well, the function only marks the IRQ for migration (IRQ_MOVE_PENDING
on x86). IRQs are only ever actually migrated in the process of
finishing the handling of an actual instance of the IRQ, as doing it
anywhere else wouldn't be safe / race-free.
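
For concreteness, a minimal sketch of the kind of check I have in mind,
written from memory rather than against the actual tree, so the lock
type and field names may not match the real evtchn_move_pirqs() exactly:

    void evtchn_move_pirqs(struct vcpu *v)
    {
        struct domain *d = v->domain;
        const cpumask_t *mask = cpumask_of(v->processor);
        unsigned int port;
        struct evtchn *chn;

        /*
         * Proposed addition: don't acquire d->event_lock at all when no
         * pIRQ event channels are bound to this vCPU.  The unlocked check
         * is racy, but missing a just-bound pIRQ here merely delays the
         * affinity adjustment until a later invocation.
         */
        if ( !v->pirq_evtchn_head )
            return;

        spin_lock(&d->event_lock);
        for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
        {
            chn = evtchn_from_port(d, port);
            pirq_set_affinity(d, chn->u.pirq.irq, mask);
        }
        spin_unlock(&d->event_lock);
    }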

>> And of course in a mostly idle system the other aspect here (again)
>> is: Why are vCPU-s moved across pCPU-s in the first place? I've
>> observed (and reported) such seemingly over-aggressive vCPU
>> migration before, most recently in the context of putting together
>> 'x86: make "dom0_nodes=" work with credit2'. Is there anything that
>> can be done about this in credit2?
>>
>> A final, osstest-related question is: Does it make sense to run Dom0
>> with 56 vCPU-s, one per pCPU? The bigger a system, the less useful
>> it seems to me to also have an equally big Dom0, when the purpose of
>> the system is to run guests, not meaningful other workloads in Dom0.
>> While this is Xen's default (i.e. in the absence of command line
>> options restricting Dom0), I don't think it represents typical use
>> of Xen in the field.
> 
> I could add a suitable dom0_max_vcpus parameter to osstest.  XenServer
> uses 16 for example.

I'm afraid a fixed number won't do, especially since iirc there are
systems with just a few cores in the pool (and you don't want to
over-commit by default). While it may not suffice for extreme cases,
I'd suggest considering ceil(sqrt(nr_cpus)). But of course this
requires that osstest has a priori knowledge of how many (usable)
CPUs each system (pair) has, so it can form such a system-dependent
command line option.
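
Just to illustrate the arithmetic (a hypothetical helper, not actual
osstest code): ceil(sqrt(56)) = 8, ceil(sqrt(16)) = 4, ceil(sqrt(4)) = 2,
so a 56-pCPU host would end up with an 8-vCPU Dom0.

    /*
     * Hypothetical helper: suggested Dom0 vCPU count as ceil(sqrt(nr_cpus)),
     * computed with integers only so no libm is needed.
     */
    static unsigned int dom0_vcpus_suggestion(unsigned int nr_cpus)
    {
        unsigned int n = 1;

        while ( n * n < nr_cpus )
            ++n;

        return n;
    }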

> Albeit not having such a parameter has likely led you to figure out
> this issue, so it might not be so bad.  I agree however that it's
> likely better to test scenarios closer to real-world usage.

True. One might conclude that we need both then. But of course that
would make each flight even more resource-hungry.

Jan