
Re: [Xen-devel] dom0less + sched=null => broken in staging


  • To: "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, "julien.grall@xxxxxxx" <julien.grall@xxxxxxx>
  • From: Dario Faggioli <dfaggioli@xxxxxxxx>
  • Date: Tue, 13 Aug 2019 22:34:10 +0000
  • Accept-language: en-US
  • Cc: "George.Dunlap@xxxxxxxxxxxxx" <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 13 Aug 2019 22:36:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [Xen-devel] dom0less + sched=null => broken in staging

On Tue, 2019-08-13 at 19:43 +0100, Julien Grall wrote:
> On 8/13/19 6:34 PM, Dario Faggioli wrote:
> > On Tue, 2019-08-13 at 17:52 +0100, Julien Grall wrote:
> > > 
> > So, unless the flag gets cleared again, or something else happens
> > that makes the vCPU(s) fail the vcpu_runnable() check in
> > domain_unpause()->vcpu_wake(), I don't see why the wakeup that lets
> > the null scheduler start scheduling the vCPU doesn't happen... as it
> > does on x86 or on !dom0less ARM (because, as far as I've understood,
> > it's only dom0less that doesn't work; is this correct?)
> 
> Yes, I quickly tried the NULL scheduler with just dom0 and it boots.
> 
Ok.

> Interestingly, I can't see the log line:
> 
> (XEN) Freed 328kB init memory.
> 
> This is printed as part of init_done(), before CPU0 goes into the idle
> loop.
> 
> Adding more debug, it turns out it is getting stuck when calling
> domain_unpause_by_controller() for dom0; specifically, in vcpu_wake()
> on dom0v0.
> 
Wait... Is this also with just dom0, or when trying dom0less with some
domUs?

> The loop that assigns a pCPU in null_vcpu_wake() turns into an
> infinite loop. Indeed, the loop keeps trying to pick CPU0 for dom0v0,
> which is already used by dom1v0. So the problem is in pick_cpu(), or
> in the data used by it.
> 
Ah, interesting...
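
Just so we're looking at the same picture, here is a minimal,
self-contained sketch of what that retry amounts to. This is NOT
sched_null.c: the names (pcpu_owner, pick_pcpu, vcpu_wake) and the data
layout are made up for illustration, and the retry is bounded only so
the program terminates when run.

#include <stdio.h>

#define NR_PCPUS 4

/* vCPU id currently owning each pCPU; -1 means the pCPU is free. */
static int pcpu_owner[NR_PCPUS];

/* Hypothetical pick: first free pCPU allowed by the affinity mask,
 * or -1 if there is none. */
static int pick_pcpu(unsigned int affinity)
{
    for (int c = 0; c < NR_PCPUS; c++)
        if ((affinity & (1u << c)) && pcpu_owner[c] == -1)
            return c;
    return -1;
}

static void vcpu_wake(int vcpu, unsigned int affinity)
{
    /* In the reported scenario pick can never succeed (dom0v0 only
     * allows CPU0, and dom1v0 already owns it), so an unbounded
     * "retry until it works" loop spins forever. */
    for (int tries = 0; tries < 1000; tries++) {
        int c = pick_pcpu(affinity);
        if (c != -1) {
            pcpu_owner[c] = vcpu;
            printf("vCPU%d assigned to pCPU%d\n", vcpu, c);
            return;
        }
    }
    printf("vCPU%d: no free pCPU in affinity 0x%x -> would spin forever\n",
           vcpu, affinity);
}

int main(void)
{
    for (int c = 0; c < NR_PCPUS; c++)
        pcpu_owner[c] = -1;

    pcpu_owner[0] = 1;   /* dom1v0 already sitting on CPU0 */
    vcpu_wake(0, 0x1);   /* dom0v0 with hard affinity = {CPU0} only */
    return 0;
}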

> It feels to me this is an affinity problem. Note that I didn't
> request 
> to pin dom0 vCPUs.
> 
Yep, looking more closely, I think I've now spotted something
suspicious. I'll send another debug patch.
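
If it really is affinity, the arithmetic would be something like the
following (again just an illustration, not Xen code; the real thing
uses cpumask_t rather than plain integers): the set of pCPUs that pick
can choose from is the vCPU's hard affinity, intersected with the
pool's online CPUs, minus the pCPUs already owned by some other vCPU.
If dom0v0's affinity ends up containing only CPU0 while dom1v0 already
owns CPU0, that set is empty and the wakeup can never place the vCPU.

#include <stdio.h>

int main(void)
{
    unsigned int hard_affinity = 0x1;   /* dom0v0: CPU0 only (assumed) */
    unsigned int pool_online   = 0xf;   /* CPUs 0-3 online in the pool */
    unsigned int already_owned = 0x1;   /* CPU0 taken by dom1v0 */

    unsigned int candidates = hard_affinity & pool_online & ~already_owned;

    printf("candidate pCPUs: 0x%x%s\n", candidates,
           candidates ? "" : " (empty -> nothing for pick to return)");
    return 0;
}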

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

Attachment: signature.asc
Description: This is a digitally signed message part

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

