Re: [Xen-devel] [PATCH 03 of 10 v2] xen: sched_credit: let the scheduler know about node-affinity
- To: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
- From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
- Date: Thu, 20 Dec 2012 09:25:58 +0100
- Cc: Marcus Granado <Marcus.Granado@xxxxxxxxxxxxx>, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxx>, Anil Madhavapeddy <anil@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxx, Jan Beulich <JBeulich@xxxxxxxx>, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, Matt Wilson <msw@xxxxxxxxxx>
- Delivery-date: Thu, 20 Dec 2012 08:26:31 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 20.12.2012 09:16, Dario Faggioli wrote:
> On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:
>> On 19.12.2012 20:07, Dario Faggioli wrote:
>>> [...]
>>> This change modifies the VCPU load balancing algorithm (for the
>>> credit scheduler only), introducing a two-step logic.
>>> During the first step, we use the node-affinity mask. The aim is
>>> to give precedence to the CPUs where it is known to be preferable
>>> for the domain to run. If that fails to find a valid PCPU, the
>>> node-affinity is just ignored and, in the second step, we fall
>>> back to using cpu-affinity only.
>>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>>> ---
>>> Changes from v1:
>>>  * CPU mask variables moved off the stack, as requested during
>>>    review. As per the comments in the code, having them in the private
>>>    (per-scheduler instance) struct could have been enough, but it would be
>>>    racy (again, see comments). For that reason, use a global bunch of
>>>    them (via per_cpu());
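The two-step pick described in the quoted patch could be sketched roughly as follows. This is an illustrative simplification, not the actual patch code: plain bitmasks stand in for Xen's multi-word cpumask_t, and the function and parameter names are hypothetical.

```c
#include <stdint.h>

/* Sketch of the two-step logic: CPU sets are modeled as 32-bit
 * bitmasks rather than Xen's cpumask_t; names are illustrative. */
typedef uint32_t cpumask_t;

static int pick_cpu(cpumask_t cpu_affinity, cpumask_t node_affinity,
                    cpumask_t online)
{
    cpumask_t valid = cpu_affinity & online;

    /* Step 1: prefer CPUs in the intersection of cpu-affinity
     * and node-affinity, if that intersection is non-empty. */
    cpumask_t preferred = valid & node_affinity;

    /* Step 2: otherwise, fall back to cpu-affinity alone. */
    cpumask_t candidates = preferred ? preferred : valid;

    if (!candidates)
        return -1;                  /* no usable CPU at all */

    /* Pick the lowest-numbered candidate (ffs-style). */
    int cpu = 0;
    while (!(candidates & (1u << cpu)))
        cpu++;
    return cpu;
}
```

The real scheduler, of course, applies further balancing criteria (idlers, load) within each step; the point here is only the ordering of the two masks.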
>> Wouldn't it be better to put the mask in the scheduler private per-pcpu area?
>> This could be applied to several other instances of cpu masks on the stack,
>> too.
> Yes, as I tried to explain, if it's per-cpu it should be fine, since
> credit has one runq per CPU and hence the runq lock is enough for
> serialization.
>
> BTW, can you be a little more specific about where you're suggesting
> to put it? I'm sorry, but I'm not sure I've figured out what you mean
> by "the scheduler private per-pcpu area"... Do you perhaps mean making
> it a member of `struct csched_pcpu'?
Yes, that's what I would suggest.
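The suggestion might look like the sketch below: keep the scratch mask in the scheduler's private per-pCPU struct, where it is serialized by that CPU's runq lock. The struct is heavily trimmed, the Xen types are replaced by stand-ins so the sketch is self-contained, and the field name is hypothetical, not the actual Xen code.

```c
#include <stdint.h>

/* Stand-ins for Xen types, just so the sketch compiles standalone. */
struct list_head { struct list_head *next, *prev; };
typedef uint32_t cpumask_t;     /* real Xen uses a multi-word bitmap */

/* Trimmed, hypothetical shape of the suggestion: the scratch mask
 * lives in the credit scheduler's private per-pCPU data instead of
 * on the stack or in a global per_cpu() variable.  It is protected
 * by this CPU's runq lock, so no extra serialization is needed. */
struct csched_pcpu {
    struct list_head runq;      /* per-CPU run queue (existing field) */
    cpumask_t balance_mask;     /* scratch mask for the two-step
                                 * balancing; name is illustrative */
};
```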
Juergen
--
Juergen Gross Principal Developer Operating Systems
PBG PDG ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28 Internet: ts.fujitsu.com
D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel