xen-devel

Re: [Xen-devel] Cpu pools discussion

To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Tue, 28 Jul 2009 12:15:47 +0200
Cc: George Dunlap <dunlapg@xxxxxxxxx>, Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
Delivery-date: Tue, 28 Jul 2009 03:16:16 -0700
In-reply-to: <20090728091929.GI5235@xxxxxxxxxxxxxxxxxxxxx>
Organization: Fujitsu Technology Solutions
References: <de76405a0907270820gd76458cs34354a61cc410acb@xxxxxxxxxxxxxx> <4A6E492D.201@xxxxxxxxxx> <20090728091929.GI5235@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla-Thunderbird 2.0.0.22 (X11/20090707)

Tim Deegan wrote:
> At 01:41 +0100 on 28 Jul (1248745277), Zhigang Wang wrote:
>> A use case from me: I want a pool that passes pcpus through to the
>> mission-critical domains. A scheduling algorithm would map vcpus to
>> pcpus one to one in this pool. That would give reliable hard
>> partitioning, although it loses some of the benefits of virtualization.
> 
> That's easily done by setting affinity masks in the tools, without
> needing any mechanism in Xen.

More or less.
You have to set affinity masks for ALL domains, not just the critical ones,
to keep their vcpus off the "special" cpus.
And once the cpus are carved up that way, scheduling weights are no longer
reliable.
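
For illustration, here is a rough sketch (not from any patch; the domain
names, vcpu counts and the pcpu split are made-up assumptions) of what
the pinning in the tools looks like with the
xm vcpu-pin <domain> <vcpu> <cpulist> interface:

#!/usr/bin/env python
# Sketch: partition pcpus purely with affinity masks set from the tools,
# without any pool mechanism in Xen.  All names and numbers below are
# illustrative assumptions.
import subprocess

CRITICAL_DOM = "critical-dom"            # hypothetical mission-critical domain
OTHER_DOMS   = {"dom1": 2, "dom2": 4}    # every other domain -> its vcpu count
DEDICATED    = [4, 5, 6, 7]              # pcpus reserved for the critical domain
SHARED       = "0-3"                     # pcpus left for everything else

def vcpu_pin(domain, vcpu, cpulist):
    # xm vcpu-pin <domain> <vcpu> <cpulist> sets the affinity mask.
    subprocess.check_call(["xm", "vcpu-pin", domain, str(vcpu), cpulist])

# 1:1 mapping of the critical domain's vcpus onto its dedicated pcpus.
for vcpu, pcpu in enumerate(DEDICATED):
    vcpu_pin(CRITICAL_DOM, vcpu, str(pcpu))

# The catch: ALL other domains have to be pinned as well, otherwise the
# scheduler is still free to put their vcpus on the "special" cpus.
for dom, nvcpus in OTHER_DOMS.items():
    for vcpu in range(nvcpus):
        vcpu_pin(dom, vcpu, SHARED)

This per-domain bookkeeping, which the tools would have to repeat for
every new domain, is the sort of thing the cpu pools proposal keeps in
one place.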


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 636 47950
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Otto-Hahn-Ring 6                        Internet: ts.fujitsu.com
D-81739 Muenchen                 Company details: ts.fujitsu.com/imprint.html
