xen-devel

Re: [Xen-devel] Cpu pools discussion

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Thu, 30 Jul 2009 07:46:38 +0200
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, George Dunlap <dunlapg@xxxxxxxxx>, Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Keir Fraser wrote:
> On 29/07/2009 13:33, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:
> 
>>>> Would you feel better if I tried to eliminate the reason for
>>>> cpupool_borrow?
>>>> This function is needed only for continue_hypercall_on_cpu outside of the
>>>> current pool. I think it should be possible to replace those calls with
>>>> on_selected_cpus, with less impact on the whole system.
>>> Some of the stuff in the continuation handlers cannot be executed in irq
>>> context. 'Fixing' that would make many of the users ugly and less
>>> maintainable, so getting borrow/return right is the better answer I think.
>> The alternative would be a tasklet set up in irq context.
>> And we are only speaking of 3 users.
>> I could try a patch and then we could compare the two solutions. What do you
>> think?
> 
> This would work for a couple of callers, but some really need to be running
> in dom0 context. Or, more precisely, not the context of some other domain
> (softirqs synchronously preempt execution of a vcpu context). This can lead
> to subtle deadlocks, for example in freeze_domains() and in __cpu_die(),
> because we may need the vcpu we have synchronously preempted to make some
> progress for ourselves to be able to get past a spin loop.

Okay.
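
To make the deadlock scenario concrete, here is a minimal userspace analogy
(plain pthreads and signals only, not Xen code; all names are made up for
illustration). The signal handler plays the role of the softirq continuation:
it synchronously preempts the "vcpu" loop and then spins waiting for progress
that only the preempted loop can make, so it never returns:

/* Userspace analogy only (pthreads + signals), not Xen code: the handler
 * stands in for a continuation that synchronously preempts a vcpu and then
 * spins on progress only that vcpu can make.
 * Build with: gcc -pthread deadlock.c */
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t request, ack;

static void continuation(int sig)
{
    request = 1;            /* ask the preempted loop to do something */
    while ( !ack )
        ;                   /* spins forever: the loop below cannot run
                             * again until this handler has returned */
}

static void *vcpu_loop(void *arg)
{
    for ( ; ; )
        if ( request )
            ack = 1;        /* the only place 'ack' is ever set */
    return NULL;
}

int main(void)
{
    pthread_t t;

    signal(SIGUSR1, continuation);
    pthread_create(&t, NULL, vcpu_loop, NULL);
    sleep(1);
    pthread_kill(t, SIGUSR1);    /* "synchronously preempt" the vcpu */
    pthread_join(t, NULL);       /* never returns */
    return 0;
}

With a dedicated worker context instead, the continuation would be scheduled
on its own and the preempted loop could keep making progress.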

> Another alternative might be to create a 'hypervisor thread', either
> dynamically, or a per-cpu worker thread, and do the work in that. Of course
> that has its own complexities and these threads would also have their own
> interactions with cpu pools to keep them pinned on the appropriate physical
> cpu. I don't know whether this would really work out simpler.

There could be a fairly simple solution for this: what you are suggesting
sounds like a "hypervisor domain", similar to the idle domain but with high
priority and all vcpus normally blocked.

The interactions of this domain with cpupools would be the same as for the
idle domain.

I think this approach could be attractive, but the question is whether the
pros outweigh the cons. OTOH such a domain could open up interesting
opportunities.
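
As a very rough sketch of that pattern (again only a userspace pthread
analogy, not Xen code; the structure and names are made up): a per-cpu worker
that normally stays blocked and only wakes up to run deferred work in its own
context, instead of on top of a preempted vcpu:

/* Userspace analogy of a "hypervisor thread" / "hypervisor domain" vcpu:
 * normally blocked, woken only to run deferred work in its own context.
 * Build with: gcc -pthread worker.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct worker {
    pthread_mutex_t lock;
    pthread_cond_t  kick;
    void          (*fn)(void *);    /* pending work, NULL when idle */
    void           *arg;
};

static struct worker w = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .kick = PTHREAD_COND_INITIALIZER,
};

static void *worker_main(void *opaque)
{
    struct worker *wk = opaque;

    for ( ; ; )
    {
        void (*fn)(void *);
        void *arg;

        pthread_mutex_lock(&wk->lock);
        while ( !wk->fn )                  /* "all vcpus blocked" state */
            pthread_cond_wait(&wk->kick, &wk->lock);
        fn = wk->fn;
        arg = wk->arg;
        wk->fn = NULL;
        pthread_mutex_unlock(&wk->lock);

        fn(arg);                           /* run in the worker's context */
    }
    return NULL;
}

/* Hand work to the worker instead of running it where we are. */
static void worker_queue(void (*fn)(void *), void *arg)
{
    pthread_mutex_lock(&w.lock);
    w.fn = fn;
    w.arg = arg;
    pthread_cond_signal(&w.kick);
    pthread_mutex_unlock(&w.lock);
}

static void deferred(void *arg)
{
    printf("deferred work: %s\n", (char *)arg);
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, worker_main, &w);
    worker_queue(deferred, "hello");
    sleep(1);                              /* give the worker time to run */
    return 0;
}

Keeping such a worker on the right physical cpu would then be the same
affinity question as for the idle domain's vcpus.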


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 636 47950
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Otto-Hahn-Ring 6                        Internet: ts.fujitsu.com
D-81739 Muenchen                 Company details: ts.fujitsu.com/imprint.html
