Re: [Xen-devel] x86's context switch ordering of operations

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] x86's context switch ordering of operations
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 29 Apr 2008 14:58:39 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
On 29/4/08 14:39, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>> ctxt_switch_{from,to} exist only in x86 Xen and are called from a single
>> hook point out from the common scheduler. Thus either they both happen
>> before, or both happen after, current is changed by the common scheduler. It
> 
> Maybe I'm mistaken (or it is being done twice with no good reason), but
> I see a set_current(next) in x86's context_switch() ...

Um, good point, I'd forgotten exactly how the code fitted together. Anyhow,
the reason you see ctxt_switch_{from,to} happening after set_current() is
that context_switch() and __context_switch() can actually be decoupled:
when switching to the idle vcpu we run context_switch() but we do not run
__context_switch().
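
To make that ordering concrete, here is a minimal standalone sketch
(illustrative stand-in types and names such as 'is_idle' and 'loaded_v';
not the actual Xen source):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-ins; not Xen's real types. */
    struct vcpu { int id; bool is_idle; };

    static struct vcpu *current_v;  /* what set_current() updates */
    static struct vcpu *loaded_v;   /* whose arch state is loaded */

    static void set_current(struct vcpu *v) { current_v = v; }

    static void ctxt_switch_from(struct vcpu *v) { printf("from vcpu %d\n", v->id); }
    static void ctxt_switch_to(struct vcpu *v)   { printf("to vcpu %d\n", v->id); }

    /* Switches the arch state; note this runs *after* set_current(). */
    static void __context_switch(struct vcpu *next)
    {
        if ( loaded_v )
            ctxt_switch_from(loaded_v);
        ctxt_switch_to(next);
        loaded_v = next;
    }

    static void context_switch(struct vcpu *next)
    {
        set_current(next);           /* 'current' changes first...       */
        if ( !next->is_idle )        /* ...but the arch hooks are        */
            __context_switch(next);  /* skipped for the idle vcpu (lazy) */
    }

    int main(void)
    {
        struct vcpu idle = { 0, true }, a = { 1, false }, b = { 2, false };
        context_switch(&a);     /* loads a's state */
        context_switch(&idle);  /* current = idle, but a's state stays loaded */
        context_switch(&b);     /* only now: from vcpu 1, to vcpu 2 */
        printf("current is vcpu %d\n", current_v->id);
        return 0;
    }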

> If pages mapped that way survive context switches, then it would
> certainly be possible to map them once and keep them until no longer
> needed. Doing this during the context switch was more an attempt to
> conserve virtual address use (so that other vCPUs of the same guest
> not using this functionality would have less chance of running out
> of space). The background is that I think it will also be necessary
> to extend MAX_VIRT_CPUS beyond 32 at some not too distant point
> (at least in dom0 for CPU frequency management - or do you have
> another scheme in mind for dealing with systems having more than
> 32 CPU threads), resulting in more pressure on the address space.

I'm hoping that Intel's patches to allow uniproc dom0 to perform multiproc
Cx and Px state management will be acceptable. Apart from that, yes we may
have to increase MAX_VIRT_CPUS.

> I know your position here, but - are all 32-on-64 migration/save/restore
> issues meanwhile resolved (that is, can the tools meanwhile deal with
> either size domains no matter whether using a 32- or 64-bit dom0)? If
> not, there may be reasons beyond that of needing vm86 mode that
> might force people to stay with 32-bit Xen. (I certainly agree that there
> are unavoidable limitations, but obviously there is a big difference
> between requiring 64 bytes and 4k per vCPU for this particular
> functionality.)

I don't really see a few kilobytes of overhead per vcpu as very significant.
Given the limitations of the map_domain_page_global() address space, we're
limiting ourselves to probably around 700-800 vcpus. That's quite a lot imo!
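
For rough scale (the ~4MB figure below is an assumption for illustration,
not something established in this thread):

    4MB global-map area / 4kB per mapping    = 1024 page slots
    1024 slots - Xen's other global mappings ~= 700-800 left for per-vcpu pages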

I'm not sure on our position regarding 32-on-64 save/restore compatibility.
Tim Deegan made some patches a while ago, but those were mainly focused on
correctly saving 64-bit HVM domUs from a 32-bit dom0. I also know that
Oracle floated some patches a while ago, but I don't think they ever got
posted for inclusion into xen-unstable. *However*, I do know that I'd
rather we spent time fixing 32-on-64 save/restore compatibility than
fretting about and optimising 32-bit Xen scalability. The former has greater
long-term usefulness.

 -- Keir


