Re: [Xen-devel] big local array in routine in hypervisor

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] big local array in routine in hypervisor
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 27 Jan 2009 09:17:56 +0000
Cc:
Delivery-date: Tue, 27 Jan 2009 01:17:44 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C5A47D26.21DBA%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmAV+8tXLbh+UmsmkaALSk1PFpVyQABXPw6AACwIUc=
Thread-topic: [Xen-devel] big local array in routine in hypervisor
User-agent: Microsoft-Entourage/12.15.0.081119

On 27/01/2009 08:58, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>> So you want us to wastefully pre-reserve some space for you, but call it
>> the 'stack' to assuage your guilt? ;-) It's common practice not to have
>> very large stacks in kernels, since pre-reservation is wasteful, dynamic
>> growth is not necessarily feasible to implement, and kernel code doesn't
>> tend to need lots of local storage or recursion. In Linux you'd be limited
>> to 4kB, and there's a lot more code there living under that stricter regime.
> 
> I noticed that the p2m populate-on-demand code also allocates a lot (10kB) of
> stack (in fact this is a bug since the stack is only 8kB!). If these new stack
> users aren't easy to implement in other ways, and are definitely not
> reentrant and do not execute in interrupt context (so we know there's
> only one such big
> allocation at a time) we could perhaps double the primary stack size to 16kB,
> or even to 32kB.
> 
> It's a slippery slope though, determining how much stack is enough and how big
> a local array is too big. I generally end up having to check and fix big stack
> frames from time to time, and I'm not sure that even doubling the stack a few
> times would avoid that job!
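
(For concreteness on the "big local array" theme above, here is a minimal
sketch of the usual fix, assuming Xen's xmalloc_array()/xfree() heap helpers;
do_big_scan() and NR_SCAN_ENTRIES are made-up names, not code from the tree.)

    /* Illustrative sketch only: moving a large on-stack array in a
     * hypervisor routine onto the heap.  Names are hypothetical;
     * xmalloc_array()/xfree() are assumed to be Xen's allocators. */
    #include <xen/xmalloc.h>
    #include <xen/errno.h>

    #define NR_SCAN_ENTRIES 512

    static int do_big_scan(void)
    {
        /* unsigned long buf[NR_SCAN_ENTRIES];  <- 4kB of stack on a
         * 64-bit build, half of an 8kB primary stack.  Allocate it
         * dynamically instead: */
        unsigned long *buf = xmalloc_array(unsigned long, NR_SCAN_ENTRIES);
        int rc = 0;

        if ( buf == NULL )
            return -ENOMEM;

        /* ... use buf[] exactly as the on-stack array would be used ... */

        xfree(buf);
        return rc;
    }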

George,

In the PoD case I think only p2m_pod_zero_check_superpage() needs to be
changed. I'm not actually clear why a separate sweep needs to be done for
guest superpage mappings? Couldn't they get handled by p2m_pod_zero_check()?
Can you not PoD-reclaim a subset of MFNs backing a guest super-GFN?

In any case it could map the individual pages one after the other and check
them. That would reduce pressure on map_domain_page() too (currently it
could probably fail and crash the system on x86_32). And then the 512-entry
arrays would not be needed.
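
(A rough sketch of that per-page approach, assuming the
map_domain_page(mfn)/unmap_domain_page() interface of this era;
check_superpage_zero() and the zero test it performs are illustrative
stand-ins, not the real PoD reclaim logic.)

    /* Illustrative sketch only: scan a 2MB guest superpage one 4kB page
     * at a time, holding a single map_domain_page() mapping at any given
     * moment instead of a 512-entry local array of mappings. */
    #include <xen/domain_page.h>
    #include <xen/mm.h>

    static int check_superpage_zero(unsigned long base_mfn)
    {
        unsigned long i, j, nwords = PAGE_SIZE / sizeof(unsigned long);

        for ( i = 0; i < 512; i++ )      /* 512 * 4kB = one 2MB superpage */
        {
            unsigned long *p = map_domain_page(base_mfn + i);

            for ( j = 0; j < nwords; j++ )
                if ( p[j] != 0 )
                    break;

            unmap_domain_page(p);

            if ( j != nwords )
                return 0;                /* non-zero data found */
        }

        return 1;                        /* whole superpage is zero */
    }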

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel