WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
RE: [Xen-devel] pre-reservation of memory for domain creation

To: Jan Beulich <JBeulich@xxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] pre-reservation of memory for domain creation
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Thu, 14 Jan 2010 15:16:20 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 13 Jan 2010 23:17:35 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4B4D8A9402000078000299C3@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4B4CA40C0200007800029657@xxxxxxxxxxxxxxxxxx> <C7724B76.6265%keir.fraser@xxxxxxxxxxxxx> <4B4CAEB9020000780002969A@xxxxxxxxxxxxxxxxxx> <6CADD16F56BC954D8E28F3836FA7ED7112A79326F2@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B4D8A9402000078000299C3@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcqUJde5GKxg8sBVSlebO3XYAC1M+QAs/SgA
Thread-topic: [Xen-devel] pre-reservation of memory for domain creation
Keir and Jan, 

    I am working on the issue "pre-reservation of memory for domain creation". 
Now I have the following findings.

    Currently, the guest initialization process in xend (XendDomainInfo.py) is: 

    _constructDomain() --> domain_create() --> domain_max_vcpus() ... -->
    _initDomain() --> shadow_mem_control() ...

    In domain_create, we previously reserved 1M of memory for domain creation 
(as described in the xend comment), and this memory SHOULD NOT be related to 
the vcpu number. Later, shadow_mem_control() resizes the shadow allocation to 
256 pages per vcpu (plus some other values related to the guest memory 
size...). Therefore c/s 20389, which changed the 1M to 4M to accommodate more 
vcpus, is wrong. I'm sorry for that. 

    Following is the reason why the current 1M doesn't work for a large number 
of vcpus and, as we mentioned, causes Xen to crash.

    Each time sh_set_allocation() is called, it checks whether 
shadow_min_acceptable_pages() pages have been allocated and, if not, allocates 
them. That is to say, it wants 128 pages per vcpu. But before d->max_vcpus is 
defined, no guest vcpu has been initialized, so 
shadow_min_acceptable_pages() always returns 0. Therefore we only allocate the 
1M of shadow memory in domain_create, which does not satisfy the 128 pages per 
vcpu needed by alloc_vcpu().

    As we know, vcpu allocation is done in the XEN_DOMCTL_max_vcpus hypercall. 
However, at that point we haven't called shadow_mem_control() yet and are 
still using the pre-allocated 1M of shadow memory to allocate all those vcpus, 
which is a BUG. Therefore, as the vcpu number increases, 1M is not enough and 
Xen crashes. C/S 20389 exposed this issue.

    So I think the right approach is: after d->max_vcpus is set and before 
alloc_vcpu(), call sh_set_allocation() to satisfy the 128 pages per vcpu. The 
following patch does this. Does it work for you? Thanks!

Best Regards,
-- Dongxiao


Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>

diff -r 13d4e78ede97 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Wed Jan 13 08:33:34 2010 +0000
+++ b/xen/arch/x86/mm/shadow/common.c   Thu Jan 14 14:02:23 2010 +0800
@@ -41,6 +41,9 @@
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
+static unsigned int sh_set_allocation(struct domain *d, 
+                                      unsigned int pages,
+                                      int *preempted);
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
 void shadow_domain_init(struct domain *d, unsigned int domcr_flags)
@@ -82,6 +85,12 @@ void shadow_vcpu_init(struct vcpu *v)
     }
 #endif
 
+    if ( !is_idle_domain(v->domain) )
+    {
+        shadow_lock(v->domain);
+        sh_set_allocation(v->domain, 128, NULL);
+        shadow_unlock(v->domain);
+    }
     v->arch.paging.mode = &SHADOW_INTERNAL_NAME(sh_paging_mode, 3);
 }
 
@@ -3099,7 +3108,7 @@ int shadow_enable(struct domain *d, u32 
     {
         unsigned int r;
         shadow_lock(d);                





Jan Beulich wrote:
>>>> "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> 13.01.10 03:34 >>>
>> If we didn't add this change, as Keir said, Xen will crash during
>> destruction of partially-created domain. 
> 
> If this indeed is reproducible, I think it should be fixed.
> 
>> However I didn't notice the toolstack and
>> shadow_min_acceptable_pages() side at that time... 
>> For now, should we adjust the shadow pre-alloc size to match
>> shadow_min_acceptable_pages() and modify toolstack accordingly? 
> 
> I would say so, just with the problem that I can't reliably say what
> "accordingly" here would be (and hence I can't craft a patch I can
> guarantee will work at least in most of the cases).  
> 
> And as said before, I'm also not convinced that using the maximum
> possible number of vCPU-s for this initial calculation is really the
> right thing to do.  
> 
> Jan

Attachment: shadow.patch
Description: shadow.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel