WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

Re: [Xen-devel] hvm domain crash

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] hvm domain crash
From: Karl Rister <kmr@xxxxxxxxxx>
Date: Mon, 25 Sep 2006 17:19:56 -0500
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 25 Sep 2006 15:21:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C138C8EE.1780%Keir.Fraser@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C138C8EE.1780%Keir.Fraser@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.3
After doing quite a bit of testing I came upon something repeatable.  With a 
4-way guest and 2GB it would sometimes work and sometimes not.  With 8-way and 
4GB it was much more consistent, and I was able to narrow it down to a single 
point: with 3840 MB I can boot without problems, but if I increase the memory 
to 3841 MB it will not boot.  Something interesting is that when I am only 
slightly over (< 6MB over) I actually get slightly different progress before 
crashing, as opposed to always crashing in the same place.  If I am way over 
(say I put in 4096) it always crashes after the "Freeing unused kernel memory: 
200k freed" message.  If I am closer (like 3841) I can actually get to the 
point of seeing a few init scripts run before it crashes.  At something like 
3844 it tends to crash right after udev, which immediately follows the 
"Freeing..." message.
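For what it's worth, the boot/no-boot threshold reported above lands exactly on a power-of-two-aligned physical-address boundary, and the CR3 value Keir flagged as "big" lies above 4 GiB.  A small sketch of the arithmetic (an observation only, not a diagnosis established in this thread):

```python
# Hedged arithmetic sketch: the 3840/3841 MB threshold and the CR3 value
# from the crash log, expressed in hex. 0xF0000000 is the base of the
# traditional below-4GB MMIO/PCI hole on x86, which is suggestive but
# not confirmed as the cause here.
MB = 1 << 20

boots_ok = 3840 * MB        # largest guest size reported to boot
fails    = 3841 * MB        # smallest guest size reported to crash

print(hex(boots_ok))        # 0xf0000000 -- exactly at the 3.75 GiB line
print(hex(fails))           # 0xf0100000 -- 1 MB past it

cr3 = 0x10f780000           # "Invalid CR3 value" from the crash log
print(cr3 > (1 << 32))      # True -- the CR3 points above 4 GiB
```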

Karl

On Thursday 21 September 2006 4:55 pm, Keir Fraser wrote:
> That's quite a big CR3 value. How much memory does this guest have?
>
>  -- Keir
>
> On 21/9/06 10:56 pm, "Karl Rister" <kmr@xxxxxxxxxx> wrote:
> > (XEN) Invalid CR3 value=10f780000domain_crash_sync called from vmx.c:1679
> > (XEN) Domain 5 (vcpu#1) crashed on cpu#4:
> > (XEN) ----[ Xen-3.0-unstable  x86_64  debug=n  Not tainted ]----
> > (XEN) CPU:    4
> > (XEN) RIP:    0010:[<ffffffff8017680c>]
> > (XEN) RFLAGS: 0000000000000293   CONTEXT: hvm
> > (XEN) rax: 000000010f780000   rbx: 0000000000000001   rcx: 0000000000000000
> > (XEN) rdx: ffff81010f780000   rsi: 0000000000000000   rdi: ffff81010fc6db5c
> > (XEN) rbp: ffffffff803f3000   rsp: ffff81010fc6fb48   r8:  0000000000000000
> > (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> > (XEN) r12: ffff81010fb39a80   r13: 0000000000000000   r14: ffff81010fc6d510
> > (XEN) r15: ffff81010fc66ac0   cr0: 000000008005003b   cr4: 00000000000006e0
> > (XEN) cr3: 000000015f4c1000   cr2: 0000000000000000
> > (XEN) ds: 0018   es: 0018   fs: 0000   gs: 0000   ss: 0018   cs: 0010
> >
> > The domain was running with 4 VCPUs and had previously completed the test
> > on a single VCPU and 2 VCPU configurations.  The domain was running a
> > baremetal 2.6.16.29 kernel.  Output from 'xm info' is:

-- 
Karl Rister
IBM Linux Performance Team
kmr@xxxxxxxxxx
(512) 838-1553 (t/l 678)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel