WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] [PATCH 00/04] Kexec / Kdump: Release 20061122 (xen-unstable-12502)

To: "Ian Campbell" <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 00/04] Kexec / Kdump: Release 20061122 (xen-unstable-12502)
From: "Magnus Damm" <magnus.damm@xxxxxxxxx>
Date: Wed, 29 Nov 2006 13:30:33 +0900
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, Kazuo Moriwaka <moriwaka@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Akio Takebe <takebe_akio@xxxxxxxxxxxxxx>, Isaku Yamahata <yamahata@xxxxxxxxxxxxx>, Magnus Damm <magnus@xxxxxxxxxxxxx>, Horms <horms@xxxxxxxxxxxx>
Delivery-date: Tue, 28 Nov 2006 20:30:40 -0800
In-reply-to: <1164738244.3336.214.camel@xxxxxxxxxxxxxxxxxxxxx>
References: <20061122071050.24010.92547.sendpatchset@localhost> <1164738244.3336.214.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi Ian,

On 11/29/06, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> On Wed, 2006-11-22 at 16:10 +0900, Magnus Damm wrote:
> > [PATCH 00/04] Kexec / Kdump: Release 20061122 (xen-unstable-12502)
>
> I've been playing a bit more and found a problem.
>
> You define a per-CPU variable crash_notes, and on crash you loop over
> NR_CPUS and clear the notes for CPUs which don't exist. Unfortunately
> the percpu regions for CPUs which aren't physically present are returned
> to the heap on boot (see percpu_free_unused_areas) -- this means that
> you zero out heap pages on crash :-(

Ouch. Let's not do that then. =) I wondered why the data areas were
something other than just zero...

We need unique data areas for each CPU, online or not, and these areas
should be zero if the CPUs don't exist. This is because we export the
machine address and size of each note to dom0, which in turn exports
the ranges through /proc/iomem to user space.

In user space the kexec tool then builds an ELF header which points
out where the notes are located in machine address space (using
/proc/iomem). This header is then passed on to the secondary crash
kernel, which (for some reason) compacts all per-cpu PT_NOTE program
headers into one which will be present in the final vmcore image. At
this compacting stage we need to have data present for _all_ CPUs, and
the data for CPUs that are not present should contain just zeros.

This scheme should work for cpu hotplug as well.

> You need to use num_{possible,present,online}_cpus() in
> machine_crash_kexec() and kexec_get_cpu() instead of NR_CPUS.

But wouldn't that leave us with machine addresses in /proc/iomem that
point at heap data instead of notes? I think that using an array in
bss is a simple and good solution:

crash_note_t crash_notes[NR_CPUS];

How does that sound? With a comment explaining why we are not using per-cpu data, of course.

Thanks for reviewing!

/ magnus

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
