[Xen-devel] PoD issue

To: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
Subject: [Xen-devel] PoD issue
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Fri, 29 Jan 2010 15:27:15 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
George,

Before diving deeply into the PoD code, I hope you have some idea that
might ease the debugging that's apparently going to be needed.

Following the comment immediately before p2m_pod_set_mem_target(),
there appears to be an inconsistency in the accounting: while the guest
in question properly balloons down to its intended target (1G, with a
maxmem setting of 2G), the combination of the equations

  d->arch.p2m->pod.entry_count == B - P
  d->tot_pages == P + d->arch.p2m->pod.count

doesn't hold (provided I interpreted the meaning of B correctly - I
took this from the guest balloon driver's "Current allocation" report,
converted to pages); there's a difference of over 13000 pages.
Obviously, as soon as the guest uses up enough of its memory, it
will get crashed by the PoD code.
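
To make that more concrete, below is a rough sketch of the kind of debug
check I have in mind (purely illustrative, not code from the tree;
pod_check_accounting is a made-up helper, and B would have to be supplied
from outside, e.g. from the balloon driver's "Current allocation" report,
since the hypervisor can't see it directly):

  #include <xen/sched.h>
  #include <asm/p2m.h>

  /* Hypothetical debug helper (not in the tree): check the identity from
   * the comment above p2m_pod_set_mem_target().  B is the guest's current
   * allocation in pages, taken from the balloon driver's report. */
  static void pod_check_accounting(struct domain *d, unsigned long B)
  {
      struct p2m_domain *p2m = d->arch.p2m;
      long P = (long)d->tot_pages - (long)p2m->pod.count;  /* populated pages */
      long delta = (long)p2m->pod.entry_count - ((long)B - P);

      if ( delta != 0 )
          printk("d%d PoD accounting off by %ld pages (entry_count=%ld, B-P=%ld)\n",
                 d->domain_id, delta, (long)p2m->pod.entry_count, (long)B - P);
  }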

In two runs I did, the difference (and hence the number of entries
reported in the eventual crash message) was identical, implying to
me that this is not a simple race, but rather a systematic problem.

Even in the initial dump taken (when the guest was sitting at the
boot manager screen), there already appears to be a difference of
800 pages (my understanding is that at this point the difference
between entries and cache should equal the difference between
maxmem and mem).
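
Just to spell out the arithmetic behind that expectation (a standalone
back-of-the-envelope check, assuming 4k pages and ignoring any build-time
overheads; not Xen code):

  #include <stdio.h>

  int main(void)
  {
      unsigned long maxmem_kb = 2UL * 1024 * 1024;   /* maxmem = 2G */
      unsigned long mem_kb    = 1UL * 1024 * 1024;   /* mem    = 1G */
      unsigned long page_kb   = 4;

      /* Expected difference between outstanding PoD entries and the PoD
       * cache while the guest hasn't yet touched its memory. */
      printf("expected entry_count - pod.count = %lu pages\n",
             (maxmem_kb - mem_kb) / page_kb);
      return 0;
  }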

Does this ring any bells? Any hints on how to debug this? In any case,
I'm attaching the full log in case you want to look at it.

Jan

Attachment: xen.log.1
Description: Binary data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel