
Re: [Xen-devel] Populate-on-demand memory problem


  • To: Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Tue, 27 Jul 2010 14:10:57 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 27 Jul 2010 06:11:49 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hmm, looks like I neglected to push a fix upstream.  Can you test with
the attached patch and tell me whether it fixes your problem?

 -George

On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
<dietmar.hahn@xxxxxxxxxxxxxx> wrote:
> Hi list,
>
> we ported our system from Novell SLES11 with xen-3.3 to SLES11 SP1 with
> xen-4.0 and ran into some trouble with the populate-on-demand (PoD) code.
> We have an HVM guest and were already starting it with target_mem <
> max_mem on the old version.
> With the new Xen version we get:
> (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
> I did some code review and looked at the PoD patches
> (http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
> to understand the behavior. We use the following configuration:
> maxmem = 4096
> memory = 3096
> What I see is:
>  - our guest boots with an e820 map showing maxmem.
>  - reading memory/target from xenstore returns '3170304', i.e. 3096 MB or 792576 pages
> Now our guest consumes the target memory and gives the remaining 1000 MB
> back to the hypervisor via the XENMEM_decrease_reservation hypercall,
> roughly as sketched below.
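> A simplified sketch of the release path (balloon-driver style; error
> handling omitted, and 'pfns'/'nr_pages' are placeholders for our real
> bookkeeping):
>
>   struct xen_memory_reservation reservation = {
>       .extent_order = 0,          /* single 4 KiB pages */
>       .mem_flags    = 0,
>       .domid        = DOMID_SELF,
>   };
>   long rc;
>
>   /* pfns[] holds the guest frame numbers we are giving back */
>   set_xen_guest_handle(reservation.extent_start, pfns);
>   reservation.nr_extents = nr_pages;
>   rc = HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
>   /* rc == nr_pages on success; each pfn's p2m entry becomes invalid */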
>
> Later I try to map the complete domU memory into dom0 kernel space and here I
> get the 'Out of populate-on-demand memory' crash.
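> (We map from a dom0 kernel driver, but the equivalent libxc userspace
> loop below hits the same hypervisor path, since merely mapping a PoD
> gfn forces p2m_pod_demand_populate(). 'domid' and 'max_gfn' are
> placeholders.)
>
>   #include <xenctrl.h>
>   #include <sys/mman.h>
>
>   int xc = xc_interface_open();
>   unsigned long gfn;
>
>   for ( gfn = 0; gfn < max_gfn; gfn++ )
>   {
>       void *p = xc_map_foreign_range(xc, domid, XC_PAGE_SIZE,
>                                      PROT_READ, gfn);
>       if ( p != NULL )
>           munmap(p, XC_PAGE_SIZE); /* the mapping itself populates it */
>   }
>   xc_interface_close(xc);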
>
> As far as I understand (ignoring the p2m_pod_emergency_sweep)
> - on populating a page
>   - the page is taken from the pod cache
>   - p2md->pod.count--
>   - p2md->pod.entry_count--
>   - page gets type p2m_ram_rw
> - decreasing a page
>   - p2md->pod.entry_count--
>   - page gets type p2m_invalid
>
> So if the guest has used all the target memory and given back all the
> (maxmem - target) memory, both p2md->pod.count and p2md->pod.entry_count
> should be zero.
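> In (simplified) code, the model I have from xen-4.0's
> arch/x86/mm/p2m.c is roughly this (not the literal source):
>
>   /* demand-populate path: a PoD entry is touched */
>   page = p2m_pod_cache_get(d, order);   /* page comes from PoD cache */
>   p2md->pod.count -= 1 << order;
>   p2md->pod.entry_count -= 1 << order;
>   set_p2m_entry(d, gfn, mfn, order, p2m_ram_rw);
>
>   /* decrease_reservation path: guest returns a PoD entry */
>   p2md->pod.entry_count -= 1 << order;  /* no page was ever allocated */
>   set_p2m_entry(d, gfn, _mfn(INVALID_MFN), order, p2m_invalid);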
> I added some tracing to the hypervisor and see at guest start:
> p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
> This pod.count is lower than the target seen in the guest!
> On the first call of p2m_pod_demand_populate() I can see:
> p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
> So pod.entry_count = 1048064 (4096 MB) corresponds to maxmem, but
> pod.count = 791264 is lower than the target memory in xenstore.
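> Spelling out the arithmetic (4 KiB pages):
>   1048064 pages * 4 KiB = 4094 MiB, i.e. maxmem = 4096 MB minus a few
>   hundred pages of holes, while
>   792576 (xenstore target) - 791264 (pod.count) = 1312 pages, so the
>   PoD cache is ~5 MiB smaller than the target.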
>
> Any help is welcome!
> Thanks.
> Dietmar.
>
> --
> Company details: http://ts.fujitsu.com/imprint.html
>

Attachment: 20091111-pod-domain-build-math-error.diff
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

