
Re: [Xen-devel] pvh dom0: memory leak from iomem map



On Thu, 5 Jun 2014 12:17:54 +0200
Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:

> On 05/06/14 01:32, Mukesh Rathor wrote:
> > On Wed, 04 Jun 2014 08:33:59 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> > 
> >>>>> On 04.06.14 at 03:29, <mukesh.rathor@xxxxxxxxxx> wrote:
> >>> Hi Tim,
> >>>
> >>> When building a dom0 pvh, we populate the p2m with 0..N pfns
> >>> upfront. Then in pvh_map_all_iomem, we walk the e820 and map all
> >>> iomem 1:1. As such any iomem range below N would cause those ram
> >>> frames to be silently dropped. 
> >>>
> >>> Since the holes could be pretty big, I am concerned this could
> >>> result in a significant loss of frames. 
> >>>
> >>> In my very early patches I had:
> >>>
> >>> set_typed_p2m_entry():
> >>> ...
.....
> 
> I'm quite sure I'm missing something here, but I don't see where those 
> pages are removed from the domheap page list (d->page_list). In fact 
> I've created a small debug patch to show that the pages displaced by
> the MMIO holes are still in the domheap list:

Ah, I see, you are reusing the pages by snooping into the M2P... 

            if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
                continue;

I guess that works since the M2P entry gets invalidated in
set_mmio_p2m_entry, and I think it's guaranteed that a frame whose M2P
entry is INVALID_M2P_ENTRY is always free to be reused. 

Ok, we are fine then.

thanks Roger,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
