
Re: [Xen-devel] [Linux PATCH] Fix to hugepages to work around new PWT handling


  • To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
  • From: Dave McCracken <dcm@xxxxxxxx>
  • Date: Thu, 24 Jun 2010 17:38:47 -0500
  • Cc: Xen Developers List <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 24 Jun 2010 15:39:50 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Thursday, June 24, 2010, Jeremy Fitzhardinge wrote:
> > Um, this is the upper level code.  The entire purpose of make_huge_pte is
> > to  construct a present huge pte from page and pgprot. The problem is
> > that the original code makes the pte, then sets the present bit via
> > pte_mkhuge().  This means the Xen-specific macro that triggers on
> > present is misled and doesn't do the pfn_to_mfn().  Without this patch
> > hugepages is handing pfns to the hypervisor to map instead of mfns.
> >
> >   
> 
> In principle, setting present should cause the pte to be converted from
> pfn to mfn, but I don't think that ever happens with normal ptes (since
> non-present ptes contain swap info).  But I don't see where a huge pte
> gets present set; pte_mkhuge itself doesn't do anything except set PSE.

Wow.  I just dug through the code.  The landscape has sure changed since the 
last time I followed this path.
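Here's roughly what the relevant bits look like now.  I'm paraphrasing from
memory rather than quoting the exact source, so treat this as a sketch:

/* arch/x86/include/asm/pgtable.h: pte_mkhuge() only ORs in _PAGE_PSE.
 * It does not touch _PAGE_PRESENT at all. */
static inline pte_t pte_mkhuge(pte_t pte)
{
        return pte_set_flags(pte, _PAGE_PSE);
}

/* arch/x86/xen/mmu.c: the pfn-to-mfn translation only fires when the
 * pte value being constructed already has _PAGE_PRESENT set. */
static pteval_t pte_pfn_to_mfn(pteval_t val)
{
        if (val & _PAGE_PRESENT) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;

                val = ((pteval_t)pfn_to_mfn(pfn) << PAGE_SHIFT) | flags;
        }
        return val;
}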

It used to be that vma->vm_page_prot only contained the various read/write
flags for that vma.  At that time pte_mkhuge() did in fact add
_PAGE_PRESENT|_PAGE_PSE to the pte.

Now it appears that vma->vm_page_prot does include _PAGE_PRESENT in all of its
variants, so the pte that mk_pte() builds is already present (and already
pfn-to-mfn converted on Xen).  This part of the patch is in fact unnecessary.
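A minimal sketch of the flow in make_huge_pte(), again paraphrased rather
than copied from the exact source:

/* mm/hugetlb.c (sketch): mk_pte(page, vma->vm_page_prot) already yields a
 * present pte because vm_page_prot carries _PAGE_PRESENT, so on Xen the
 * pfn-to-mfn conversion happens right there ... */
static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
                           int writable)
{
        pte_t entry;

        if (writable)
                entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
        else
                entry = huge_pte_wrprotect(mk_pte(page, vma->vm_page_prot));

        entry = pte_mkyoung(entry);
        /* ... and pte_mkhuge() only needs to OR in _PAGE_PSE afterwards. */
        entry = pte_mkhuge(entry);

        return entry;
}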

That's what I get for not rechecking my facts to make sure they haven't changed.
Sorry.

Dave McCracken
Oracle Corp.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

