
Re: [Xen-devel] [PATCH] linux/i386: relax highpte pinning

  • To: Jan Beulich <jbeulich@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Wed, 17 Jan 2007 14:51:36 +0000
  • Delivery-date: Wed, 17 Jan 2007 06:51:13 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acc6RvxlOvSMVKY6EduIGwAX8io7RQ==
  • Thread-topic: [Xen-devel] [PATCH] linux/i386: relax highpte pinning

I like the cleanups in this patch, including making PageForeign a generic
page flag (which it does indeed have to become).

However, I'm fundamentally confused about why you believe highptes have to be
explicitly pinned. They will be pinned automatically by mm_pin() when it
pins the pgdir. Populating a pmd entry of a pinned mm causes the pte page to
be pinned as well -- you don't need to do it explicitly.

So I'm not sure whether you're working around an issue I haven't foreseen,
or whether there's simply some confusion here.

 -- Keir

On 16/1/07 12:56, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> Just like making lowmem page tables read-only as late as possible, defer
> the pinning of highmem page tables.
> The use of PG_arch_1 for both PG_foreign and PG_pinned is no longer
> possible now, so the patch also splits PG_foreign off to use a separate
> bit.
> The patch additionally fixes a bug uncovered by the original highpte
> patch: using virt_to_page() or pte_offset_kernel() in the context of pte
> handling is inappropriate.
> The modifications to pmd_populate() include quite a bit of cleanup, so
> the new macro is only slightly larger than the old one (otherwise it
> would have roughly doubled in size).
> Finally (and I'm not insisting on this part, but I think it is
> appropriate), it adds PG_foreign and PG_pinned to the set of flags
> checked during page allocation and freeing.
