
Re: [Xen-devel] [PATCH 7/7] xen: Convert __page_to_mfn and __mfn_to_page to use typesafe MFN



Hi Andrew,

On 05/10/17 00:27, Andrew Cooper wrote:
On 04/10/2017 19:15, Julien Grall wrote:
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 093ebf1a8e..0753d03aac 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -104,11 +104,11 @@ static bool insert_11_bank(struct domain *d,
                             unsigned int order)
  {
      int res, i;
-    paddr_t spfn;
+    mfn_t smfn;
      paddr_t start, size;
 
-    spfn = page_to_mfn(pg);
-    start = pfn_to_paddr(spfn);
+    smfn = page_to_mfn(pg);
+    start = mfn_to_maddr(smfn);
      size = pfn_to_paddr(1UL << order);

Wouldn't it be cleaner to move this renaming into patch 1, along with an
extra set of undef/override, to be taken out here?  (perhaps not given
the rework effort?)

I moved the clean-up to patch #1 and added a temporary override that is dropped in this patch.
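
Something along these lines (a sketch of the temporary override, following the same pattern already used elsewhere in the tree; the exact hunk in the next version may differ):

    /* Override macros from asm/page.h to make them work with mfn_t */
    /* XXX: To be dropped by the last patch of the series */
    #undef page_to_mfn
    #define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
    #undef mfn_to_page
    #define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))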


D11PRINT("Allocated %#"PRIpaddr"-%#"PRIpaddr" (%ldMB/%ldMB, order %d)\n",
@@ -678,8 +678,8 @@ static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
 
      if ( xenpmu_data )
      {
-        mfn = domain_page_map_to_mfn(xenpmu_data);
-        ASSERT(mfn_valid(_mfn(mfn)));
+        mfn = _mfn(domain_page_map_to_mfn(xenpmu_data));

Seeing as you convert every(?) call to domain_page_map_to_mfn(), it
would be cleaner to change the return type while making the change.
I'd be happy for such a change to be folded into this patch, because
doing so would be by far the least disruptive way of making the change.

All but one of them are converted to _mfn(domain_page_map_to_mfn()). I have folded the return type conversion into this patch.
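
As a sketch (assuming the straightforward conversion; the parameter name is illustrative), the prototype becomes:

    mfn_t domain_page_map_to_mfn(const void *ptr);

so the caller above reduces to:

    mfn = domain_page_map_to_mfn(xenpmu_data);
    ASSERT(mfn_valid(mfn));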


+        ASSERT(mfn_valid(mfn));
          unmap_domain_page_global(xenpmu_data);
          put_page_and_type(mfn_to_page(mfn));
      }
@@ -1185,8 +1180,8 @@ int __mem_sharing_unshare_page(struct domain *d,
          return -ENOMEM;
      }
 
-    s = map_domain_page(_mfn(__page_to_mfn(old_page)));
-    t = map_domain_page(_mfn(__page_to_mfn(page)));
+    s = map_domain_page(page_to_mfn(old_page));
+    t = map_domain_page(page_to_mfn(page));
      memcpy(t, s, PAGE_SIZE);
      unmap_domain_page(s);
      unmap_domain_page(t);

This whole lot could turn into copy_domain_page()

I have added a patch at the beginning of the series to use copy_domain_page.
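
With copy_domain_page() (destination mfn_t first, then source), the whole map/memcpy/unmap dance above collapses to a single call:

    copy_domain_page(page_to_mfn(page), page_to_mfn(old_page));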


diff --git a/xen/arch/x86/pv/descriptor-tables.c 
b/xen/arch/x86/pv/descriptor-tables.c
index 81973af124..371221a302 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -25,12 +25,6 @@
  #include <asm/p2m.h>
  #include <asm/pv/mm.h>
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
  /*******************
   * Descriptor Tables
   */

If you're making this change, please take out the Descriptor Tables
comment like you do with I/O below, because the entire file is dedicated
to descriptor table support and it will save me one item on a cleanup
patch :).

It is dropped now.


diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index dd90713acf..9ccbd021ef 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -43,16 +43,6 @@
  #include "emulate.h"
  #include "mm.h"
-/* Override macros from asm/page.h to make them work with mfn_t */
-#undef mfn_to_page
-#define mfn_to_page(mfn) __mfn_to_page(mfn_x(mfn))
-#undef page_to_mfn
-#define page_to_mfn(pg) _mfn(__page_to_mfn(pg))
-
-/***********************
- * I/O emulation support
- */
-
  struct priv_op_ctxt {
      struct x86_emulate_ctxt ctxt;
      struct {
@@ -873,22 +873,22 @@ int kimage_build_ind(struct kexec_image *image, unsigned long ind_mfn,
      for ( entry = page; ;  )
      {
          unsigned long ind;
-        unsigned long mfn;
+        mfn_t mfn;
 
         ind = kimage_entry_ind(entry, compat);
-        mfn = kimage_entry_mfn(entry, compat);
+        mfn = _mfn(kimage_entry_mfn(entry, compat));

Again, modify the return type of kimage_entry_mfn() ?

I have added a patch at the beginning of the series to switch kimage/kexec to mfn_t.
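
The caller side then becomes (sketch, assuming kimage_entry_mfn() is converted to return mfn_t):

    mfn_t mfn = kimage_entry_mfn(entry, compat); /* no _mfn() wrapper needed */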


diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 45ca742678..8737ef16ff 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -273,8 +273,8 @@ void copy_page_sse2(void *, const void *);
  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va)))
-#define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
+#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))

l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))

+#define vmap_to_page(va)    __mfn_to_page(vmap_to_mfn(va))
  #endif /* !defined(__ASSEMBLY__) */

diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 542c0b3f20..8516a0b131 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -25,7 +25,7 @@
  typedef uint32_t pagesize_t;  /* like size_t, must handle largest PAGE_SIZE */
 
 #define IS_PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
-#define IS_VALID_PAGE(_pi)    mfn_valid(_mfn(page_to_mfn(_pi)))
+#define IS_VALID_PAGE(_pi)    mfn_valid(page_to_mfn(_pi))

/sigh  This is tautological.  The definition of a "valid mfn" in this
case is one for which we have frametable entry, and by having a struct
page_info in our hands, this is by definition true (unless you have a
wild pointer, at which point your bug is elsewhere).

IS_VALID_PAGE() is only ever used in assertions and never usefully, so
instead I would remove it entirely rather than trying to fix it up.

I would be happy to remove IS_VALID_PAGE in a patch at the beginning of the series if Konrad is happy with it.

I will probably send a new version today that keeps IS_VALID_PAGE, though I will mention the proposed removal in the patch to get feedback.
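
To illustrate the point above (sketch only, not a quote from the tmem code): any caller necessarily already holds a struct page_info pointer, so the assertion can never usefully fire:

    /* pg points into the frametable by construction, so
     * mfn_valid(page_to_mfn(pg)) is true unless pg is a wild pointer. */
    ASSERT(IS_VALID_PAGE(pg));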


As for TMEM itself (Julien: this by no means blocks the patch.  It is
more an observation for Konrad to see about fixing), I see that TMEM is
broken on x86 machines with more than 5TB of RAM, because it is not
legal to call page_to_virt() on a struct page_info allocated from the
domheap (which is why alloc_xenheap_page() returns a void *, and
alloc_domheap_page() specifically doesn't).  The easy fix for this is to
swap the allocation primitives over to using xenheap allocations, which
would remove the need for page_to_virt() and back.  A better fix would
be to not pass everything by virtual address (at which point retaining
use of the domheap is fine).
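
An illustrative sketch of the "easy fix" (variable names are made up; alloc_xenheap_page()/free_xenheap_page() are the real primitives):

    /* Today: domheap page, then page_to_virt() -- not legal above 5TB on x86 */
    struct page_info *pg = alloc_domheap_page(NULL, 0);
    void *obj = page_to_virt(pg);

    /* Easy fix: a xenheap allocation is already mapped, no page_to_virt() */
    void *obj2 = alloc_xenheap_page(); /* NULL on failure */
    /* ... use the buffer ... */
    free_xenheap_page(obj2);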

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

