
Re: [Xen-devel] [PATCH v2 1/2] introduce GFN notification for translated domains



Hi Jan,

On 14/11/2019 16:43, Jan Beulich wrote:
In order for individual IOMMU drivers (and from an abstract pov also
architectures) to be able to adjust, ahead of actual mapping requests,
their data structures when they might cover only a sub-range of all
possible GFNs, introduce a notification call used by various code paths
potentially installing a fresh mapping of a never used GFN (for a
particular domain).

If I understand this correctly, this is mostly targeting IOMMU drivers where the page tables are not shared with the processor. Right?
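
For reference, the shape I have in mind from the description (and the v2
notes further down) is roughly the sketch below. The names are the ones
used by the patch, but the macro bodies are my guess only:

/* Sketch only, not the actual patch: both helpers end up as macros
 * because gfn_valid() lives in paging.h, which xen/sched.h can't
 * include. The error value is purely illustrative. */
#define arch_notify_gfn(d, gfn) \
    (gfn_valid(d, gfn) ? 0 : -EINVAL)                     /* x86 flavour */
#define notify_gfn(d, gfn) \
    (arch_notify_gfn(d, gfn) ?: iommu_notify_gfn(d, gfn))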


Note that before this patch, in gnttab_transfer(), once past
assign_pages(), further errors modifying the physmap are ignored
(presumably because it would be too complicated to try to roll back at
that point). This patch follows suit by ignoring failed notify_gfn()s or
races due to the need to intermediately drop locks, simply printing out
a warning that the gfn may not be accessible due to the failure.
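
Just to make sure I follow the fallback behaviour: past assign_pages() a
failure is only reported, along the lines of the sketch below (my wording
and variable names, not the actual hunk)?

    if ( notify_gfn(e, gfn) )
        gprintk(XENLOG_WARNING,
                "d%d: gfn %"PRI_gfn" may be inaccessible\n",
                e->domain_id, gfn_x(gfn));
    /* Failure deliberately not propagated -- the transfer already happened. */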

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Introduce arch_notify_gfn(), to invoke gfn_valid() on x86 (this
     unfortunately means it and notify_gfn() now need to be macros, or
     else include file dependencies get in the way, as gfn_valid() lives
     in paging.h, which we shouldn't include from xen/sched.h). Improve
     description.

TBD: Does Arm actually have anything to check against in its
      arch_notify_gfn()?

I understand that we want to keep the code mostly generic, but I am a bit concerned about the extra cost of calling notify_gfn() (and indirectly iommu_notify_gfn()) only for it to do nothing.

I can't see any direct use for this on Arm in the foreseeable future. So could we gate this behind a config option?
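
To give a better idea of what I mean, something along these lines (the
exact option is up for discussion -- maybe CONFIG_HAS_PASSTHROUGH already
fits) would keep the call from existing at all on configurations that can
never make use of it:

/* Illustration only -- not a concrete request for these exact names. */
#ifdef CONFIG_HAS_PASSTHROUGH
#define notify_gfn(d, gfn) \
    (arch_notify_gfn(d, gfn) ?: iommu_notify_gfn(d, gfn))
#else
#define notify_gfn(d, gfn) ((void)(d), (void)(gfn), 0)
#endif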


--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -173,7 +173,8 @@ static int __init pvh_populate_memory_ra
              continue;
          }
-        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
+        rc = notify_gfn(d, _gfn(start + (1UL << order) - 1)) ?:
+             guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
                                      order);
          if ( rc != 0 )
          {
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4304,9 +4304,17 @@ static int hvmop_set_param(
          if ( a.value > SHUTDOWN_MAX )
              rc = -EINVAL;
          break;
+
      case HVM_PARAM_IOREQ_SERVER_PFN:
-        d->arch.hvm.ioreq_gfn.base = a.value;
+        if ( d->arch.hvm.params[HVM_PARAM_NR_IOREQ_SERVER_PAGES] )
+            rc = notify_gfn(
+                     d,
+                     _gfn(a.value + d->arch.hvm.params
+                                    [HVM_PARAM_NR_IOREQ_SERVER_PAGES] - 1));
+        if ( !rc )
+             d->arch.hvm.ioreq_gfn.base = a.value;
          break;
+
      case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
      {
          unsigned int i;
@@ -4317,6 +4325,9 @@ static int hvmop_set_param(
              rc = -EINVAL;
              break;
          }
+        rc = notify_gfn(d, _gfn(d->arch.hvm.ioreq_gfn.base + a.value - 1));
+        if ( rc )
+            break;
          for ( i = 0; i < a.value; i++ )
              set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
@@ -4330,7 +4341,11 @@ static int hvmop_set_param(
          BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
                       sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
          if ( a.value )
-            set_bit(a.index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+        {
+            rc = notify_gfn(d, _gfn(a.value));
+            if ( !rc )
+                set_bit(a.index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+        }
          break;
      case HVM_PARAM_X87_FIP_WIDTH:
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -946,6 +946,16 @@ map_grant_ref(
          return;
      }
+    if ( paging_mode_translate(ld) /* && (op->flags & GNTMAP_host_map) */ &&

I think this wants an explanation in the code as to why the check is commented out.

Cheers,

--
Julien Grall
