
Re: [Xen-devel] [PATCH v2 1/1] xen: move TLB-flush filtering out into populate_physmap



> But since what Dongli cares about at the moment is domain creation, it
> certainly won't hurt to limit this optimization to domain creation time;
> then we can discuss enabling it for ballooning when someone finds it to
> be an issue.

Thank you all very much for the feedback. The current limitation only
impacts VM creation time, unless someone balloons 100+ GB of memory.

To limit this optimization to domain creation time, what do you think about
adding the following rules to Xen and the toolstack? Are these rules
reasonable?

Rule 1. It is the toolstack's responsibility to set the "MEMF_no_tlbflush"
bit in memflags (see the two libxc hunks below). Toolstack developers should
take care that "MEMF_no_tlbflush" is never used after VM creation has
finished.

Rule 2. Xen should check at the hypervisor level that MEMF_no_tlbflush may
only be set when the current populate_physmap operation is initiated by
dom0. Otherwise, MEMF_no_tlbflush is masked out of memflags when ( d ==
curr_d && d->domain_id != 0 ), i.e. when a domain other than dom0 is
populating its own physmap, which is the ballooning path (see the first
hunk below). Therefore, this patch would not impact the security of memory
ballooning operations.

Would you please let me know whether the rules above are reasonable? Are
there any suggestions for a better solution? xc_dom_boot_mem_init is very
close to the end of VM creation, and I could not find a better place to
draw the line, unless we add a new XEN_DOMCTL operation to intentionally
notify Xen when xc_dom_boot_mem_init has finished, so that MEMF_no_tlbflush
can no longer be used. A rough sketch of that alternative follows below.
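For concreteness, such a DOMCTL could look roughly like the sketch below.
Everything in it is hypothetical: the subop name
XEN_DOMCTL_notify_mem_init_done, its number, and the creation_in_progress
field do not exist today and are only meant to illustrate the idea.

/* Hypothetical sketch only: the subop name/number and the struct domain
 * field below are invented for this discussion. */

/* xen/include/public/domctl.h */
#define XEN_DOMCTL_notify_mem_init_done  1000  /* number made up */

/* xen/common/domctl.c, inside the do_domctl() switch */
case XEN_DOMCTL_notify_mem_init_done:
    /*
     * The toolstack tells us xc_dom_boot_mem_init() has completed, so
     * populate_physmap() must refuse MEMF_no_tlbflush for this domain
     * from now on.
     */
    d->creation_in_progress = false;  /* invented field */
    ret = 0;
    break;

populate_physmap() would then mask MEMF_no_tlbflush whenever
d->creation_in_progress is false.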

I have pasted the patches implementing the above rules into this email, and
will send them out for review if the above solution works.


@@ -150,6 +152,11 @@ static void populate_physmap(struct memop_args *a)
                             max_order(curr_d)) )
         return;
 
+    /*
+     * MEMF_no_tlbflush is masked out if the current populate_physmap
+     * operation is not initiated by dom0.
+     */
+    if ( d == curr_d && d->domain_id != 0 )
+        a->memflags &= ~MEMF_no_tlbflush;
+
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
         if ( i != a->nr_done && hypercall_preempt_check() )

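To make the two call paths explicit, here is a minimal standalone sketch of
the same predicate the hunk above open-codes (the helper name is made up and
is not part of the patch):

/*
 * Illustrative helper only: returns true when the TLB flush may be
 * deferred under the proposed rules.
 */
static bool may_defer_tlbflush(const struct domain *d,
                               const struct domain *curr_d)
{
    /*
     * Domain creation: dom0's toolstack populates a *different* domain's
     * physmap, so d != curr_d and MEMF_no_tlbflush is honoured.
     *
     * Ballooning: a domU populates its own physmap, so d == curr_d and
     * d->domain_id != 0, and MEMF_no_tlbflush is masked out.
     */
    return !(d == curr_d && d->domain_id != 0);
}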

@@ -1185,7 +1185,7 @@ static int meminit_pv(struct xc_dom_image *dom)
         xen_pfn_t extents[SUPERPAGE_BATCH_SIZE];
         xen_pfn_t pfn_base_idx;
 
-        memflags = 0;
+        memflags = MEMF_no_tlbflush;
         if ( pnode != XC_NUMA_NO_NODE )
             memflags |= XENMEMF_exact_node(pnode);
 
@@ -1273,7 +1273,7 @@ static int meminit_hvm(struct xc_dom_image *dom)
     int rc;
     unsigned long stat_normal_pages = 0, stat_2mb_pages = 0, 
         stat_1gb_pages = 0;
-    unsigned int memflags = 0;
+    unsigned int memflags = MEMF_no_tlbflush;
     int claim_enabled = dom->claim_enabled;
     uint64_t total_pages;
     xen_vmemrange_t dummy_vmemrange[2];

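For context, both meminit_pv and meminit_hvm eventually hand memflags to
the populate-physmap hypercall, so the hint set above rides along with
every batch allocated during mem_init. A minimal sketch of that call path,
with illustrative arguments, is:

#include <xenctrl.h>

/* Illustrative only: one 4k page (order 0); memflags carries the hint. */
static int populate_one_page(xc_interface *xch, uint32_t domid,
                             xen_pfn_t pfn, unsigned int memflags)
{
    xen_pfn_t extent = pfn;

    return xc_domain_populate_physmap_exact(xch, domid,
                                            1 /* nr_extents */,
                                            0 /* extent_order */,
                                            memflags, &extent);
}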

Dongli Zhang


 

