
[Xen-devel] Huge perf degradation from missing xen_tlb_flush_all



Hi,

A customer experienced a huge degradation in migration performance when moving
from a 2.6.32-based dom0 to a 2.6.39-based dom0. We tracked it down to the
missing xen_tlb_flush_all() in the 2.6.39/pv-ops kernel.

To summarize, in 2.6.32 we had:

#define flush_tlb_all xen_tlb_flush_all

As a result, when xen_remap_domain_mfn_range() called flush_tlb_all(),
it made a hypercall to Xen:

void xen_tlb_flush_all(void)
{
        struct mmuext_op op;
        op.cmd = MMUEXT_TLB_FLUSH_ALL;
        /* One hypercall: Xen flushes the TLB on the relevant physical
         * CPUs itself, so the guest never has to IPI its vCPUs. */
        BUG_ON(HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) < 0);
}

Xen optimized the IPIs to only the relevant CPUs. But in the pvops/2.6.39
kernel, flush_tlb_all() will IPI every vCPU whether it is running or not:

void flush_tlb_all(void)
{
        /* Generic x86 implementation: IPI every online CPU, which under
         * Xen means an event channel notification per vCPU. */
        on_each_cpu(do_flush_tlb_all, NULL, 1);
}

This means every vCPU has to be scheduled just to receive the event channel
notification. With a large number of vCPUs the overhead is significant.
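
For reference, the work each vCPU wakes up to do is tiny; the per-CPU
handler in arch/x86/mm/tlb.c is roughly (from memory, so treat as a sketch):

static void do_flush_tlb_all(void *info)
{
        /* Flush this CPU's TLB; drop lazy mm if we were in lazy TLB mode. */
        __flush_tlb_all();
        if (percpu_read(cpu_tlbstate.state) == TLBSTATE_LAZY)
                leave_mm(smp_processor_id());
}

So almost all of the cost is in delivering the IPI and getting each idle
vCPU scheduled, not in the flush itself.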

It seems the best solution would be to restore xen_tlb_flush_all().
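
One possible shape, just as a sketch (not a tested patch): bring back a
Xen-specific flush in arch/x86/xen/mmu.c, mirroring the 2.6.32 code quoted
above, and call it from xen_remap_domain_mfn_range() instead of the generic
flush_tlb_all():

/* arch/x86/xen/mmu.c -- sketch only, name chosen to match the old code */
static void xen_flush_tlb_all(void)
{
        struct mmuext_op op;

        op.cmd = MMUEXT_TLB_FLUSH_ALL;
        BUG_ON(HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) < 0);
}

/* ... and in xen_remap_domain_mfn_range(), replace the final
 * flush_tlb_all() call with xen_flush_tlb_all(). */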

Thoughts?

thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

