
RE: [Xen-devel] Re: [PATCH] Fixed legacy issues when extends number of vcpus > 32



>> Keir Fraser wrote:
>>> Let me think about these. For patch 1 I think we can perhaps do more
>>> work in the loop which matches vlapic identifiers, and thus avoid
>>> needing a "temporary cpumask" to remember matches. For patch 2 I've
>>> been intending to throw away the VMX VPID logic and share the SVM
>>> logic, as it flushes TLBs no more than the VMX logic and doesn't
>>> suffer the same problems with VPID/ASID exhaustion.
>>
>> We have 2^16 VPIDs after removing the limit, so it should support 65535 vCPUs
>> running concurrently in a system, so we don't need to consider the exhaustion
>> case from this point of view, do we?
>
>Why have two sets of logic when one is superior to the other? It doesn't
>make sense. I'll take a look at your patch and apply it for now, however.

On the hardware side, the key difference is that the VMX VPID space is very large:
2^16 VPIDs, with 0 reserved for VMX root mode, so 65535 VPIDs can be assigned to
VMX vCPUs. We use a bitmap to manage the VMX VPID space globally: freed VPIDs are
reclaimed and reused later.
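
For illustration, a minimal sketch of such a bitmap-based global VPID allocator
could look like the following. The names (VMX_VPID_MAX, vpid_alloc, vpid_free)
are made up for this example and locking is omitted, so this is not the actual
Xen implementation:

    #include <stdint.h>

    #define VMX_VPID_MAX   (1u << 16)    /* 2^16 VPIDs; VPID 0 is reserved */
    #define BITS_PER_WORD  64u

    static uint64_t vpid_bitmap[VMX_VPID_MAX / BITS_PER_WORD];

    /* Allocate the lowest free VPID, or return 0 on exhaustion
     * (0 is never handed out, as it is reserved for VMX root mode). */
    static uint16_t vpid_alloc(void)
    {
        uint32_t vpid;

        for ( vpid = 1; vpid < VMX_VPID_MAX; vpid++ )
        {
            uint64_t mask = 1ull << (vpid % BITS_PER_WORD);

            if ( !(vpid_bitmap[vpid / BITS_PER_WORD] & mask) )
            {
                vpid_bitmap[vpid / BITS_PER_WORD] |= mask;
                return (uint16_t)vpid;
            }
        }
        return 0;                        /* all 65535 VPIDs in use */
    }

    /* Return a VPID to the pool so another vCPU can reuse it later. */
    static void vpid_free(uint16_t vpid)
    {
        vpid_bitmap[vpid / BITS_PER_WORD] &= ~(1ull << (vpid % BITS_PER_WORD));
    }

A real allocator would of course hold a lock around the bitmap and remember the
last allocation point instead of scanning from 1 every time.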

If I understand correctly, Xen manages SVM ASIDs per LP, so Xen needs to
allocate a new ASID on the target LP after each vCPU migration. To speed up
ASID allocation after each migration, Xen doesn't use a bitmap to reclaim
freed ASIDs; instead, when ASID exhaustion happens on an LP, it just performs a
TLB flush and forces each vCPU on that LP to regenerate an ASID.
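
A rough sketch of that per-LP, generation-based scheme is below; the type and
function names (asid_core, vcpu_asid, flush_this_cpu_tlb) are hypothetical
stand-ins for the real Xen code and only illustrate the idea:

    #include <stdint.h>

    /* Illustrative per-LP (per-physical-CPU) ASID state. */
    struct asid_core {
        uint64_t generation;    /* bumped on every exhaustion/flush        */
        uint32_t next_asid;     /* next ASID to hand out on this LP        */
        uint32_t max_asid;      /* hardware ASID limit on this LP          */
    };

    struct vcpu_asid {
        uint64_t generation;    /* generation this vCPU's ASID came from   */
        uint32_t asid;
    };

    /* Stand-in for the real per-CPU TLB flush (illustrative only). */
    static void flush_this_cpu_tlb(void) { }

    /* Give the vCPU a valid ASID for this LP, regenerating it if the vCPU
     * just migrated here (stale generation) or the LP ran out of ASIDs. */
    static void asid_handle_vmentry(struct asid_core *core, struct vcpu_asid *v)
    {
        /* Fast path: the ASID still belongs to the current generation. */
        if ( v->generation == core->generation )
            return;

        /* Exhaustion: flush the TLB and start a new generation, which
         * implicitly invalidates the ASIDs of every vCPU on this LP. */
        if ( core->next_asid > core->max_asid )
        {
            flush_this_cpu_tlb();
            core->generation++;
            core->next_asid = 1;         /* ASID 0 reserved for the host */
        }

        v->asid = core->next_asid++;
        v->generation = core->generation;
    }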

I'd agree that, as the VPID space is per LP, it doesn't need to be managed
globally. If we managed such a big VPID space with a bitmap on each LP, it would
require quite a bit of memory and be inefficient for VPID allocation and reclaim.
So we can probably apply the current ASID allocation approach to VPIDs, assuming
VPID exhaustion will be much rarer.

On the other hand, why do we need to consider the overflow of generations?

Thanks!
-Xin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
