RE: [Xen-ia64-devel] Xen is now running at Bull
Dan:
See my comments :-)
Magenheimer, Dan (HP Labs Fort Collins) wrote:
> Actually, it is conceivable that a single processor
> machine would want to support more than 64 VMs so
> it is very likely that a 48 processor machine would.
> The limit of 64 is on the number of domains that
> might be active, not necessarily simultaneously active.
>
> I don't think rid virtualization helps here as there
> are only 2**24 rids available and every domain will
> use as many as it is given. And there's no way to garbage
> collect rids from a domain. So -- I think -- the only ways
> to support more than 64 domains are:
> 1) Give each domain fewer than 2**18 rids, or
> 2) Flush the TLB whenever "necessary" when switching domains
The Itanium architecture guarantees that a guest will have at least
18 bits of RID. Giving a guest fewer than 18 bits would violate the spec
and break guest behavior, especially for the unmodified guests that Bull
is trying to run. I am also not sure how you can flush the TLB without a
big performance impact, or how you would determine the condition of
"necessary" here.
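To make the 64-domain ceiling concrete, here is a minimal sketch (illustration only, not Xen code; the macro and function names are mine) of the arithmetic behind it: the RID is 24 bits wide, the architecture lets a guest assume at least 18 bits, so a static partition of the RID space gives at most 2^24 / 2^18 = 64 disjoint blocks:

```c
#include <assert.h>

#define RID_BITS_TOTAL      24  /* architected RID width */
#define RID_BITS_PER_DOMAIN 18  /* minimum RID width a guest may assume */

/* Number of domains a static partition of the RID space can support. */
static unsigned long max_static_domains(void)
{
    return (1UL << RID_BITS_TOTAL) / (1UL << RID_BITS_PER_DOMAIN);
}
```

Supporting more than 64 domains therefore means either breaking the static partition (option 1) or reusing RID blocks with TLB flushes (option 2).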
>
> The frequency of "necessary" could be reduced by using
> a processor affinity hierarchy of some kind. I suppose
> this is a form of rid virtualization, but is more like
> an extension (e.g. like PAE).
>
> Dan
>
RID virtualization targets two problems:
1: The VHPT locality issue.
Matt's mangling algorithm may work for the single-VM case, if all
Itanium processors use the same hash algorithm, but it still has a
problem in the multi-VM situation: the high bits that distinguish
different VMs do not participate in the hash algorithm.
2: Letting the guest get the full 24 bits of RID.
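A toy model of the locality issue in point 1 (this is not Matt's actual algorithm; the hash, mask, and table size are made up for illustration): if the VHPT hash mixes in only the low bits of the RID, two domains whose RIDs differ only in the high per-VM bits hash every address to the same VHPT slots and fight over the same lines.

```c
#include <assert.h>

#define VHPT_SLOTS (1u << 16)   /* toy VHPT size */

/* Toy hash: ignores RID bits above bit 17, mimicking the locality
 * problem described above. */
static unsigned vhpt_hash(unsigned rid, unsigned long vpn)
{
    return ((rid & 0x3ffffu) ^ (unsigned)vpn) % VHPT_SLOTS;
}
```

With this hash, RIDs 0x00123 and 0x00123 | (0x2A << 18) collide on every virtual page number, which is exactly the multi-VM conflict described above.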
Garbage collecting RIDs is not very complicated. :-(
When a guest RID is no longer used, the corresponding physical RID
should be reclaimed. A simple way to determine when a guest RID is no
longer in use is to look at the VHPT table and the machine TLB side.
(Yes, a reference count needs to be introduced for each guest RID to
track its entries in the VHPT, and a special indicator in the guest TLB
to indicate whether there is a corresponding entry in the physical TLB.)
Is it time for me to share what we have thought about RID
virtualization in more detail?
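The reference-counting idea above could be sketched roughly like this (hypothetical structure and function names, not the actual Xen-ia64 code): each guest RID carries a count of its VHPT entries plus a flag saying whether any translation for it may still be live in the machine TLB; once the count reaches zero and the flag is cleared, the backing physical RID can be returned to the free pool.

```c
#include <assert.h>
#include <stdbool.h>

struct guest_rid {
    unsigned int vhpt_refs;     /* entries in the VHPT for this RID */
    bool         in_mach_tlb;   /* may still have entries in machine TLB */
    bool         phys_rid_allocated;
};

static void vhpt_insert(struct guest_rid *r) { r->vhpt_refs++; }

static void vhpt_remove(struct guest_rid *r)
{
    assert(r->vhpt_refs > 0);
    r->vhpt_refs--;
}

/* Called after the relevant purge has cleared this RID's machine
 * TLB entries. */
static void mach_tlb_purged(struct guest_rid *r) { r->in_mach_tlb = false; }

/* Reclaim the physical RID once nothing references it any more;
 * returns true if it was reclaimed. */
static bool try_reclaim_phys_rid(struct guest_rid *r)
{
    if (r->vhpt_refs == 0 && !r->in_mach_tlb) {
        r->phys_rid_allocated = false;  /* return RID to the free pool */
        return true;
    }
    return false;
}
```

This is just the bookkeeping skeleton; the real cost question is when and how often the purge and the reclaim check run.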
Eddie
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel