This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-ia64-devel] RE: rid virtualization

To: "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>
Subject: [Xen-ia64-devel] RE: rid virtualization
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Sat, 3 Sep 2005 09:13:14 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sat, 03 Sep 2005 01:11:12 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-topic: rid virtualization

Magenheimer, Dan (HP Labs Fort Collins) wrote:
>>      Early this year, I remember we talked about VHPT
>> locality issues. The conclusion was that if the rid is
>> randomly allocated,
>> the VHPT entries will be fairly evenly distributed. After that I
>> noticed that you added vmMangleRID() to try to make the rid as
>> random as possible. The VTI code had similar code at that time
>> to switch rid bits.
>>      I also did a measurement at that time (in a VTI environment,
>> excluding metaphysical map entries) and found a disappointing result:
>> almost 70-80% of the VHPT entries are invalid, while the remaining
>> 20-30% of hot entries have long collision chains (some even have 30+
>> entries in a chain vs. an average of 1). That led me to think of RID
>> virtualization to solve this problem thoroughly, and now I am
>> planning to do that, covering both the global VHPT and the per-VP
>>      VHPT, although it is still in the design phase. What is your
>> suggestion on that? Or has anybody else already thought of this?
> Hi Eddie --
> First question: Will the VHPT distribution problem still exist when
> we are running multiple domains?
I think so. In the global VHPT case, multiple domains insert entries
from different guests.
The only difference is that the guests' rids differ in the high bits.

> Second question: Can the problem be fixed by improving the "mangling"
> code?  (I picked up this code from vBlades, but never really did
> a thorough analysis that it provided a good distribution.)
The VTI code tried this by choosing different swap algorithms, but there
was no significant difference;
they all stay in the 20-30% range.
> Third question: If we go to a new "random rid distribution" model,
> can this be designed with very little memory usage and ensure
> that "garbage collection" is efficient when domains migrate very
> dynamically?  I'd be concerned if, for example, we kept a 2^24 map
> of what domain owns what rid.
Yes, memory consumption is a concern; there is no free lunch. The exact
size of the
g2m_rid_map will depend on the VHPT size: the number of entries should
match the VHPT's.
Different approaches exist for the g2m_rid_map: we can choose a global
map, a per-domain map,
or a per-VP map. And rid recycling can be eager or lazy. For a global
map, vcpu migration
has no impact, but with a per-VP g2m_rid_map plus an eager rid-reuse
policy, vcpu migration
needs to recycle all rids used by that VP.
To address your concern, a global g2m_rid_map may be the first choice,
although our design should
cover more complicated situations.
> I'd be eager to fix this problem as it may be contributing to
> the (small but still larger than I expected) slowdown running
> on top of Xen.
Me too. :-)
> Also, I'm fairly sure that the code to walk the collision
> chains in assembly has never been enabled?
It was previously enabled in the VTI branch; do you want us to move that
to the non-VTI branch too?
> Dan


