
To: "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>
Subject: [Xen-ia64-devel] RE: Implementing both (was: Xen/ia64 - global or per VP VHPT)
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Thu, 5 May 2005 07:46:13 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 04 May 2005 23:46:00 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVPb3JXMihcD89IQYupUNJEM3YWVAAEihcQAAFcjdAABKuvsAAVXHAgAAAcwiAAMEVo4AAFW6ygAA6n57A=
Thread-topic: Implementing both (was: Xen/ia64 - global or per VP VHPT)

Hi, Dan:
        Thanks for sharing what you are thinking. See my comments below.
Magenheimer, Dan (HP Labs Fort Collins) wrote:
> Thanks for the explanation.  Will the foreignmap only
> be needed for Domain0 then?  How frequent will
> the foreignmap be used (and will it be used with
> high temporal locality)?  The reason I am asking
> these questions is that what I had planned for
> domain0 to access domU guest-physical addresses
> is as follows:
I would say it will be very frequent, as all I/O from the other domains
will go to the device model in the service domain.
> 
> - Domain0 is currently direct-mapped, meaning it
>   can access any machine-physical address simply
>   by adding a constant (0xf000000000000000) to
>   the machine-physical address.
>   (This is in the current implementation.)
> - Domain0 is trusted.  If domain0 accesses any
>   virtual address between 0xf000000000000000 and
>   0xf100000000000000, the miss handler direct
>   maps this to the corresponding machine-physical
>   address with no restrictions.
>   (This is in the current implementation.)
> - Given the domid and the guest-physical address
>   for any domU, with a simple dom0 hypercall,
>   dom0 can ask for the machine-physical address
>   corresponding to [domid,guest-physical],
>   add 0xf000000000000000 to it and directly
>   access it.  (The hypercall doesn't exist
>   yet, but the lookup mechanism is in the
>   current implementation.)
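
If I understand the proposal, in rough pseudo-C it would look like the
sketch below (the hypercall and the names here are made up, only to show
the flow you describe):

/* Minimal sketch, not a real API: "hypercall_gpa_to_mpa" stands in for
 * the dom0 hypercall you propose. */
#include <stdint.h>

#define DOM0_DIRECTMAP_BASE 0xf000000000000000UL  /* dom0 identity offset */

/* Hypothetical hypercall: machine-physical address backing
 * [domid, guest-physical]. */
extern uint64_t hypercall_gpa_to_mpa(uint32_t domid, uint64_t gpa);

/* dom0 reads one byte of a domU page, assuming the access stays inside
 * a single machine-contiguous 16K page. */
static uint8_t dom0_read_domu_byte(uint32_t domid, uint64_t gpa)
{
    uint64_t mpa = hypercall_gpa_to_mpa(domid, gpa);
    volatile uint8_t *va =
        (volatile uint8_t *)(DOM0_DIRECTMAP_BASE + mpa);
    return *va;
}
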
If domain N's physical memory is all contiguous, that will probably
work, but when it is discontiguous, I am afraid it doesn't. Suppose the
device model uses rid_x and a va range from 0 to 64G to map domain N's
physical memory (the HV always presents a contiguous guest-physical
space to domain N), but that guest-physical memory is discontiguous in
machine memory. How can the device model then access it simply with
va -> gpa + 0xf000000000000000?
By the way, we don't want to add complexity to the device model to
handle the discontiguous memory issue (let the HV do it), and we'd
better keep the device model unchanged from what it looks like on IA32
today.
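
To be concrete, without HV help the device model itself would have to
do a per-page translation on every access, roughly like this (the table
and the names below are only illustrative):

/* Illustration only: every access that crosses a 16K page needs its
 * own gpa->mpa translation; "gpa_to_mpa_table" and the constants are
 * made up. */
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 14                       /* 16K pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1)
#define DIRECTMAP  0xf000000000000000UL

extern uint64_t gpa_to_mpa_table[];         /* one entry per guest page */

static void dm_copy_from_guest(void *dst, uint64_t gpa, uint64_t len)
{
    while (len) {
        uint64_t off   = gpa & PAGE_MASK;
        uint64_t chunk = PAGE_SIZE - off;
        if (chunk > len)
            chunk = len;
        uint64_t mpa = gpa_to_mpa_table[gpa >> PAGE_SHIFT] + off;
        memcpy(dst, (void *)(DIRECTMAP + mpa), chunk);
        dst  = (uint8_t *)dst + chunk;
        gpa += chunk;
        len -= chunk;
    }
}

That is exactly the complexity we would rather leave to the HV.
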
> 
> As for putting large pages (e.g. 256MB) in the
> VHPT, yes they may take up many (16Kx16K) entries.
> Insertion is done "on demand", meaning each
> 16K page is put in the VHPT when it is accessed
> rather than putting all 16,384 individual
> mappings in the VHPT at once.
If you keep a vTLB for this 256MB mapping, yes, you can do insertion on
demand; otherwise the HV has completely lost the information, so how
could it work?
Actually, insertion on demand is exactly what my design/implementation
does now, with the help of the vTLB. That is one reason I would like
you to use my current implementation: it saves the effort of reworking
the current implementation to support all of this :)
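
Roughly, the miss path I mean looks like the sketch below (all the
routine names are placeholders, not the real code):

/* Rough sketch only; vtlb_lookup(), pmt_translate() and vhpt_insert()
 * stand in for the real routines in my implementation. */
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 14          /* VHPT entries are inserted at 16K */

struct vtlb_entry {
    uint64_t vpn;              /* guest virtual page number (16K units)  */
    uint64_t gppn;             /* guest-physical page number (16K units) */
    uint64_t ps;               /* page size of the guest mapping, e.g. 256MB */
};

extern bool vtlb_lookup(uint64_t rid, uint64_t va, struct vtlb_entry *e);
extern uint64_t pmt_translate(uint64_t gppn);    /* gppn -> machine ppn */
extern void vhpt_insert(uint64_t rid, uint64_t va, uint64_t mppn);
extern void reflect_tlb_miss_to_guest(uint64_t va);

void vhpt_miss(uint64_t rid, uint64_t va)
{
    struct vtlb_entry e;

    if (!vtlb_lookup(rid, va, &e)) {
        /* the guest never inserted a translation: reflect the miss */
        reflect_tlb_miss_to_guest(va);
        return;
    }

    /* Only the 16K machine page actually touched goes into the VHPT,
     * even if the guest mapping recorded in the vTLB covers 256MB. */
    uint64_t gppn = e.gppn + ((va >> PAGE_SHIFT) - e.vpn);
    uint64_t mppn = pmt_translate(gppn);
    vhpt_insert(rid, va, mppn);
}
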
Yes, I know it is hard for you to make a judgement before you see the
code. I hope you can get it as early as possible.
> 
> But this is necessary anyway (at least in the
> current implementation) because a domU guest
> may be entirely fragmented; every 16K of
> guest-physical memory may reside in a different,
> non-contiguous 16K of machine-physical memory.
> And, even worse, this mapping may change
> dynamically because of ballooning (of course
> requiring a TLB/VHPT flush if it changes).
That is no problem with the help of the vTLB + PMT (or page table).
From my point of view, the VHPT is only an assist to the vTLB for
performance. So besides the VHPT, the vTLB is very important in my
view.
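
A rough sketch of what I mean for the ballooning case (again, all names
are illustrative, not the real code):

/* The PMT is the single source of truth for gpa->mpa; changing it just
 * invalidates the state derived from it in the vTLB and the VHPT. */
#include <stdint.h>

extern uint64_t *domain_pmt(uint32_t domid);   /* per-domain gppn -> mppn */
extern void vtlb_purge_gppn(uint32_t domid, uint64_t gppn);
extern void vhpt_flush_domain(uint32_t domid); /* or a targeted purge */

void pmt_update(uint32_t domid, uint64_t gppn, uint64_t new_mppn)
{
    domain_pmt(domid)[gppn] = new_mppn;

    /* Any translation derived from the old machine page is now stale:
     * drop it from the vTLB and the VHPT before the guest can use it. */
    vtlb_purge_gppn(domid, gppn);
    vhpt_flush_domain(domid);
}
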
> 
> Dan
> 
Eddie

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel