
To: <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-ia64-devel] machine ITM = nearest guest ITM vs. full guest time virtualization
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Sat, 30 Apr 2005 16:47:17 +0800
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sat, 30 Apr 2005 08:47:19 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVNYTbiS1g8dJcOT9aDwf6uWGSJAw==
Thread-topic: machine ITM = nearest guest ITM vs. full guest time virtualization
Dan:
        For the guest time (vtimer) implementation, the current approach is to set the machine ITM to the nearest guest ITM (or the HV's own next ITM) and to set the machine ITC to the guest ITC. Yes, that may have benefits when the number of guest domains is small, but how about my full virtualization suggestion?
        1: Each VP keeps an internal data structure that includes at least the vITM and an offset from the machine ITC to the guest ITC. This offset is updated whenever the guest sets the ITC (thus guest ITC = machine ITC + offset).
        2: Each time the guest sets the ITM while guest ITC < guest ITM and the vITV is enabled, we add a vtime_ac_timer for notification.
        3: When this vtime_ac_timer expires, its callback function calls vcpu_pend_interrupt to pend the vTimer IRQ.
        4: In this way the machine ITC/ITM remains fully available to the HV.
        5: When the VP is scheduled out, the vtime_ac_timer should be removed to keep the ac_timer list short and improve scalability.
        6: When the VP is scheduled in, the VMM checks whether the timer came due while the VP was descheduled; if so, it injects the guest timer IRQ, otherwise it re-adds the vtime_ac_timer. (Rough sketches of these steps follow below.)
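
        To make points 1-3 more concrete, here is a rough C sketch of the per-VP vtimer state and the ITM-write path. It is only an illustration under assumptions: the headers, the v->arch.vtm field, struct vtime, and the helpers vitv_enabled(), itv_vector() and guest_cycles_to_stime() are hypothetical, while set_ac_timer(), NOW(), ia64_get_itc() and vcpu_pend_interrupt() are assumed to behave as the existing interfaces referred to above.

/*
 * Sketch of points 1-3.  The per-VP timer is assumed to have been
 * bound to vtm_timer_fn() with init_ac_timer() at VP creation time.
 */
#include <xen/types.h>       /* assumed Xen headers of this era */
#include <xen/sched.h>
#include <xen/ac_timer.h>

typedef struct vtime {
    uint64_t        vitm;        /* guest ITM: next guest timer target   */
    uint64_t        vitv;        /* guest ITV: vector and mask bit       */
    int64_t         itc_offset;  /* guest ITC = machine ITC + itc_offset */
    struct ac_timer timer;       /* the per-VP vtime_ac_timer            */
} vtime_t;

/* Point 1: refresh the offset whenever the guest writes ITC. */
static void vtm_set_itc(struct vcpu *v, uint64_t guest_itc)
{
    v->arch.vtm.itc_offset = guest_itc - ia64_get_itc();
}

/* Point 2: arm the per-VP timer when the guest writes ITM. */
static void vtm_set_itm(struct vcpu *v, uint64_t guest_itm)
{
    vtime_t  *vtm       = &v->arch.vtm;
    uint64_t  guest_itc = ia64_get_itc() + vtm->itc_offset;

    vtm->vitm = guest_itm;
    if (!vitv_enabled(vtm->vitv) || guest_itc >= guest_itm)
        return;                  /* vITV masked or target already passed */

    set_ac_timer(&vtm->timer,
                 NOW() + guest_cycles_to_stime(guest_itm - guest_itc));
}

/* Point 3: the callback only pends the guest's timer interrupt. */
static void vtm_timer_fn(void *data)
{
    struct vcpu *v = data;
    vcpu_pend_interrupt(v, itv_vector(v->arch.vtm.vitv));
}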
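
        For points 5 and 6, the corresponding schedule-out/schedule-in hooks might look like the following, again using the same hypothetical names:

/* Point 5: drop the per-VP timer from the LP's ac_timer list. */
static void vtm_save(struct vcpu *v)
{
    rem_ac_timer(&v->arch.vtm.timer);
}

/* Point 6: on schedule-in, inject a missed tick or re-arm the timer. */
static void vtm_restore(struct vcpu *v)
{
    vtime_t  *vtm       = &v->arch.vtm;
    uint64_t  guest_itc = ia64_get_itc() + vtm->itc_offset;

    if (!vitv_enabled(vtm->vitv))
        return;

    if (guest_itc >= vtm->vitm)
        /* Came due while the VP was descheduled: pend the IRQ now. */
        vcpu_pend_interrupt(v, itv_vector(vtm->vitv));
    else
        /* Still in the future: re-add the vtime_ac_timer. */
        set_ac_timer(&vtm->timer,
                     NOW() + guest_cycles_to_stime(vtm->vitm - guest_itc));
}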

        Pros of the current implementation:
        1: The guest timer fires at a more accurate time.

        Cons:
        1: Serious scalability issue. If there are 16 VMs running, each with 4 VPs, that is 64 VPs all served by the shared machine timer, so the current implementation will see roughly 64 times more HV timer IRQs.
        2: If domain-N sets the ITC, I am afraid the current implementation is hard to handle.
        3: The HV jiffies are hard to track, including stime_irq, get_time_delta() and the Xen common macro NOW().

        Pros of full guest time virtualization:
        1: Good scalability. Each LP sees only one vtime_ac_timer pending in its ac_timer list (for the VP currently running on it), no matter how many VMs exist.
        2: Seamless for domain0 and domain-N.

        Cons:
        1: The guest timer may fire a little later than the exact expected time.



        This approach can also be used for the x86 lsapic timer.
Eddie


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
