RE: [Xen-ia64-devel] machine ITM = nearest guest ITM vs. full guesttime virtualization
Magenheimer, Dan (HP Labs Fort Collins) wrote:
> I think you misunderstand the current Xen/ia64 timer
> implementation. It is a bit different than Xen/x86
Did you check vcpu_set_itc()? The current implementation sets the
guest ITC to the machine ITC and adjusts the guest ITM relative to it.
How can this work for multiple domains?
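
To make the offset bookkeeping concrete (it is step 1 of my proposal,
quoted below), here is a rough sketch of what I have in mind. The
structure and field names are illustrative only, not the current
Xen/ia64 code, and it assumes Xen's struct ac_timer plus an
ia64_get_itc() helper that reads the machine AR.ITC:

    #include <stdint.h>             /* in Xen this would be xen/types.h */

    /* Per-VP guest time state (proposal step 1); illustrative only. */
    struct vtime {
        uint64_t        vitm;       /* guest cr.itm as last written by guest */
        uint64_t        vitv;       /* guest cr.itv: vector 7:0, mask bit 16 */
        int64_t         itc_offset; /* guest ITC = machine ITC + itc_offset  */
        struct ac_timer vtimer;     /* per-VP timer emulating the guest ITM  */
    };

    /* Guest reads of ar.itc are served from the per-VP offset, so the
     * machine ITC never has to be rewritten on behalf of a guest. */
    static inline uint64_t vcpu_get_itc(struct vtime *vt)
    {
        return ia64_get_itc() + vt->itc_offset;
    }

    /* A guest write of ITC only updates the offset; other domains and
     * the HV keep their own view of time. */
    static inline void vcpu_set_itc(struct vtime *vt, uint64_t val)
    {
        vt->itc_offset = val - ia64_get_itc();
    }
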
> as ac_timer is not needed for guests.
What are you referring to here? I use the ac_timer mechanism only to
fire the guest vtimer.
>
> Your example of 16 VMs running each with 4 VPs doesn't
> result in 64x timer IRQs because a guest can only be
> delivered a timer tick when it is running and can only
It is really hard to see how the current approach can support multiple
VMs. If you only set the machine ITM to the nearest guest vITM or the
HV's next ITM, how can you support domain N's vtimer? Suppose domain N
is switched out while vITCn < vITMn and is switched back in when
vITCn > vITMn. What will you do? Inject the timer interrupt
immediately? I see problems here, unless you already know that this
guest is waiting for the vITM interrupt.
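
What I have in mind for that case is the schedule-in check in steps 5
and 6 of my proposal, quoted below. A rough sketch, reusing the vtime
structure sketched earlier; vcpu_pend_interrupt() is the existing pend
path I mention later, but its exact signature, the itc_cycles_to_ns()
helper, and the add_ac_timer()/NOW() usage are assumptions rather than
the current code:

    /* Called when a VP is scheduled back onto a logical processor
     * (proposal step 6); sketch only. */
    static void vtimer_schedule_in(struct vcpu *v, struct vtime *vt)
    {
        uint64_t guest_itc = ia64_get_itc() + vt->itc_offset;

        if (vt->vitv & (1UL << 16))    /* vITV mask bit set: timer masked */
            return;

        if (guest_itc >= vt->vitm) {
            /* vITM passed while the VP was descheduled: deliver the
             * guest timer interrupt now rather than waiting. */
            vcpu_pend_interrupt(v, vt->vitv & 0xff);
        } else {
            /* Not yet due: re-arm the per-VP ac_timer for the remaining
             * ITC cycles (itc_cycles_to_ns() is a hypothetical helper). */
            vt->vtimer.expires = NOW() + itc_cycles_to_ns(vt->vitm - guest_itc);
            add_ac_timer(&vt->vtimer);
        }
    }
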
> change ITM when it is running. Also, I think SMP
> OS's generally choose a single processor to handle clock ticks
> rather than have each processor get interrupted. Thus
Will the other LPs not use the ITC timer? Or does your xenolinux use
the ITC timer only on the BSP? (But yes, only the BSP accounts for
clock ticks.)
> the timer should fire at most twice as frequently as
> the maximum frequency of Xen and all the domains.
>
> E.g. In the current implementation, each Linux domain
> asks for 1024 ticks/second and Xen itself asks for
> 1024 ticks/second. (The frequency for Xen is probably too
> high but that's what it's set to right now.) No matter
> how many domains are running, the timer will fire at
> most 2048/second.
>
> If the guest sets ITC, an offset is used as you suggest
> in your proposal. I don't think this is implemented
> yet (because Linux doesn't set ITC).
>
> On rereading your proposal, I'm not sure I see how it is
> different from the current implementation, other than that
> you use the ac_timer queue to call vcpu_pend_interrupt
> and the current implementation uses ITM directly, keeping
> track of whether the next tick was for the domain or
> for Xen itself.
>
>> -----Original Message-----
>> From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>> [mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf
>> Of Dong, Eddie
>> Sent: Saturday, April 30, 2005 2:47 AM
>> To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: xen-devel
>> Subject: [Xen-ia64-devel] machine ITM = nearest guest ITM vs.
>> full guesttime virtualization
>>
>> Dan:
>> For the guest time (vtimer) implementation, the current approach
>> is to set the machine ITM to the nearest guest ITM (or the HV's next
>> ITM, whichever is nearer) and to set the machine ITC to the guest
>> ITC. Yes, it may have benefits when the number of guest domains is
>> small, but how about my full virtualization suggestion?
>> 1: Each VP keeps an internal data structure that includes at
>> least vITM and an offset from guest ITC to machine ITC. This offset
>> is updated when the guest sets ITC. (Thus guest ITC = machine ITC +
>> offset.)
>> 2: Each time the guest sets ITM while guest ITC < guest ITM and
>> vITV is enabled, we add a vtime_ac_timer for notification.
>> 3: When this vtime_ac_timer is due, the callback function calls
>> vcpu_pend_interrupt to pend the vTimer IRQ (see the sketch after
>> this list).
>> 4: In this way the machine ITC/ITM is fully used by the HV.
>> 5: When the VP is scheduled out, the vtime_ac_timer should be
>> removed, to keep the ac_timer list short and improve scalability.
>> 6: When the VP is scheduled in, the VMM checks whether the timer
>> is due; if it became due while the VP was descheduled, it injects
>> the guest timer IRQ immediately. If it is not yet due, it re-adds
>> the vtime_ac_timer.
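
To make steps 2 and 3 above concrete, the arming path could look
roughly like this. Again a sketch only: the v->arch.vtm field, the
rem_ac_timer()/add_ac_timer() calls, and itc_cycles_to_ns() are
assumptions, not existing Xen/ia64 code:

    /* Step 3: the vtime_ac_timer fired, i.e. the guest's ITM deadline
     * has passed; pend the guest timer interrupt.  The vector comes
     * from the low 8 bits of the guest's cr.itv. */
    static void vtimer_callback(void *data)
    {
        struct vcpu *v = data;
        vcpu_pend_interrupt(v, v->arch.vtm.vitv & 0xff);
    }

    /* Step 2: the guest writes cr.itm.  Arm the per-VP timer only if
     * vITV is unmasked and the new deadline is still in the future. */
    static void vcpu_set_itm(struct vcpu *v, uint64_t val)
    {
        struct vtime *vt = &v->arch.vtm;
        uint64_t guest_itc = ia64_get_itc() + vt->itc_offset;

        vt->vitm = val;
        rem_ac_timer(&vt->vtimer);              /* drop any stale timer */
        if ((vt->vitv & (1UL << 16)) || val <= guest_itc)
            return;                             /* masked or already past */
        vt->vtimer.expires = NOW() + itc_cycles_to_ns(val - guest_itc);
        add_ac_timer(&vt->vtimer);
    }
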
>>
>> Pros for the current implementation:
>> 1: The guest timer fires at a much more accurate time.
>>
>> Cons:
>> 1: Serious scalability issue. If there are 16 VMs running, each
>> with 4 VPs, the current implementation will see 64 times more HV
>> timer IRQs.
>> 2: If domain N sets ITC, I am afraid the current implementation
>> will have a hard time handling it.
>> 3: HV jiffies are hard to track, including stime_irq,
>> get_time_delta(), and the Xen common macro NOW().
>>
>> Pros for full guest time virtualization:
>> 1: Good scalability. Each LP sees at most one vtime_ac_timer
>> pending in the ac_timer list, no matter how many VMs exist.
>> 2: Seamless for domain0 and domain N.
>>
>> Cons:
>> 1: It may fire a little later than the exact expected time.
>>
>>
>>
>> This approach can also be used for the x86 local APIC timer.
>> Eddie
>>
>>
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel