
RE: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate



Dave Winchell wrote:
> Dong, Eddie wrote:
> 

>> 
>> That is possible, so we should increase 1000 to something larger.
>> Would making it around 10s be OK?
>> 
>> 
>> 
> Agreed.

Thanks! And I will wait for your patches :-)
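
(For reference: one plausible reading of the "1000 ... around 10s"
exchange above is a cap on how far behind a virtual periodic timer may
fall before the remaining missed ticks are simply dropped. The sketch
below only illustrates that kind of check; the names MAX_CATCHUP_NS and
missed_ticks_to_inject are hypothetical and this is not the actual Xen
vpt code.)

#include <stdint.h>

#define NS_PER_SEC     1000000000ULL
#define MAX_CATCHUP_NS (10ULL * NS_PER_SEC)   /* the ~10s bound discussed above */

/* How many back ticks to inject for a periodic timer with period
 * 'period_ns' that last fired at 'last_fire_ns', given 'now_ns'. */
static uint64_t missed_ticks_to_inject(uint64_t now_ns,
                                       uint64_t last_fire_ns,
                                       uint64_t period_ns)
{
    uint64_t behind = now_ns - last_fire_ns;

    if (behind > MAX_CATCHUP_NS)
        behind = MAX_CATCHUP_NS;   /* cap the backlog instead of replaying it all */

    return behind / period_ns;
}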

>> 
>> Just curious: why do you favor the PIT instead of the HPET?
>> Does HPET bring more deviation?
>> 
>> 
> We started with the PIT because it kept such good time for
> 32-bit Linux. Based on this, we thought that the problems with the
> 64-bit PIT would be manageable.
> 
> One of these days we will characterize the HPET.
> Given that the RTC performs well, I would expect the HPET to do
> well too.
> If not, the reasons could be investigated.

Yes!

> 
>> 
>> If we rely on the guest to pick up the lost ticks, why not just do it
>> thoroughly? I.e., even ticks missed while descheduled could be left
>> for the guest to pick up.
>> 
>> 
> I have considered this. I was worried that if the descheduled period
> was too large, the guest would do something funny, like declare lost
> to be 1 ;-)
> However, the descheduled periods are probably no longer than the
> interrupt-disabled periods, given some of the problems we have with
> guests in spinlock_irq code. Also, since we have the Linux guest code,
> and have been relying on being able to read it to set timekeeping
> policy, we can see that it doesn't set lost to 1.
> 
> Actually, the more I think about this, the more I like the idea.
> It would mean that we wouldn't have to deliver all those pent-up
> interrupts to the guest. It solves some other problems as well.
> We could probably use this policy for most guests and timekeeping
> sources. 32-bit Linux with the PIT might be the exception.
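
(For reference, the guest-side behaviour being relied on is roughly the
following. This is a simplified sketch modelled loosely on the 64-bit
Linux timer interrupt, not the actual kernel code: on each tick the
guest measures how far its time source has advanced and credits all of
the elapsed jiffies, rather than assuming exactly one tick passed.)

#include <stdint.h>
#include <time.h>

#define HZ            250
#define NSEC_PER_TICK (1000000000ULL / HZ)

static uint64_t jiffies;
static uint64_t last_tick_ns;   /* time of the last accounted tick */

/* Stand-in for whatever time source the guest trusts (TSC, HPET,
 * PIT count, ...); the host monotonic clock is used here only so the
 * sketch is self-contained. */
static uint64_t read_clock_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Called on each (virtual) timer interrupt. */
static void timer_interrupt(void)
{
    uint64_t now, ticks;

    now = read_clock_ns();
    if (last_tick_ns == 0)
        last_tick_ns = now;     /* first tick: initialise the baseline */

    ticks = (now - last_tick_ns) / NSEC_PER_TICK;
    if (ticks == 0)
        ticks = 1;              /* always account for this tick */

    /* Credit every missed tick, not just one ("lost = 1"); this is what
     * lets the hypervisor skip replaying a backlog of interrupts after
     * a long descheduled period. */
    jiffies += ticks;
    last_tick_ns += ticks * NSEC_PER_TICK;
}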

Great!

Eddie

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

