
RE: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2



>From: NISHIGUCHI Naoki
>Sent: Thursday, January 15, 2009 10:05 AM
>>      4. issues left:
>>              a. Abrupt glitches are still generated when the
>> QEMU emulated mouse is used and the mouse is moved quickly in
>> guest A. With a USB mouse/keyboard passed through to guest A,
>> there are no glitches.
>
>I also noticed that. Though I don't know the precise cause, I
>found that dom0 and guest A would consume a large amount of CPU
>time (hundreds of milliseconds) in such a situation. In this
>case, the priority of dom0 and guest A falls rapidly, and guest
>B runs until the priority of dom0 and guest A becomes BOOST
>again. In the worst case, this takes about 120ms.

I remember that Disheng once told me that BOOST only happens
when a vcpu is woken up and its current priority is UNDER. In
your case guest A should be OVER after running for hundreds of
milliseconds, and then it must wait long enough to become UNDER
before it can be BOOSTed. If this is the case, your enhancement
of the BOOST level seems to solve only part of the latency
issue. Either assigning a static priority, or adding more BOOST
sources (events, interrupts, etc.) looks like a more complete
solution.

>
>>              b. vcpu migration. As said before, without the
>> vcpus pinned, glitches are obvious.
>
>I think that this issue could be solved by adding a condition
>for migrating the vcpu, e.g. if the vcpu has boost credit,
>don't migrate it.

Isn't that overkill? What if you already have 3 BOOST vcpus in
the runqueue of the current cpu, while the other cpus are all
running OVER vcpus? Boost by itself is not the only determinative
factor for migration; what you really care about is the relative
priority system-wide.

Thanks,
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

