On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
> Hi George,
>
>
> Thanks for your reply. I have similar ideas to yours: adding another
> parameter that indicates the required latency, and then letting the
> scheduler determine the latency characteristics of a VM automatically.
> Firstly, adding another parameter and letting users set its value in
> advance sounds similar to SEDF. But sometimes the configuration
> process is hard and inflexible when the workloads in a VM are complex.
> So in my opinion, a task-aware scheduler is better. However, manual
> configuration can help us verify the effectiveness of the new
> parameter.
Great! Sounds like we're on the same page.
> On the other hand, as you described, it is also not easy to make the
> scheduler determine the latency characteristics of a VM accurately
> from the information we can get from the hypervisor, for instance
> delayed interrupts. Therefore, the key point for me is to find and
> implement a scheduling helper that indicates which VM should be
> scheduled soon.
Remember though -- you can't just give a VM more CPU time. Giving a VM
more CPU at one time means taking CPU time away at another time. I
think the key is to think the opposite way -- taking away time from a
VM by giving it a shorter timeslice, so that you can give time back when
it needs it.
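Just to make that concrete, here's a rough sketch of the kind of trade
I have in mind. This isn't credit-scheduler code -- the names and
numbers are all made up -- it's just to illustrate "pay with a shorter
timeslice, get preemption on wakeup in return":

    #include <stdbool.h>

    /* Hypothetical per-VCPU state; latency_sensitive would be set
     * either by an admin knob or by the scheduler itself. */
    struct sched_vcpu {
        bool latency_sensitive;
        /* ... credits, runqueue links, etc. ... */
    };

    #define SLICE_NORMAL_MS  30
    #define SLICE_LATENCY_MS  1

    /* A latency-sensitive VCPU runs in short bursts... */
    static int pick_timeslice(const struct sched_vcpu *v)
    {
        return v->latency_sensitive ? SLICE_LATENCY_MS : SLICE_NORMAL_MS;
    }

    /* ...and in exchange is allowed to preempt a normal VCPU the
     * moment it wakes up, e.g. when an interrupt is delivered. */
    static bool should_preempt(const struct sched_vcpu *waking,
                               const struct sched_vcpu *curr)
    {
        return waking->latency_sensitive && !curr->latency_sensitive;
    }

The short timeslice is what makes the immediate preemption fair: over a
full accounting period the VM still gets no more CPU than before, it
just gets it at the times it actually needs it.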
> For example, for TCP networking, we could implement a tool similar to
> a packet sniffer to capture packets and analyze their header
> information to infer the type of workload. Then the analysis result
> can help the scheduler make a decision. In fact, not all I/O-intensive
> workloads require low latency; some of them only require high
> throughput. Of course, scheduling latency significantly impacts
> throughput (you handled this problem with the boost mechanism to some
> extent).
The boost mechanism (and indeed the whole credit1 scheduler) was
actually written by someone else. :-) And although it's good in theory,
the way it's implemented actually causes some problems.
I've just been talking to one of our engineers here who used to work for
a company which sold network cards. Our discussion convinced me that we
shouldn't really need any more information about a VM than the
interrupts which have been delivered to it: even devices which go into
polling mode do so for a relatively brief period of time, then re-enable
interrupts again.
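To sketch what I mean (this is illustrative only, not existing
hypervisor code, and every name in it is invented): count the
interrupts delivered to each VCPU over a short window, and treat a VCPU
that keeps receiving them as latency-sensitive. If the guest driver
drops into polling mode the interrupts stop arriving and the flag
decays back to false by itself; since polling periods are brief, the
flag comes back quickly once interrupts are re-enabled.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-VCPU interrupt accounting. */
    struct vcpu_irq_stats {
        uint32_t irqs_this_window;  /* interrupts delivered this window */
        bool     latency_sensitive; /* derived flag for the scheduler   */
    };

    #define IRQ_THRESHOLD 10        /* arbitrary "chatty enough" cutoff */

    /* Called whenever an interrupt is delivered to the VCPU. */
    static void note_irq_delivered(struct vcpu_irq_stats *s)
    {
        s->irqs_this_window++;
    }

    /* Called from a periodic scheduler tick, say every 10ms. */
    static void irq_window_rollover(struct vcpu_irq_stats *s)
    {
        s->latency_sensitive = (s->irqs_this_window >= IRQ_THRESHOLD);
        s->irqs_this_window = 0;
    }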
> What I want to do is reduce the latency only for VMs which require
> low latency, while postponing other VMs and using other techniques
> such as packet offloading to compensate for their loss and improve
> their throughput.
>
>
> This is just my rough idea, and there are still many open problems. I
> hope I can discuss with you often and share our results. Thanks very
> much.
Yes, I look forward to seeing the results of your work. Are you going
to be doing this on credit2?
Peace,
-George