
Re: [Xen-devel] [PATCH v2 05/10] xen: credit2: implement yield()



On Fri, 2016-09-30 at 13:52 +0100, George Dunlap wrote:
> On 30/09/16 03:53, Dario Faggioli wrote:
> > 
> > When a vcpu explicitly yields, it is usually giving
> > us advice along the lines of "let someone else run
> > and come back to me in a bit."
> > 
> > Credit2 isn't, so far, doing anything when a vcpu
> > yields, which means a yield is basically a NOP (well,
> > actually, it's pure overhead, as it causes the scheduler
> > to kick in, but the result is --at least 99% of the time--
> > that the very same vcpu that yielded continues to run).
> > 
> > Implement a "preempt bias", to be applied to yielding
> > vcpus. Basically, when evaluating what vcpu to run next,
> > if a vcpu that has just yielded is encountered, we give
> > it a credit penalty, and check whether there is anyone
> > else better suited to take over the cpu (of course, if
> > there isn't, the yielding vcpu will continue to run).
> > 
> > The value of this bias can be configured with a boot
> > time parameter, and the default is set to 1 ms.
> > 
> > Also, add a yield performance counter, and fix the
> > style of a couple of comments.
> > 
> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> 
> Hmm, I'm sorry for not asking this earlier -- but what's the
> rationale for having a "yield bias", rather than just choosing the
> next runnable guy in the queue on yield, regardless of what his
> credits are?
> 
Flexibility, I'd say. Flexibility of deciding 'how strong' a yield
should be and, e.g., of having two different values for two cpupools
(not possible with just this patch, but not a big deal to do in
future).

Sure, if we only think about the spinlock case, that's not really
important (if useful at all). OTOH, if you think of yielding as a way
of saying: <<hey, I've still got things to do, and I could go on, but
if there's anyone that has something more important, I'm fine letting
him run for a while>>, then this implementation gives you a way of
quantifying that "while".
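To make that a bit more concrete, here is a rough, self-contained
sketch of the bias idea. None of this is actual Xen code: the types,
field names and the single sorted runqueue are made up purely for
illustration (the real logic lives in xen/common/sched_credit2.c and
looks rather different).

#include <stdbool.h>
#include <stddef.h>

/* Purely illustrative types, not Xen's actual structures. */
struct toy_vcpu {
    long long credit;        /* remaining credit, here in ns of CPU time    */
    bool has_yielded;        /* set when the vcpu calls yield               */
    struct toy_vcpu *next;   /* runqueue, kept sorted by credit, descending */
};

static long long yield_bias = 1000000;   /* the "knob": 1ms by default */

/*
 * Yield-bias approach: pretend the yielding vcpu has 'yield_bias' fewer
 * credits than it really has, so any waiter within that margin preempts
 * it.  If nobody qualifies, the yielding vcpu just keeps running.
 */
static struct toy_vcpu *
pick_next(struct toy_vcpu *runq, struct toy_vcpu *scurr)
{
    long long scurr_credit = scurr->credit;

    if ( scurr->has_yielded )
    {
        scurr_credit -= yield_bias;
        scurr->has_yielded = false;   /* the penalty applies only once */
    }

    /* The head of the sorted runqueue is the best alternative. */
    if ( runq != NULL && runq->credit > scurr_credit )
        return runq;

    return scurr;
}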

But of course, more flexibility often means more complexity. And in
this case, rather than complexity in the code, the hard part would be
coming up with a good value for a specific workload.

IOW, right now we have no yield at all. Instead of adding a "yield
switch", it's implemented as a "yield knob", which has its upsides and
downsides. I personally like knobs a lot more than switches... But I
see the risk of people starting to turn the knob, expecting wonders,
and being disappointed (and complaining!) if things don't improve for
them! :-P

> If snext has 9ms of credit and the top runnable guy on the runqueue
> has 7.8ms of credit, doesn't it make sense to run him instead of
> making snext run again and burn his credit?
> 
Again, in the one use case for which yield is most popular (the
spinlock one), what you say totally makes sense. Which makes me think
that, even if we were to keep (or go back to) using the bias, I'd
probably go with a default value higher than 1ms worth of credits.

*ANYWAY*, you asked for a rationale, and this is mine. All this being
said, though, I honestly think the simple solution you're hinting at
is better, at least for now. In fact, I've just tried it: doing as you
suggest here works, and the code is simpler.
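Just to show what I mean, here is the same toy model as in the sketch
above (again, purely illustrative, reusing the made-up types from
before), with the check changed so that a yield simply hands the cpu
over to the head of the runqueue:

/*
 * Simpler variant: on yield, just hand the pcpu to the first runnable
 * vcpu in the queue, whatever its credits are; only if the queue is
 * empty does the yielding vcpu keep running.
 */
static struct toy_vcpu *
pick_next_on_yield(struct toy_vcpu *runq, struct toy_vcpu *scurr)
{
    if ( scurr->has_yielded )
    {
        scurr->has_yielded = false;

        if ( runq != NULL )
            return runq;            /* ignore credits entirely */
    }

    /* No yield (or empty runqueue): the usual credit comparison. */
    if ( runq != NULL && runq->credit > scurr->credit )
        return runq;

    return scurr;
}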

Therefore, I'm going for that. :-)

I've just seen you have applied most of the series already. I'll send a
v3 consisting only of the remaining patches, with this one modified as
suggested.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

