
Re: [Xen-devel] Linux spin lock enhancement on xen



 On 08/18/2010 07:52 PM, Mukesh Rathor wrote:
>> My view is you should just put any VCPU which has nothing to do to
>> sleep, and let Xen sort out the scheduling of the remainder.
> Agree for the most part. But if, with a simple solution, we can spare
> the cost of a vcpu coming onto a cpu, realizing it has nothing to do,
> and putting itself back to sleep, we've just saved cycles. Often we are
> looking for tiny gains in the benchmarks against the competition.

Well, how does your proposal compare to mine?  Is it more efficient?

> Yes, we don't want to micromanage xen's scheduler. But if a guest knows
> something that the scheduler does not, and has no way of knowing, then
> it would be nice to be able to exploit that. I didn't think a vcpu
> telling xen that it's not making forward progress was intrusive.

Well, blocking on an event channel is a good hint.  And what's more, it
allows the guest even more control because it can choose which vcpu to
wake up when.

> Another approach, perhaps better, is a hypercall that allows a guest to
> temporarily boost a vcpu's priority.  What do you guys think about that?
> This would be akin to a system call allowing a process to boost its
> priority, or to some kernels where a thread holding a lock gets a
> temporary priority bump because a waiter tells the kernel to.

That kind of thing has many pitfalls - not least, how do you make sure
it doesn't get abused?  A "proper" mechanism to deal with this would be
to expose some kind of complete vcpu blocking-dependency graph to Xen to
inform its scheduling decisions, but that's probably overkill...

>> I'm not sure I understand this point.  If you're pinning vcpus to
>> pcpus, then presumably you're not going to share a pcpu among many,
>> or any vcpus, so the lock holder will be able to run any time it
>> wants.  And a directed yield will only help if the lock waiter is
>> sharing the same pcpu as the lock holder, so it can hand over its
>> timeslice (since making the directed yield preempt something already
>> running in order to run your target vcpu seems rude and ripe for
>> abuse).
> No. If a customer licenses 4 cpus and runs a guest with 12 vcpus, you
> now have 12 vcpus confined to the 4 physical cpus.

In one domain?  Why would they do that?

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

