xen-devel

Re: [Xen-devel] Linux spin lock enhancement on xen

To: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, "Mukesh Rathor" <mukesh.rathor@xxxxxxxxxx>
Subject: Re: [Xen-devel] Linux spin lock enhancement on xen
From: "Ky Srinivasan" <ksrinivasan@xxxxxxxxxx>
Date: Tue, 17 Aug 2010 08:34:49 -0600
Cc:
Delivery-date: Tue, 17 Aug 2010 07:35:34 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100816183357.08623c4c@xxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20100816183357.08623c4c@xxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

>>> On 8/16/2010 at 9:33 PM, in message
<20100816183357.08623c4c@xxxxxxxxxxxxxxxxxxxx>, Mukesh Rathor
<mukesh.rathor@xxxxxxxxxx> wrote:
> Hi guys,
> 
> Check out the attached patches. I changed the spin lock semantics so the
> lock contains the vcpu id of the vcpu holding it. This then tells xen
> to make that vcpu runnable if not already running:
> 
> Linux:
>    spin_lock()
>        if (try_lock() == failed)
>            loop X times
>            if (try_lock() == failed)
>                sched_op_yield_to(vcpu_num of holder)
>                start again;
>            endif
>        endif
> 
> Xen:
>      sched_op_yield_to:
>           if (vcpu_running(vcpu_num arg))
>               do nothing
>           else
>               vcpu_kick(vcpu_num arg)
>               do_yield()
>           endif
> 
> 
> In my worst-case test scenario, I get about a 20-36% improvement when the
> system is two to three times overprovisioned.
> 
> Please provide any feedback. I would like to submit an official patch for
> SCHEDOP_yield_to in Xen.
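
For concreteness, a minimal C sketch of the guest half of the scheme
quoted above (untested; the lock layout, spin count, and hypercall
wrapper are illustrative, not the actual patches):

/* Sketch only: the lock word stores the vcpu id of the holder (or -1
 * when free), so a contended waiter knows which vcpu to yield to. */

#define LOCK_FREE      (-1)
#define SPIN_THRESHOLD 1024            /* the "loop X times" bound */

typedef struct {
    int holder;                        /* vcpu id of holder, or LOCK_FREE */
} yield_spinlock_t;

/* Assumed wrapper for the proposed SCHEDOP_yield_to hypercall: Xen
 * kicks the target vcpu if it is not running, then yields. */
extern void sched_op_yield_to(int vcpu);

static int try_lock(yield_spinlock_t *lock, int my_vcpu)
{
    int expected = LOCK_FREE;
    return __atomic_compare_exchange_n(&lock->holder, &expected, my_vcpu,
                                       0, __ATOMIC_ACQUIRE,
                                       __ATOMIC_RELAXED);
}

void spin_lock(yield_spinlock_t *lock, int my_vcpu)
{
    for (;;) {
        int i, h;

        if (try_lock(lock, my_vcpu))
            return;
        for (i = 0; i < SPIN_THRESHOLD; i++)
            if (try_lock(lock, my_vcpu))
                return;
        /* Still contended: ask Xen to run the holder instead of us.
         * The field is read racily; Xen treats it only as a hint. */
        h = __atomic_load_n(&lock->holder, __ATOMIC_RELAXED);
        if (h != LOCK_FREE)
            sched_op_yield_to(h);
    }
}

void spin_unlock(yield_spinlock_t *lock)
{
    __atomic_store_n(&lock->holder, LOCK_FREE, __ATOMIC_RELEASE);
}

The Xen side would simply kick the named vcpu if it is not already
running and then yield, as in the quoted pseudocode.
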
While I agree that a directed yield is a useful construct, I am not sure how
this protocol would deal with ticket spin locks, where you would want to
implement some form of priority inheritance: if the vcpu you are yielding to
is itself blocked on another (ticket) spin lock, you would want to yield to
the owner of that other lock instead. Clearly, this dependency information is
only available in the guest, so that is where we would need to implement this
logic. I think Jan's "enlightened" spin locks implemented this kind of logic.
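
In guest code, that chained yield might look something like the sketch
below; the per-vcpu spinning_on bookkeeping and all names are
hypothetical, and the walk must be bounded because the chain can change
underneath us:

#define MAX_VCPUS 64
#define MAX_CHAIN 8                    /* bound the walk; chains can race */

typedef struct spinlock_dep {
    int holder;                        /* vcpu id of owner, or -1 if free */
} spinlock_dep_t;

/* Per-vcpu: the lock this vcpu is itself spinning on (NULL if none). */
static spinlock_dep_t *spinning_on[MAX_VCPUS];

extern void sched_op_yield_to(int vcpu);

static void yield_to_chain(spinlock_dep_t *lock)
{
    int target = lock->holder;
    int depth;

    /* Follow holder -> lock it waits on -> that lock's holder, bounded,
     * so the yield lands on a vcpu that can actually make progress. */
    for (depth = 0; depth < MAX_CHAIN && target >= 0; depth++) {
        spinlock_dep_t *next = spinning_on[target];
        if (next == NULL || next->holder < 0)
            break;
        target = next->holder;
    }
    if (target >= 0)
        sched_op_yield_to(target);
}

/* Lock slow path: publish our own dependency, then yield down the chain. */
static void spin_wait(spinlock_dep_t *lock, int my_vcpu)
{
    spinning_on[my_vcpu] = lock;
    yield_to_chain(lock);
    spinning_on[my_vcpu] = NULL;
}
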

Perhaps another way to deal with this generic problem of inopportune guest
preemption is to coordinate preemption: allow the guest to notify the
hypervisor that it is in a critical section. If the no-preempt state is set,
the hypervisor can choose to defer the preemption by giving the guest vcpu in
question an additional time quantum to run. In that case, the hypervisor
would post the fact that a preemption is pending to the guest, and the guest
vcpu would be expected to relinquish control to the hypervisor as part of
exiting the critical section. Since guest preemption is not a "correctness"
issue, the hypervisor can choose not to honor the "no-preempt" state if it
detects that the guest is buggy (or malicious). Much of what we have been
discussing with "enlightened" spin locks is how to recover from an
inopportune guest preemption after the fact; the coordinated-preemption
protocol described here tries to avoid getting into such pathological
situations in the first place. If I recall correctly, there were some patches
for doing this form of preemption management.
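
The guest half of such a handshake could be as small as the following
sketch (the preempt_info layout and the yield wrapper are hypothetical;
the hypervisor would check no_preempt when it wants to preempt and set
preempt_pending when it defers):

#include <stdint.h>

struct preempt_info {
    volatile uint8_t no_preempt;       /* guest: "I am in a critical section" */
    volatile uint8_t preempt_pending;  /* Xen: "a deferred preemption is owed" */
};

/* One such structure per vcpu, in a page shared with the hypervisor. */
extern struct preempt_info *this_vcpu_preempt_info(void);
extern void hypervisor_yield(void);    /* e.g. plain SCHEDOP_yield */

static void critsec_enter(void)
{
    this_vcpu_preempt_info()->no_preempt = 1;
    __atomic_thread_fence(__ATOMIC_SEQ_CST); /* flag visible before lock work */
}

static void critsec_exit(void)
{
    struct preempt_info *pi = this_vcpu_preempt_info();

    pi->no_preempt = 0;
    __atomic_thread_fence(__ATOMIC_SEQ_CST);
    /* If Xen deferred a preemption for us, hand the cpu back now.  Xen
     * remains free to ignore no_preempt if the guest abuses it. */
    if (pi->preempt_pending) {
        pi->preempt_pending = 0;
        hypervisor_yield();
    }
}
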

Regards,

K. Y
 


> 
> thanks,
> Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel