Re: [Xen-devel] Re: HVM guests and pvlocks not working as expected

To: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: HVM guests and pvlocks not working as expected
From: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
Date: Mon, 05 Sep 2011 17:43:23 +0000
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 05 Sep 2011 10:44:25 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <alpine.DEB.2.00.1109021831280.12963@kaball-desktop>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4E5F5B12.4080206@xxxxxxxxxxxxx> <alpine.DEB.2.00.1109021258260.12963@kaball-desktop> <alpine.DEB.2.00.1109021831280.12963@kaball-desktop>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.20) Gecko/20110805 Lightning/1.0b2 Thunderbird/3.1.12
On 02.09.2011 17:34, Stefano Stabellini wrote:
> On Fri, 2 Sep 2011, Stefano Stabellini wrote:
>> On Thu, 1 Sep 2011, Stefan Bader wrote:
>>> After much joy with this, I thought I'd post it to a bigger audience. After
>>> migrating to Xen 4.1.1, booting HVM guests had several issues. Some were
>>> related to interrupts not being set up correctly (for which Stefano has
>>> posted patches), and even with those, 3.0 guests seem to hang for me while
>>> 2.6.38 or older kernels were ok.
>>>
>>> After digging deeply into this, I think I found the issue. However, if that
>>> is true, it seems rather lucky that pv spinlocks in HVM ever worked for
>>> anybody.
>>>
>>> The xen_hvm_smp_init() call changes the smp_ops hooks, one of which is
>>> smp_prepare_cpus. This is done in start_kernel, and at this point in time
>>> the pv_lock_ops are unchanged and still point to the ticket versions.
>>> Later in start_kernel, check_bugs is called, and part of that takes the
>>> pv_lock_ops and patches the kernel with the correct jumps.
>>> _After_ that, kernel_init is called, which in turn does the
>>> smp_prepare_cpus that now changes the pv_lock_ops again, *but does not*
>>> run any patching again. So the _raw_spin_*lock calls still use the ticket
>>> calls.
>>>
>>> start_kernel
>>>   setup_arch -> xen_hvm_smp_init (set smp_ops)
>>>   ...
>>>   check_bugs -> alternative_instructions (patches pv_locks sites)
>>>   ...
>>>   rest_init (triggers kernel_init as a thread)
>>>     kernel_init
>>>       ...
>>>       smp_prepare_cpus (calls xen_init_spinlocks through smp_ops hook)
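>>>
>>> Roughly, the problem can be modelled like this (a stripped-down userspace
>>> sketch, not the actual kernel code; the names just mirror the ones above):
>>>
>>>   /* Call sites are patched once from the ops table (as
>>>    * alternative_instructions does); updating the table afterwards
>>>    * does not reach the already-patched sites. */
>>>   #include <stdio.h>
>>>
>>>   static void ticket_lock(void) { puts("ticket_lock"); }
>>>   static void xen_lock(void)    { puts("xen_lock"); }
>>>
>>>   static struct { void (*spin_lock)(void); } pv_lock_ops =
>>>       { .spin_lock = ticket_lock };
>>>
>>>   static void (*patched_site)(void);          /* one patched call site */
>>>
>>>   int main(void)
>>>   {
>>>       patched_site = pv_lock_ops.spin_lock;   /* check_bugs: patching  */
>>>       pv_lock_ops.spin_lock = xen_lock;       /* xen_init_spinlocks,
>>>                                                  runs too late         */
>>>       patched_site();                 /* still prints "ticket_lock"    */
>>>       return 0;
>>>   }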
>>>
>>> To make things even more fun, there is the possibility that some spinlock
>>> functions are forced to be inlined and others are not
>>> (CONFIG_INLINE_SPIN_*). In our special case only two versions of
>>> spin_unlock were inlined. Put that together into a pot with modules, shake
>>> well, and there is the fun. Basically at load time, the non-inlined calls
>>> remain pointing to the unmodified ticket implementation (main kernel). But
>>> the unlock calls (which are inlined) get modified, because the loaded
>>> module gets patched up. One can imagine that this does not work too well.
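>>>
>>> Extending the same sketch (again simplified, with hypothetical names): the
>>> main kernel's call sites are patched early, still with the ticket ops, the
>>> module's call sites at load time with the xen ops, so the two halves of a
>>> lock/unlock pair no longer match:
>>>
>>>   #include <stdio.h>
>>>
>>>   static void ticket_lock(void)   { puts("lock:   ticket"); }
>>>   static void ticket_unlock(void) { puts("unlock: ticket"); }
>>>   static void xen_lock(void)      { puts("lock:   xen"); }
>>>   static void xen_unlock(void)    { puts("unlock: xen"); }
>>>
>>>   struct ops { void (*lock)(void); void (*unlock)(void); };
>>>   static struct ops pv_ops = { ticket_lock, ticket_unlock };
>>>   static struct ops kernel_sites, module_sites;  /* patched call sites */
>>>
>>>   int main(void)
>>>   {
>>>       kernel_sites = pv_ops;       /* check_bugs: kernel image patched  */
>>>       pv_ops.lock   = xen_lock;    /* xen_init_spinlocks (too late)     */
>>>       pv_ops.unlock = xen_unlock;
>>>       module_sites  = pv_ops;      /* module load: patched with xen ops */
>>>
>>>       kernel_sites.lock();         /* non-inlined call -> main kernel   */
>>>       module_sites.unlock();       /* inlined unlock   -> module site   */
>>>       /* lock taken by ticket code, released by xen code */
>>>       return 0;
>>>   }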
>>>
>>> Anyway, even without the secondary issue, I am sure that just replacing
>>> the functions in pv_lock_ops without the spinlock call sites actually
>>> getting modified is not the intended behaviour.
>>>
>>> Unfortunately I have not yet been able to make it work. Any attempt to
>>> move xen_init_spinlocks to be called before check_bugs, or to add another
>>> call to alternative_instructions, results in another hang on boot. At
>>> least the latter method results in a more usable dump for crash, which
>>> shows that one spinlock was taken (slow) and two were spuriously taken
>>> (this is more for me to play with).
>>>
>>> But then, maybe someone has some ideas there...
>>
>> we have two possible ways to solve this problem:
>>
>> 1) we wait for Jeremy's pv ticketlock series to be applied, which should
>> make the data structure internally used by the native ticketlock and the
>> pv ticketlock identical, so a lock taken by the native ticket spin lock
>> function can be freed by the xen pv ticket spin unlock function.
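>>
>> The idea, very roughly (a simplified, single-threaded sketch, not Jeremy's
>> actual patches): as long as both implementations agree on the ticket
>> layout, a lock taken by the native path can be released by the pv path:
>>
>>   #include <stdio.h>
>>
>>   struct ticket_lock { unsigned short head, tail; };
>>
>>   static void native_lock(struct ticket_lock *l)
>>   {
>>       unsigned short me = l->tail++;   /* take a ticket */
>>       while (l->head != me)            /* spin until served */
>>           ;
>>   }
>>
>>   static void pv_unlock(struct ticket_lock *l)
>>   {
>>       l->head++;                       /* serve the next ticket, and ... */
>>       puts("kick any vcpu blocked in the hypervisor");
>>   }
>>
>>   int main(void)
>>   {
>>       struct ticket_lock l = { 0, 0 };
>>       native_lock(&l);                 /* taken by the native fast path */
>>       pv_unlock(&l);                   /* freed by the pv slow path */
>>       return 0;
>>   }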
> 
> this simple patch is enough to make pv ticketlocks work correctly on
> top of Jeremy's pv ticketlocks series
> (cover.1314922370.git.jeremy.fitzhardinge@xxxxxxxxxx):
> 

Right, given that the switch to ticket spinlocks (or actually the patching of
the code the first time) happens after the arch setup has been done, it
sounds reasonable to assume that no spinlock has been used yet.

It's just that when I tried this without the ticketlock patches from Jeremy,
it failed. I will try it again with them. But with Plumbers and some other
stuff this week, I don't know how quickly I can get to that.

-Stefan

> ---
> 
> xen: initialize PV spinlocks before patching is done
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> 
> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> index 3061244..04a84e8 100644
> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -518,7 +518,6 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
>       if (!xen_have_vector_callback)
>               return;
>       xen_init_lock_cpu(0);
> -     xen_init_spinlocks();
>  }
>  
>  static int __cpuinit xen_hvm_cpu_up(unsigned int cpu)
> @@ -546,4 +545,5 @@ void __init xen_hvm_smp_init(void)
>       smp_ops.cpu_die = xen_hvm_cpu_die;
>       smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
>       smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
> +     xen_init_spinlocks();
>  }
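>
> (With this change xen_init_spinlocks runs from xen_hvm_smp_init, i.e.
> during setup_arch, so pv_lock_ops is already in place when check_bugs
> later patches the call sites.)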
> 


