
Re: [PATCH v4] x86/mwait-idle: re-order state entry/exit code a little


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 22 Feb 2022 10:34:14 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 22 Feb 2022 09:34:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.02.2022 10:08, Roger Pau Monné wrote:
> On Fri, Feb 18, 2022 at 09:35:10AM +0100, Jan Beulich wrote:
>> The initial observation is that unlike the original ACPI idle driver we
>> have a 2nd cpu_is_haltable() in here. By making the actual state entry
>> conditional, the emitted trace records as well as the subsequent stats
>> update are at least misleading in case the state wasn't actually entered.
>> Hence they would want moving inside the conditional. At which point the
>> cpuidle_get_tick() invocations could (and hence should) move as well.
>> cstate_restore_tsc() also isn't needed if we didn't actually enter the
>> state.
>>
>> This leaves only the errata_c6_workaround() and lapic_timer_off()
>> invocations outside the conditional. As a result it looks easier to
>> drop the conditional (and come back in sync with the other driver again)
>> than to move almost everything into the conditional.
>>
>> While there also move the TRACE_6D() out of the IRQ-disabled region.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks.

>> --- a/xen/arch/x86/cpu/mwait-idle.c
>> +++ b/xen/arch/x86/cpu/mwait-idle.c
>> @@ -847,26 +847,25 @@ static void mwait_idle(void)
>>  
>>      update_last_cx_stat(power, cx, before);
>>  
>> -    if (cpu_is_haltable(cpu)) {
>> -            if (cx->irq_enable_early)
>> -                    local_irq_enable();
>> +    if (cx->irq_enable_early)
>> +            local_irq_enable();
> 
> Now that I look at this again, we need to be careful with enabling
> interrupts early and the interaction with errata_c6_workaround.  Enabling
> interrupts here could change the result of the check for pending EOIs,
> and thus enter mwait with a condition that could trigger the errata.
> Hopefully CPUIDLE_FLAG_IRQ_ENABLE is only set for C1.
> 
> It might be prudent to only allow setting CPUIDLE_FLAG_IRQ_ENABLE for
> states <= 2.

Well, the justification for enabling was the low exit latency. I don't
expect states > 2 to satisfy this criterion, so I think we're okay
without further precautions added right away.

Jan
