[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] x86/HVM: convert most remaining hvm_funcs hook invocations to alt-call


  • To: Andrew Cooper <amc96@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 30 Nov 2021 15:03:47 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 30 Nov 2021 14:04:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 30.11.2021 14:48, Andrew Cooper wrote:
> On 29/11/2021 09:04, Jan Beulich wrote:
>> The aim being to have as few indirect calls as possible (see [1]),
>> whereas during initial conversion performance was the main aspect and
>> hence rarely used hooks didn't get converted. Apparently one use of
>> get_interrupt_shadow() was missed at the time.
>>
>> While I've intentionally left alone the cpu_{up,down}() etc hooks for
>> not being guest reachable, the nhvm_hap_walk_L1_p2m() one can't
>> currently be converted as the framework supports only up to 6 arguments.
>> Down the road the three booleans perhaps want folding into a single
>> parameter/argument.
> 
> To use __initdata_cf_clobber, all hooks need to use altcall().

Right, but that's not going to be sufficient: the data members then also
need to move elsewhere, as I understand it.

> There is also an open question on how to cope with things such as the
> TSC scaling hooks, which are only conditionally set in {vmx,svm}_hvm_funcs.

Why's that an open question? The requirement is that the pointers be
set before the 2nd pass of alternatives patching (it's really just
one: setup()). That's already the case, or else the hook couldn't be
invoked via altcall. And that's also not the only hook getting set
dynamically.

> As for >6 arguments, we really should discourage such functions
> generally, because spilling parameters to the stack is not an efficient
> thing to do in the slightest.

I was thinking so too, but didn't want to fold the change in here.

>> While doing this, drop NULL checks ahead of .nhvm_*() calls when the
>> hook is always present. Also convert the .nhvm_vcpu_reset() call to
>> alternative_vcall(), as the return value is unused and the caller has
>> currently no way of propagating it.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>.

Thanks, but as per below further changes may be done here.

>  However...
> 
>>
>> [1] https://lists.xen.org/archives/html/xen-devel/2021-11/msg01822.html
>> ---
>> Another candidate for dropping the conditional would be
>> .enable_msr_interception(), but this would then want the wrapper to also
>> return void (hence perhaps better done separately).
> 
> I think that's a side effect of Intel support being added first, and
> then an incomplete attempt to add AMD support.
> 
> Seeing as support isn't disappearing, I'd be in favour of reducing it to
> void.  The sole caller already doesn't check the return value.
> 
> 
> If I do a prep series sorting out nhvm_hap_walk_L1_p2m() and
> enable_msr_interception(), would you be happy rebasing this patch and
> adjusting every caller, including cpu_up/down() ?

Sure. I don't think cleaning up enable_msr_interception() is a prereq
though. The potential for doing so was merely an observation while
going through the hook uses.

Since converting the remaining hook invocations isn't sufficient on its
own, and since the patch is already quite large, I wonder though whether
it wouldn't make sense to make further conversions the subject of a
fresh patch. I could then commit this one with your R-b (and further
acks, once they have trickled in) once the tree is fully open again.

>> --- a/xen/arch/x86/monitor.c
>> +++ b/xen/arch/x86/monitor.c
>> @@ -270,7 +270,8 @@ int arch_monitor_domctl_event(struct dom
>>          ad->monitor.descriptor_access_enabled = requested_status;
>>  
>>          for_each_vcpu ( d, v )
>> -            hvm_funcs.set_descriptor_access_exiting(v, requested_status);
>> +            alternative_vcall(hvm_funcs.set_descriptor_access_exiting, v,
>> +                              requested_status);
> 
> (For a future change...)  It occurs to me that this wants to be:
> 
> for_each_vcpu ( d, v )
>     v->arch.hvm.recalc_intercepts = true;
> 
> and leave the resume path to do the right thing.
> 
> While it's not a big deal for AMD, avoiding the line of VMCS loads on
> Intel really is a big deal.

Ah, yes. But as you say, at some point in the future.

Jan
