
Re: [PATCH v3 2/4] x86/hvm: Disable cross-vendor handling in #UD handler


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
  • Date: Wed, 11 Mar 2026 13:40:44 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 11 Mar 2026 12:41:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed Mar 11, 2026 at 12:06 PM CET, Jan Beulich wrote:
> On 11.03.2026 11:21, Alejandro Vallejo wrote:
>> On Wed Mar 11, 2026 at 10:30 AM CET, Jan Beulich wrote:
>>> On 11.03.2026 10:25, Alejandro Vallejo wrote:
>>>> On Wed Mar 11, 2026 at 9:35 AM CET, Jan Beulich wrote:
>>>>> On 13.02.2026 12:42, Alejandro Vallejo wrote:
>>>>>> -    if ( opt_hvm_fep )
>>>>>> -    {
>>>>>> -        const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
>>>>>> -        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>>>>> -                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>>>>>
>>>>> Why is this initializer not retained?
>>>>
>>>> It is, it's just that the diff is terrible. An unfortunate side effect of
>>>> the removal of the braces. The scope collapsing forces it on top of the
>>>> function, before the emulation context is initialised.
>>>>
>>>> It's set up in steps. walk is unconditionally initialised as insn_fetch,
>>>> and later (after emulate_init_once()), OR'd with PFEC_user_mode for
>>>> DPL == 3. See...
>>>>
>>>>>
>>>>>> -        unsigned long addr;
>>>>>> -        char sig[5]; /* ud2; .ascii "xen" */
>>>>>> -
>>>>>> -        if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>>>>>> -                                        sizeof(sig), hvm_access_insn_fetch,
>>>>>> -                                        cs, &addr) &&
>>>>>> -             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
>>>>>> -                                         walk, NULL) == HVMTRANS_okay) &&
>>>>>> -             (memcmp(sig, "\xf\xb" "xen", sizeof(sig)) == 0) )
>>>>>> -        {
>>>>>> -            regs->rip += sizeof(sig);
>>>>>> -            regs->eflags &= ~X86_EFLAGS_RF;
>>>>>> +    hvm_emulate_init_once(&ctxt, NULL, regs);
>>>>>>  
>>>>>> -            /* Zero the upper 32 bits of %rip if not in 64bit mode. */
>>>>>> -            if ( !(hvm_long_mode_active(cur) && cs->l) )
>>>>>> -                regs->rip = (uint32_t)regs->rip;
>>>>>> +    if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>>>>> +        walk |= PFEC_user_mode;
>>>>
>>>> ... here.
>>>
>>> But that's the point of my question: Why did you split it? All you mean to
>>> do is re-indentation.
>> 
>> Because I need to declare "walk" ahead of the statements. Thus this...
>> 
>>     uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>                      ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>> 
>> must (by necessity) have the declaration placed on top before the emulator
>> context initialisation. The options are...
>> 
>>     uint32_t walk;
>>     [... lines ...]
>>     walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
>>             ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>> 
>> ... or...
>> 
>>     uint32_t walk = PFEC_insn_fetch;
>>     [... lines ...]
>>     if ( ctxt.seg_reg[x86_seg_ss].dpl == 3 )
>>         walk |= PFEC_user_mode;
>> 
>> Line count remains at 3 in both cases, but in the former case there's a
>> comparison, a ternary operator and an OR all adding cognitive load to the
>> same statement. In the latter case there's an assignment in the 1st
>> statement, an if+comparison in a separate line, and a separate OR in the
>> final statement. It's just simpler to mentally parse because the complexity
>> is evenly distributed.
>> 
>> I can see how the current form was preferred to avoid a third line (and
>> then a fourth due to the required newline, doubling the total). But with
>> the rearrangement that's no longer relevant.
>> 
>> If you have a very strong preference for the prior form I could keep it,
>> though I do have a preference myself for the latter out of improved
>> readability.
>
> Strong preference or not - readability is subjective. I prefer the present
> form, where the variable obtains its final value right away. More generally,
> with subjective aspects it may often be better to leave mechanical changes
> (here: re-indentation) as purely mechanical. Things are different with
> objective aspects, like style violations which of course can (and imo
> preferably should) be corrected on such occasions.

Ack

Cheers,
Alejandro
