
Re: [Xen-devel] [PATCH v2 5/9] x86/vmx: Improvements to LBR MSR handling



>>> On 12.06.18 at 10:51, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 12/06/2018 09:15, Jan Beulich wrote:
>>>>> On 08.06.18 at 20:48, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> @@ -3106,14 +3104,13 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>              for ( ; (rc == 0) && lbr->count; lbr++ )
>>>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
>>>                      if ( (rc = vmx_add_guest_msr(v, lbr->base + i)) == 0 )
>>> -                    {
>>>                          vmx_clear_msr_intercept(v, lbr->base + i, VMX_MSR_RW);
>>> -                        if ( lbr_tsx_fixup_needed )
>>> -                            v->arch.hvm_vmx.lbr_fixup_enabled |= FIXUP_LBR_TSX;
>>> -                        if ( bdw_erratum_bdf14_fixup_needed )
>>> -                            v->arch.hvm_vmx.lbr_fixup_enabled |=
>>> -                                FIXUP_BDW_ERRATUM_BDF14;
>>> -                    }
>>> +
>>> +            v->arch.hvm_vmx.lbr_flags |= LBR_MSRS_INSERTED;
>>> +            if ( lbr_tsx_fixup_needed )
>>> +                v->arch.hvm_vmx.lbr_flags |= LBR_FIXUP_TSX;
>>> +            if ( bdw_erratum_bdf14_fixup_needed )
>>> +                v->arch.hvm_vmx.lbr_flags |= LBR_FIXUP_BDF14;
>> Note how the setting of the flags previously depended on
>> vmx_add_guest_msr() having returned success at least once.
> 
> And?
> 
> Unless this sequence succeeds in full, we throw #MC into the
> guest without setting any kind of vMCE state.  It might be the least bad
> option we have available, but it's also not reasonable to expect the
> guest to survive.
> 
> The two ways to fail are ENOMEM and E2BIG.  The former is going to be
> causing other forms of chaos, and the latter isn't going to occur in
> practice because current codepaths in Xen use a maximum of ~40 of the
> 256 available slots.  In the unlikely case that we fail with ENOMEM
> on the first entry, all the fixup logic gets short circuited due to the
> missing memory allocation (so practically 0 extra overhead), and the
> guest will still malfunction.
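
As an aside, here is a minimal sketch of why the ENOMEM-on-first-entry case
adds practically no overhead (structures and names are simplified
assumptions, not the real vmx.c code): if the load/save list was never
allocated, msr_count stays 0, so the per-vmentry fixup loop has nothing to
walk even though the fixup bits are now set unconditionally.

/* Sketch only: simplified stand-ins for the real structures. */
#include <stdint.h>
#include <stddef.h>

struct msr_entry { uint32_t index, mbz; uint64_t data; };

struct arch_vmx {
    struct msr_entry *msr_area;   /* NULL if the allocation failed */
    unsigned int      msr_count;  /* 0 => nothing was ever inserted */
    uint8_t           lbr_flags;  /* may be set even after failure  */
};

static void lbr_tsx_fixup_sketch(struct arch_vmx *vmx)
{
    /* With msr_count == 0 the loop body never runs, so the only cost
     * of the unconditionally-set flag is this empty iteration. */
    for ( unsigned int i = 0; i < vmx->msr_count; i++ )
        vmx->msr_area[i].data ^= 0;   /* placeholder for the real fixup */
}
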
> 
> The error handling here is sufficiently poor that I'm not worried about
> changing one minor corner case.  I'm actually debating whether it would
> be better to make the allocation at vmcs construction time, to avoid
> runtime out-of-memory issues.

With further improved MSR handling down the road, I assume we'll
have some entries in the list in almost all cases, so yes, I think that
would be desirable.
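
For illustration, a rough sketch of the construction-time allocation being
discussed (hypothetical names; the real code would use Xen's allocator and
the existing struct arch_vmx_struct fields): allocating the list up front
turns -ENOMEM into a clean construction-time failure and leaves -E2BIG as
the only runtime error, which current callers (~40 of 256 slots) never hit.

/* Sketch only: illustrates the "allocate at VMCS construction" idea. */
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define MSR_LIST_SLOTS 256                    /* one 4k page of entries */

struct msr_entry { uint32_t index, mbz; uint64_t data; };

struct arch_vmx {
    struct msr_entry *msr_area;
    unsigned int      msr_count;
};

/* Done once, at construction: the only failure is at domain build time. */
static int vmx_create_msr_area(struct arch_vmx *vmx)
{
    vmx->msr_area = calloc(MSR_LIST_SLOTS, sizeof(*vmx->msr_area));
    if ( !vmx->msr_area )
        return -ENOMEM;
    vmx->msr_count = 0;
    return 0;
}

/* The runtime insert can then only fail with -E2BIG. */
static int vmx_add_guest_msr_sketch(struct arch_vmx *vmx, uint32_t msr)
{
    if ( vmx->msr_count == MSR_LIST_SLOTS )
        return -E2BIG;
    vmx->msr_area[vmx->msr_count++].index = msr;
    return 0;
}
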

>>> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
>>> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
>>> @@ -156,7 +156,12 @@ struct arch_vmx_struct {
>>>      /* Are we emulating rather than VMENTERing? */
>>>      uint8_t              vmx_emulate;
>>>  
>>> -    uint8_t              lbr_fixup_enabled;
>>> +    /* Flags for LBR MSRs in the load/save lists. */
>>> +#define LBR_MSRS_INSERTED  (1u << 0)
>>> +#define LBR_FIXUP_TSX      (1u << 1)
>>> +#define LBR_FIXUP_BDF14    (1u << 2)
>>> +#define LBR_FIXUP_MASK     (LBR_FIXUP_TSX | LBR_FIXUP_BDF14)
>>> +    uint8_t              lbr_flags;
>> I'm not overly happy with these getting moved to a non-private header,
>> but I assume you need to use the new flag in vmcs.c in a later patch.
>> Let's hope no other LBR_-prefixed names appear elsewhere in the code.
> 
> No - no use in a later patch.  They are moved here so they are next to
> the data field they are used for.  The previous code having random
> defines remote from, and not obviously linked with, this data field is
> detrimental to code quality and clarity.

A comment at both sites providing the link would already help. Just as
I would always advocate for a struct to be fully declared in only the
single source file that uses it, I'd also prefer #define-s to have as
little visibility as possible. Anyway - that's something to be decided
by the VMX maintainers in this specific case.
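
For what it's worth, a sketch of how the new bits are presumably meant to be
consumed on the vmentry path (the dispatch below is an assumption based on
the existing lbr_fixup_enabled logic, not code from the actual patch):

/* Sketch only: flag values match the hunk above, handlers are stubs. */
#include <stdint.h>

#define LBR_MSRS_INSERTED  (1u << 0)
#define LBR_FIXUP_TSX      (1u << 1)
#define LBR_FIXUP_BDF14    (1u << 2)
#define LBR_FIXUP_MASK     (LBR_FIXUP_TSX | LBR_FIXUP_BDF14)

struct arch_vmx { uint8_t lbr_flags; };

static void lbr_tsx_fixup(struct arch_vmx *vmx)    { (void)vmx; /* ... */ }
static void lbr_bdf14_fixup(struct arch_vmx *vmx)  { (void)vmx; /* ... */ }

/* One cheap test of the mask per vmentry; the individual bits then
 * select which erratum workaround(s) to apply. */
static void lbr_fixup(struct arch_vmx *vmx)
{
    if ( !(vmx->lbr_flags & LBR_FIXUP_MASK) )
        return;
    if ( vmx->lbr_flags & LBR_FIXUP_TSX )
        lbr_tsx_fixup(vmx);
    if ( vmx->lbr_flags & LBR_FIXUP_BDF14 )
        lbr_bdf14_fixup(vmx);
}
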

Jan


