
Re: [Xen-devel] [PATCH] x86: synchronize PCI config space access decoding



>>> On 10.03.15 at 12:24, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 10/03/15 07:30, Jan Beulich wrote:
>>>>> On 09.03.15 at 19:49, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 09/03/15 16:08, Jan Beulich wrote:
>>>> Both PV and HVM logic have similar but not similar enough code here.
>>>> Synchronize the two so that
>>>> - in the HVM case we don't unconditionally try to access extended
>>>>   config space
>>>> - in the PV case we pass a correct range to the XSM hook
>>>> - in the PV case we don't needlessly deny access when the operation
>>>>   isn't really on PCI config space
>>>> All this along with sharing the macros HVM already had here.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -2383,11 +2383,6 @@ void hvm_vcpu_down(struct vcpu *v)
>>>>  static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>>                                                          ioreq_t *p)
>>>>  {
>>>> -#define CF8_BDF(cf8)     (((cf8) & 0x00ffff00) >> 8)
>>>> -#define CF8_ADDR_LO(cf8) ((cf8) & 0x000000fc)
>>>> -#define CF8_ADDR_HI(cf8) (((cf8) & 0x0f000000) >> 16)
>>>> -#define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
>>>> -
>>>>      struct hvm_ioreq_server *s;
>>>>      uint32_t cf8;
>>>>      uint8_t type;
>>>> @@ -2416,9 +2411,19 @@ static struct hvm_ioreq_server *hvm_sele
>>>>  
>>>>          type = IOREQ_TYPE_PCI_CONFIG;
>>>>          addr = ((uint64_t)sbdf << 32) |
>>>> -               CF8_ADDR_HI(cf8) |
>>>>                 CF8_ADDR_LO(cf8) |
>>>>                 (p->addr & 3);
>>>> +        /* AMD extended configuration space access? */
>>>> +        if ( CF8_ADDR_HI(cf8) &&
>>>> +             boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
>>>> +             boot_cpu_data.x86 >= 0x10 && boot_cpu_data.x86 <= 0x17 )
>>>> +        {
>>>> +            uint64_t msr_val;
>>>> +
>>>> +            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
>>>> +                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
>>>> +                addr |= CF8_ADDR_HI(cf8);
>>> This is another example of host state which leaks into guests across
>>> migrate, but in this case is also problematic at the host level.
>> Yes, but cross-vendor migration has (iirc) many more issues like this
>> (and considering the wide family range, the risk of this breaking for
>> migration between AMD systems seems marginal).
> 
> I wasn't even considering cross-vendor migration, but that is another
> concern.  I was more concerned with leaking BIOS-configured state into
> the guest.
> 
>>
>>> As far as the host goes, MSR_AMD64_NB_CFG is a per-node MSR, and Xen
>>> should verify that AMD64_NB_CFG_CF8_EXT_ENABLE_BIT is consistent
>>> across the system, or bits of emulate_privileged_op() are liable to
>>> execute differently depending on which pcpu a vcpu happens to be
>>> scheduled on.
>> I think this goes too far in mistrusting Dom0.
> 
> The only case where dom0 could plausibly set this up consistently even
> if it wanted to, is when it has a vcpu for each pcpu and is using
> dom0_vcpu_pin.  Either of these conditions is rare in practice.

Did you look at Linux? In order to avoid these preconditions, I
specifically made it try PCI config space accesses first a couple of
years ago.
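
For illustration only, a rough sketch of that kind of probe: enabling
the extended access through the northbridge's config space instead of
through the MSR, so it does not matter which pCPU the code runs on.
The function name below, and the assumption that NB_CFG bit 46 is
mirrored as bit 14 of D18F3x8C, are mine rather than anything taken
from the Linux code.

/*
 * Sketch: enable AMD's CF8 extended config cycles for one node via a
 * type-1 config access to the node's northbridge (bus 0, dev 0x18 +
 * node, func 3, reg 0x8c), rather than via MSR_AMD64_NB_CFG on a CPU
 * of that node.  Assumes kernel/hypervisor context with inl()/outl().
 */
static bool enable_cf8_ext_via_pci(unsigned int node)
{
    uint32_t cf8 = 0x80000000 | ((0x18 + node) << 11) | (3 << 8) | 0x8c;
    uint32_t val;

    outl(cf8, 0xcf8);
    val = inl(0xcfc);

    if ( val == ~0u )            /* northbridge function not present */
        return false;

    if ( !(val & (1u << 14)) )   /* assumed EnableCf8ExtCfg mirror bit */
    {
        outl(cf8, 0xcf8);
        outl(val | (1u << 14), 0xcfc);
    }

    return true;
}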

> I still think it is Xen which needs to set this up consistently on boot,
> at which point removing all the rdmsr_safe() calls from cf8 accesses is
> trivial.

Since Xen is not itself using this mechanism (maybe it should), it
seems wrong for it to configure it in any specific way (i.e. Dom0 may
also rely on it being off). Incidentally, I also don't think the BIOS
should enable this, at least not without being told to. And if it does,
then please do so consistently for the whole system.
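
For illustration only, a rough sketch of the kind of boot-time,
system-wide consistency check discussed above; the function names and
the on_each_cpu() plumbing are assumptions made for the sketch, not
existing Xen code.

/*
 * Sketch: sample the CF8 extended-enable bit of NB_CFG on every online
 * CPU at boot and warn if the settings disagree (the BIOS is expected
 * to program the per-node MSR identically across the system).
 */
static atomic_t cf8_ext_on, cf8_ext_off;

static void cf8_ext_sample(void *unused)
{
    uint64_t val;

    if ( rdmsr_safe(MSR_AMD64_NB_CFG, val) )
        return;

    if ( val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT) )
        atomic_inc(&cf8_ext_on);
    else
        atomic_inc(&cf8_ext_off);
}

static void __init cf8_ext_check_consistency(void)
{
    if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD ||
         boot_cpu_data.x86 < 0x10 || boot_cpu_data.x86 > 0x17 )
        return;

    on_each_cpu(cf8_ext_sample, NULL, 1);

    if ( atomic_read(&cf8_ext_on) && atomic_read(&cf8_ext_off) )
        printk(XENLOG_WARNING
               "CF8 extended config enable inconsistent across CPUs\n");
}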

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

