
Re: [Xen-devel] [PATCH] x86/HVM: don't give the wrong impression of WRMSR succeeding

>>> On 22.02.18 at 15:53, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 22/02/18 13:44, Jan Beulich wrote:
>> ... for unknown MSRs: wrmsr_hypervisor_regs()'s comment clearly says
>> that the function returns 0 for unrecognized MSRs, so
>> {svm,vmx}_msr_write_intercept() should not convert this into success.
>> At the time it went in, commit 013e34f5a6 ("x86: handle paged gfn in
>> wrmsr_hypervisor_regs") was probably okay, since prior to that the
>> return value wasn't checked at all. But that's not how we want things
>> to be handled nowadays.
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> I agree in principle, but this does have a large potential risk for
> guests.  Any unknown MSR which guests don't check for #GP faults from
> will now cause the guests to crash.
> That said, it is the correct direction to go long-term, and we've got to
> throw the switch some time, but I expect this will cause problems in the
> short term, especially for migrated-in guests.
> As for making this change, there is a better way of doing it, by
> moving the viridian and Xen MSR handling into the new guest_{rd,wr}msr()
> infrastructure, which means we won't call into both of these subsystems
> for every unknown MSR.

That would be harder to backport, and the original problem report
was on 4.7. I'll quote the offending Linux code for you:

static inline void cache_alloc_hsw_probe(void)
{
        struct rdt_resource *r  = &rdt_resources_all[RDT_RESOURCE_L3];
        u32 l, h, max_cbm = BIT_MASK(20) - 1;

        if (wrmsr_safe(IA32_L3_CBM_BASE, max_cbm, 0))
                return;
        rdmsr(IA32_L3_CBM_BASE, l, h);

That is, they assume that a successful WRMSR for this MSR
means the RDMSR is expected to also succeed. Quite
reasonable an expectation (leaving aside the fact that the
whole logic there looks rather clumsy), and one we can't meet
without biting the bullet and accepting the risk you describe.

Do you have any other suggestion on how to eliminate the
risk while still addressing the problem at hand?


Xen-devel mailing list