
Re: [Xen-devel] [PATCH 3/3 V3] XSA-60 security hole: cr0.cd handling

Jan Beulich wrote:
>>>> On 24.10.13 at 18:39, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>> Liu, Jinsong wrote:
>>> Maybe Jun's concern is 'the guest PAT (the real PAT in the VMCS,
>>> which takes effect, not the nominal guest_pat) should be identical
>>> among all physical processors which run vcpus of that guest' -- am I
>>> right, Jun? One thing I'm not sure about is that, per the Intel SDM
>>> (8.7.4 of volume 3), the PAT MSR settings must be the same for all
>>> processors in a system. However, Xen obviously doesn't satisfy this
>>> requirement: the PAT of the cpus running vmm context (0x50100070406)
>>> is not identical to the PAT of the cpus running guest context (take a
>>> rhel6.4 guest as an example: it's 0x7010600070106) -- yet in practice
>>> it works fine.
>> Or, would the PAT requirement under virtualization better be 'PAT MSR
>> settings must be the same for all processors of a domain (taking the
>> vmm as a special domain)'? Otherwise the IA32_PAT field of the VMCS is
>> pointless.
>> Anyway, we'd better change our patch from per-vcpu PAT emulation to
>> per-domain PAT emulation. Does it make sense, Jun?
> I don't think that'd be in line with what we currently do, or with how
> real hardware works. Unless inconsistencies between PAT settings
> can be leveraged to affect the hypervisor or other guests, we
> should allow the guest to have them inconsistent (as would be the
> natural thing transiently when switching to a new value on all CPUs).
> And if inconsistencies can have effects outside the VM, then afaict
> we have this issue already without this patch.

Agree, let's keep current per-vcpu PAT emulation.

> While mentally going through this logic again I noticed, however,
> that the cache flushing your patch is doing is still insufficient:
> Doing this just when CD gets set and in the context switch path is not
> enough. This needs to be done prior to each VM entry, unless it
> can be proven that the hypervisor (or the service domain) didn't
> touch guest memory.

I think it's safe: we only need to guarantee that no vcpu runs guest context
during the small window between the cache flush and the TLB invalidation.
After that it doesn't matter whether the hypervisor touches guest memory or
not: the cache has been flushed and the old memory type has been invalidated
from the TLB (accesses are UC afterwards), so no cache line can be polluted
by guest context any more.


Xen-devel mailing list