Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
On 20/05/2020 08:48, Jan Beulich wrote:
> On 19.05.2020 20:00, Andrew Cooper wrote:
>> On 19/05/2020 17:09, Jan Beulich wrote:
>>> In any event there would be 12 bits to reclaim from the up
>>> pointer - it being a physical address, there'll not be more
>>> than 52 significant bits.
>> Right, but for L1TF safety, the address bits in the PTE must not be
>> cacheable.
> So if I understand this right, your response was only indirectly
> related to what I said: You mean that no matter whether we find
> a way to store full-width GFNs, SH_L1E_MMIO_MAGIC can't have
> arbitrarily many set bits dropped.

Yes.

> On L1TF vulnerable hardware,
> that is (i.e. in principle the constant could become a variable
> to be determined at boot).

The only thing which can usefully be done at runtime is to disable the
fastpath.  If cacheable memory overlaps with the used address bits, there
are no safe values to use.

>> Currently, on fully populated multi-socket servers, the MMIO fastpath
>> relies on the top 4G of address space not being cacheable, which is the
>> safest we can reasonably manage.  Extending this by a nibble takes us to
>> 16G which is not meaningfully less safe.
> That's 64G (36 address bits), isn't it?

Yes it is.  I can't count.

> Looking at
> l1tf_calculations(), I'd be worried in particular Penryn /
> Dunnington might not support more than 36 address bits (I don't
> think I have anywhere to check).  Even if it was 38, 39, or 40
> bits, 64G becomes a not insignificant part of the overall 256G /
> 512G / 1T address space.  Then again the top quarter assumption
> in l1tf_calculations() would still be met in this latter case.

I'm honestly not too worried.  Intel has ceased supporting anything older
than SandyBridge, and there are other unfixed speculative security issues.
Anyone using these processors has bigger problems.

~Andrew