Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
On 19/05/2020 09:34, Jan Beulich wrote:
> On 18.05.2020 17:38, Andrew Cooper wrote:
>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>          return;
>>  
>> +    /*
>> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit
>> +     * PV guests to write any.  Such entries would be vulnerable to the
>> +     * L1TF sidechannel.
>> +     *
>> +     * The only logic which intentionally sets reserved bits is the shadow
>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path,
>> +     * rather than here.
> What about SH_L1E_MAGIC and sh_l1e_gnp()? The latter gets used by
> _sh_propagate() without visible restriction to HVM.

SH_L1E_MAGIC looks to be redundant with SH_L1E_MMIO_MAGIC.  sh_l1e_mmio()
is the only path which ever creates an entry like that.

sh_l1e_gnp() is a very well hidden use of reserved bits, but surely can't
be used for PV guests, as there doesn't appear to be anything to turn the
resulting fault back into a plain not-present.

> And of course every time I look at this code I wonder how we can
> get away with (quoting a comment) "We store 28 bits of GFN in
> bits 4:32 of the entry."  Do we have a hidden restriction
> somewhere guaranteeing that guests won't have (emulated MMIO)
> GFNs above 1Tb when run in shadow mode?

I've raised that several times before.  It's broken.

Given that shadow frames are limited to 44 bits anyway (and not yet
levelled safely in the migration stream), my suggestion for fixing this
was just to use one extra nibble for the extra 4 bits and call it done.

~Andrew
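
[Editorial illustration: a minimal, self-contained sketch of the encoding
being discussed.  The names, constants and field layout below are
assumptions for illustration only, not Xen's actual SH_L1E_MMIO_*
definitions; it just shows why a 28-bit GFN field caps out at the 1TiB
boundary, and how widening it by one nibble covers the 44-bit limit.]

/*
 * Illustrative sketch only -- NOT Xen's real definitions.  A "magic"
 * not-present entry keeps reserved upper bits set (so any access faults
 * with Rsvd) and carries a guest MMIO GFN in the low bits, starting at
 * bit 4.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_GFN_SHIFT      4

/* Current scheme: 28 bits of GFN from bit 4; bits 63:32 form the magic. */
#define DEMO_MAGIC_28       0xffffffff00000000ULL
#define DEMO_GFN_BITS_28    28

/* Suggested fix: one more nibble for the GFN (32 bits), which with 4KiB
 * pages covers the 44-bit addresses shadow frames are limited to. */
#define DEMO_MAGIC_32       0xfffffff000000000ULL
#define DEMO_GFN_BITS_32    32

/* Returns the encoded entry, or 0 if the GFN doesn't fit the encoding. */
static uint64_t demo_encode(uint64_t gfn, uint64_t magic, unsigned int bits)
{
    if ( gfn >= (1ULL << bits) )
        return 0;

    return magic | (gfn << DEMO_GFN_SHIFT);
}

int main(void)
{
    uint64_t gfn = 1ULL << 28;   /* an emulated-MMIO GFN at the 1TiB boundary */

    printf("28-bit GFN field: %#" PRIx64 "\n",
           demo_encode(gfn, DEMO_MAGIC_28, DEMO_GFN_BITS_28));
    printf("32-bit GFN field: %#" PRIx64 "\n",
           demo_encode(gfn, DEMO_MAGIC_32, DEMO_GFN_BITS_32));

    return 0;
}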