[PATCH] x86/traps: Rework #PF[Rsvd] bit handling
The reserved_bit_page_fault() paths effectively turn reserved bit faults
into a warning, but in the light of L1TF, the real impact is far more
serious.

Xen does not have any reserved bits set in its pagetables, nor do we permit
PV guests to write any.  An HVM shadow guest may have reserved bits via the
MMIO fastpath, but those faults are handled in the VMExit #PF intercept,
rather than Xen's #PF handler.

There is no need to disable interrupts (in spurious_page_fault()) for
__page_fault_type() to look at the rsvd bit, nor should extable fixup be
tolerated.  Make #PF[Rsvd] a hard error, irrespective of mode.  Any new
panic() caused by this constitutes an L1TF gadget needing fixing.

Additionally, drop the comment for do_page_fault().  It is inaccurate (bit
0 being set isn't always a protection violation) and stale (missing bits
5,6,15,31).

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wl@xxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/traps.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 73c6218660..4f8e3c7a32 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1137,15 +1137,6 @@ void do_int3(struct cpu_user_regs *regs)
         pv_inject_hw_exception(TRAP_int3, X86_EVENT_NO_EC);
 }
 
-static void reserved_bit_page_fault(unsigned long addr,
-                                    struct cpu_user_regs *regs)
-{
-    printk("%pv: reserved bit in page table (ec=%04X)\n",
-           current, regs->error_code);
-    show_page_walk(addr);
-    show_execution_state(regs);
-}
-
 #ifdef CONFIG_PV
 static int handle_ldt_mapping_fault(unsigned int offset,
                                     struct cpu_user_regs *regs)
@@ -1248,10 +1239,6 @@ static enum pf_type __page_fault_type(unsigned long addr,
     if ( in_irq() )
         return real_fault;
 
-    /* Reserved bit violations are never spurious faults. */
-    if ( error_code & PFEC_reserved_bit )
-        return real_fault;
-
     required_flags = _PAGE_PRESENT;
     if ( error_code & PFEC_write_access )
         required_flags |= _PAGE_RW;
@@ -1413,14 +1400,6 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     return 0;
 }
 
-/*
- * #PF error code:
- *      Bit 0: Protection violation (=1) ; Page not present (=0)
- *      Bit 1: Write access
- *      Bit 2: User mode (=1) ; Supervisor mode (=0)
- *      Bit 3: Reserved bit violation
- *      Bit 4: Instruction fetch
- */
 void do_page_fault(struct cpu_user_regs *regs)
 {
     unsigned long addr, fixup;
@@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
     if ( unlikely(fixup_page_fault(addr, regs) != 0) )
         return;
 
+    /*
+     * Xen does not have reserved bits in its pagetables, nor do we permit PV
+     * guests to write any.  Such entries would be vulnerable to the L1TF
+     * sidechannel.
+     *
+     * The only logic which intentionally sets reserved bits is the shadow
+     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
+     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
+     * than here.
+     */
+    if ( error_code & PFEC_reserved_bit )
+        goto fatal;
+
     if ( unlikely(!guest_mode(regs)) )
     {
         enum pf_type pf_type = spurious_page_fault(addr, regs);
@@ -1457,13 +1448,12 @@ void do_page_fault(struct cpu_user_regs *regs)
         if ( likely((fixup = search_exception_table(regs)) != 0) )
         {
             perfc_incr(copy_user_faults);
-            if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-                reserved_bit_page_fault(addr, regs);
             this_cpu(last_extable_addr) = regs->rip;
             regs->rip = fixup;
             return;
         }
 
+    fatal:
         if ( debugger_trap_fatal(TRAP_page_fault, regs) )
             return;
@@ -1475,9 +1465,6 @@
               error_code, _p(addr));
     }
 
-    if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-        reserved_bit_page_fault(addr, regs);
-
     pv_inject_page_fault(regs->error_code, addr);
 }
-- 
2.11.0