Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
On 19/05/2020 09:14, Jan Beulich wrote:
> On 18.05.2020 17:38, Andrew Cooper wrote:
>> The reserved_bit_page_fault() paths effectively turn reserved bit faults
>> into a warning, but in the light of L1TF, the real impact is far more
>> serious.
>>
>> Xen does not have any reserved bits set in its pagetables, nor do we permit
>> PV guests to write any.  An HVM shadow guest may have reserved bits via the
>> MMIO fastpath, but those faults are handled in the VMExit #PF intercept,
>> rather than Xen's #PF handler.
>>
>> There is no need to disable interrupts (in spurious_page_fault()) for
>> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
>> tolerated.
>
> I'm afraid I don't understand the connection of the first half of this
> to the patch - you don't alter spurious_page_fault() in this regard (at
> all, actually).

The interrupt disabling is in spurious_page_fault(), but the point is that
there is no need to enter this logic at all for a reserved-bit page fault.

>
> As to extable fixup, I'm not sure: If a reserved bit ends up slipping
> into the non-Xen parts of the page tables, and if guest accessors then
> become able to trip a corresponding #PF, the bug will need an XSA with
> the proposed change, while - afaict - it won't if the exception gets
> recovered from.  (There may then still be a log spam issue, I admit.)

We need to issue an XSA anyway, because such a construct would be an L1TF
gadget.  What this change does is make it substantially more obvious, and
turn an information leak into a DoS.

>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>          return;
>>
>> +    /*
>> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit
>> +     * PV guests to write any.  Such entries would be vulnerable to the
>> +     * L1TF sidechannel.
>> +     *
>> +     * The only logic which intentionally sets reserved bits is the shadow
>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path,
>> +     * rather than here.
>> +     */
>> +    if ( error_code & PFEC_reserved_bit )
>> +        goto fatal;
>
> Judging from the description, wouldn't this then better go even further
> up, ahead of the fixup_page_fault() invocation?  In fact the function
> has two PFEC_reserved_bit checks to _avoid_ taking action, which look
> like they could then be dropped.

Only for certain Xen-only fixups.  The path into paging_fault() is not
guarded.  Depending on whether GNP is actually used for PV guests, this is
where it would be fixed up.

~Andrew
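
For readers following the placement argument above, below is a minimal,
self-contained C sketch of the control flow being discussed.  The names
fixup_page_fault(), PFEC_reserved_bit and the "fatal" outcome follow the
quoted hunk; the stub bodies, do_page_fault_model() and the standalone
main() are hypothetical simplifications for illustration, not Xen's real
implementation.

/* Hypothetical, simplified model of the ordering discussed -- not Xen source. */
#include <stdio.h>
#include <stdbool.h>

#define PFEC_reserved_bit (1u << 3)    /* RSVD bit in the #PF error code */

/* Stub standing in for Xen's fixup_page_fault(); always declines here. */
static bool fixup_page_fault(unsigned long addr, unsigned int error_code)
{
    (void)addr;
    (void)error_code;
    return false;
}

/* Model of do_page_fault() showing where the new check sits. */
static void do_page_fault_model(unsigned long addr, unsigned int error_code)
{
    /* Xen-only fixups (and the unguarded path into paging_fault()) come first. */
    if ( fixup_page_fault(addr, error_code) )
        return;

    /*
     * Reserved-bit faults are never legitimate for Xen's own pagetables or
     * for PV guest mappings, so treat them as fatal instead of letting the
     * spurious-fault / extable logic quietly recover.
     */
    if ( error_code & PFEC_reserved_bit )
    {
        printf("FATAL #PF: addr=%#lx, ec=%#x (reserved bit set)\n",
               addr, error_code);
        return;
    }

    /* ... remaining guest/Xen fault handling elided ... */
}

int main(void)
{
    do_page_fault_model(0xdead000UL, PFEC_reserved_bit);
    return 0;
}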