[PATCH v2 2/3] x86/shadow: mark more of sh_page_fault() HVM-only
The types p2m_is_readonly() checks for aren't applicable to PV; specifically,
get_gfn() won't ever return any such type for PV domains. Extend the
HVM-conditional block of code, also past the subsequent HVM-only if(). This
way the "emulate_readonly" label also becomes unreachable when !HVM, so move
the conditional there upwards as well. Noticing the earlier
shadow_mode_refcounts() check, move it up even further, right after that
check. With that, the "done" label also needs marking as potentially unused.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Parts split off to a subsequent patch.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2613,8 +2613,6 @@ static int cf_check sh_page_fault(
                ? EXCRET_fault_fixed : 0;
     }
 
-#endif /* CONFIG_HVM */
-
     /* Ignore attempts to write to read-only memory. */
     if ( p2m_is_readonly(p2mt) && (ft == ft_demand_write) )
         goto emulate_readonly; /* skip over the instruction */
@@ -2633,12 +2631,14 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#endif /* CONFIG_HVM */
+
     perfc_incr(shadow_fault_fixed);
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
     trace_shadow_fixup(gw.l1e, va);
- done:
+ done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
     shadow_audit_tables(v);
@@ -2650,6 +2650,7 @@ static int cf_check sh_page_fault(
     if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
         goto not_a_shadow_fault;
 
+#ifdef CONFIG_HVM
     /*
      * We do not emulate user writes. Instead we use them as a hint that the
      * page is no longer a page table. This behaviour differs from native, but
@@ -2677,7 +2678,6 @@ static int cf_check sh_page_fault(
         goto not_a_shadow_fault;
     }
 
-#ifdef CONFIG_HVM
     /* Unshadow if we are writing to a toplevel pagetable that is
      * flagged as a dying process, and that is not currently used. */
     if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&
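As a side note for readers unfamiliar with the pattern: once the only goto
targeting a label sits inside an #ifdef block, !HVM builds would warn about
the now-unreferenced label under -Wunused-label. Below is a minimal
standalone sketch (not Xen code; CONFIG_HVM, handle_fault(), and the
__maybe_unused fallback definition are illustrative assumptions) showing why
the label attribute is needed:

```c
/* Minimal sketch of the pattern the patch applies: the sole "goto done"
 * is compiled out when CONFIG_HVM is undefined, leaving the label
 * unreferenced; annotating it suppresses -Wunused-label in such builds.
 * This is illustrative code, not taken from xen/arch/x86/mm/shadow. */

#ifndef __maybe_unused
#define __maybe_unused __attribute__((__unused__))
#endif

static int handle_fault(int hvm_write)
{
#ifdef CONFIG_HVM
    if ( hvm_write )
        goto done;              /* only reachable in HVM builds */
#else
    (void)hvm_write;            /* parameter unused in !HVM builds */
#endif

    /* common fixup path would run here ... */

 done: __maybe_unused;          /* falls through; label may be unused */
    return 0;
}
```

GCC accepts an attribute specifier after a label's colon, which is what
makes the `done: __maybe_unused;` form in the actual patch legal; the
trailing semicolon is a null statement the label attaches to.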