[PATCH v2 1/9] x86/shadow: replace sh_reset_l3_up_pointers()
Rather than doing a separate hash walk (and then even using the vCPU
variant, which is to go away), do the up-pointer clearing right in
sh_unpin(), as an alternative to the (now further limited) enlisting on
a "free floating" list fragment. This utilizes the fact that such list
fragments are traversed only for multi-page shadows (in shadow_free()).
Furthermore sh_terminate_list() is only a safeguard anyway, which isn't
needed in the common case (it actually does anything only for BIGMEM
configurations).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -116,6 +116,9 @@ struct shadow_domain {
     /* OOS */
     bool_t oos_active;
 
+    /* Domain is in the process of leaving SHOPT_LINUX_L3_TOPLEVEL mode. */
+    bool unpinning_l3;
+
 #ifdef CONFIG_HVM
     /* Has this domain ever used HVMOP_pagetable_dying? */
     bool_t pagetable_dying_op;
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2302,29 +2302,6 @@ void shadow_prepare_page_type_change(str
 
 /**************************************************************************/
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-static int cf_check sh_clear_up_pointer(
-    struct vcpu *v, mfn_t smfn, mfn_t unused)
-{
-    mfn_to_page(smfn)->up = 0;
-    return 0;
-}
-
-void sh_reset_l3_up_pointers(struct vcpu *v)
-{
-    static const hash_vcpu_callback_t callbacks[SH_type_unused] = {
-        [SH_type_l3_64_shadow] = sh_clear_up_pointer,
-    };
-
-    HASH_CALLBACKS_CHECK(SHF_L3_64);
-    hash_vcpu_foreach(v, SHF_L3_64, callbacks, INVALID_MFN);
-}
-
-
-/**************************************************************************/
-
 static void sh_update_paging_modes(struct vcpu *v)
 {
     struct domain *d = v->domain;
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -960,6 +960,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
         }
         if ( l4count > 2 * d->max_vcpus )
         {
+            d->arch.paging.shadow.unpinning_l3 = true;
+
             /* Unpin all the pinned l3 tables, and don't pin any more. */
             page_list_for_each_safe(sp, t, &d->arch.paging.shadow.pinned_shadows)
             {
@@ -967,7 +969,8 @@ sh_make_shadow(struct vcpu *v, mfn_t gmf
                     sh_unpin(d, page_to_mfn(sp));
             }
             d->arch.paging.shadow.opt_flags &= ~SHOPT_LINUX_L3_TOPLEVEL;
-            sh_reset_l3_up_pointers(v);
+
+            d->arch.paging.shadow.unpinning_l3 = false;
         }
     }
 #endif
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -497,11 +497,6 @@ void shadow_blow_tables(struct domain *d
  */
 int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn);
 
-/* Reset the up-pointers of every L3 shadow to 0.
- * This is called when l3 shadows stop being pinnable, to clear out all
- * the list-head bits so the up-pointer field is properly inititalised. */
-void sh_reset_l3_up_pointers(struct vcpu *v);
-
 /******************************************************************************
  * Flags used in the return value of the shadow_set_lXe() functions...
  */
@@ -721,7 +716,7 @@ static inline void sh_unpin(struct domai
 {
     struct page_list_head tmp_list, *pin_list;
     struct page_info *sp, *next;
-    unsigned int i, head_type;
+    unsigned int i, head_type, sz;
 
     ASSERT(mfn_valid(smfn));
     sp = mfn_to_page(smfn);
@@ -733,20 +728,30 @@ static inline void sh_unpin(struct domai
         return;
     sp->u.sh.pinned = 0;
 
-    /* Cut the sub-list out of the list of pinned shadows,
-     * stitching it back into a list fragment of its own. */
+    sz = shadow_size(head_type);
+
+    /*
+     * Cut the sub-list out of the list of pinned shadows, stitching
+     * multi-page shadows back into a list fragment of their own.
+     */
    pin_list = &d->arch.paging.shadow.pinned_shadows;
     INIT_PAGE_LIST_HEAD(&tmp_list);
-    for ( i = 0; i < shadow_size(head_type); i++ )
+    for ( i = 0; i < sz; i++ )
     {
         ASSERT(sp->u.sh.type == head_type);
         ASSERT(!i || !sp->u.sh.head);
         next = page_list_next(sp, pin_list);
         page_list_del(sp, pin_list);
-        page_list_add_tail(sp, &tmp_list);
+        if ( sz > 1 )
+            page_list_add_tail(sp, &tmp_list);
+        else if ( head_type == SH_type_l3_64_shadow &&
+                  d->arch.paging.shadow.unpinning_l3 )
+            sp->up = 0;
         sp = next;
     }
-    sh_terminate_list(&tmp_list);
+
+    if ( sz > 1 )
+        sh_terminate_list(&tmp_list);
 
     sh_put_ref(d, smfn, 0);
 }
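
For readers less steeped in the shadow code, here is a minimal,
standalone C sketch (not Xen code; all types and names below are
simplified stand-ins for the real page_info / page_list machinery) of
the control flow sh_unpin() gains: multi-page shadows still get
stitched into a free-floating fragment of their own, while a
single-page L3 shadow has its up-pointer cleared directly whenever the
domain is in the middle of leaving SHOPT_LINUX_L3_TOPLEVEL mode, which
is what replaces the sh_reset_l3_up_pointers() hash walk.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct page_info's shadow bookkeeping. */
struct shadow {
    unsigned long up;   /* up-pointer / list-head bits */
    unsigned int size;  /* pages making up this shadow (shadow_size()) */
    bool is_l3;         /* stand-in for SH_type_l3_64_shadow */
};

/* Stand-in for d->arch.paging.shadow.unpinning_l3. */
static bool unpinning_l3;

static void unpin_one(struct shadow *sp)
{
    if ( sp->size > 1 )
        /* Multi-page shadow: still goes onto a fragment list of its own. */
        printf("stitch %u-page shadow into free-floating fragment\n",
               sp->size);
    else if ( sp->is_l3 && unpinning_l3 )
        /* Single-page L3: clear the up-pointer right here, replacing
         * the old separate sh_reset_l3_up_pointers() hash walk. */
        sp->up = 0;
}

int main(void)
{
    struct shadow l3 = { .up = 0xdead, .size = 1, .is_l3 = true };

    unpinning_l3 = true;   /* set around the unpin loop in sh_make_shadow() */
    unpin_one(&l3);
    unpinning_l3 = false;

    printf("up = %#lx\n", l3.up);  /* prints "up = 0" */
    return 0;
}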