Re: [Xen-devel] open-coded page list manipulation in shadow code
>>> On 29.01.15 at 18:30, <tim@xxxxxxx> wrote:
> OK, here's what I have.  It keeps the mechanism the same, threading on
> the main page list entry, but tidies it up to use the page_list
> operations rather than open-coding.  I've tested it (lightly - basically
> just boot testing) with 32bit, 32pae and 64bit Windows VMs (all SMP),
> with bigmem=y and bigmem=n.
>
> It was developed on top of your bigmem series, but it should apply OK
> before patch 3/4, removing the need to nobble shadow-paging in that
> patch.

Thanks, looks quite okay, just a couple of remarks.

> @@ -1525,6 +1495,14 @@ mfn_t shadow_alloc(struct domain *d,
>          if ( shadow_type >= SH_type_min_shadow
>               && shadow_type <= SH_type_max_shadow )
>              sp->u.sh.head = 1;
> +
> +#ifndef PAGE_LIST_NULL
> +    /* The temporary list-head is on our stack.  Blank out the
> +     * pointers to it in the shadows, just to get a clean failure if
> +     * we accidentally follow them. */
> +    tmp_list.next->prev = tmp_list.prev->next = NULL;
> +#endif

Perhaps better to use LIST_POISON{1,2} here, so we don't point into
PV VA space?

> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -318,6 +318,33 @@ static inline int mfn_oos_may_write(mfn_t gmfn)
>  }
>  #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
>
> +/* Figure out the size (in pages) of a given shadow type */
> +static inline u32
> +shadow_size(unsigned int shadow_type)
> +{
> +    static const u32 type_to_size[SH_type_unused] = {
> +        1, /* SH_type_none */
> +        2, /* SH_type_l1_32_shadow */
> +        2, /* SH_type_fl1_32_shadow */
> +        4, /* SH_type_l2_32_shadow */
> +        1, /* SH_type_l1_pae_shadow */
> +        1, /* SH_type_fl1_pae_shadow */
> +        1, /* SH_type_l2_pae_shadow */
> +        1, /* SH_type_l2h_pae_shadow */
> +        1, /* SH_type_l1_64_shadow */
> +        1, /* SH_type_fl1_64_shadow */
> +        1, /* SH_type_l2_64_shadow */
> +        1, /* SH_type_l2h_64_shadow */
> +        1, /* SH_type_l3_64_shadow */
> +        1, /* SH_type_l4_64_shadow */
> +        1, /* SH_type_p2m_table */
> +        1, /* SH_type_monitor_table */
> +        1  /* SH_type_oos_snapshot */
> +    };
> +    ASSERT(shadow_type < SH_type_unused);
> +    return type_to_size[shadow_type];
> +}

Isn't this going to lead to multiple instances of that static array?
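One way out (just a sketch - the external name sh_type_to_size and the
choice of common.c as the definition site are invented for illustration)
would be to declare the table extern in private.h and define it exactly
once in a .c file:

    /* private.h - declaration only, shared by every includer. */
    extern const u32 sh_type_to_size[SH_type_unused];

    static inline u32
    shadow_size(unsigned int shadow_type)
    {
        ASSERT(shadow_type < SH_type_unused);
        return sh_type_to_size[shadow_type];
    }

    /* common.c - the one and only instance of the data. */
    const u32 sh_type_to_size[SH_type_unused] = {
        1, /* SH_type_none */
        2, /* SH_type_l1_32_shadow */
        2, /* SH_type_fl1_32_shadow */
        4, /* SH_type_l2_32_shadow */
        /* ... remaining entries as in the patch above ... */
        1  /* SH_type_oos_snapshot */
    };

That keeps shadow_size() inline-able while leaving only one copy of the
array in the final binary.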
> @@ -668,46 +697,40 @@ static inline int sh_pin(struct vcpu *v, mfn_t smfn)
>   * of pinned shadows, and release the extra ref. */
>  static inline void sh_unpin(struct vcpu *v, mfn_t smfn)
>  {
> -    struct page_list_head h, *pin_list;
> -    struct page_info *sp;
> -
> +    struct page_list_head tmp_list, *pin_list;
> +    struct page_info *sp, *next;
> +    unsigned int i, head_type;
> +
>      ASSERT(mfn_valid(smfn));
>      sp = mfn_to_page(smfn);
> +    head_type = sp->u.sh.type;
>      ASSERT(sh_type_is_pinnable(v, sp->u.sh.type));
>      ASSERT(sp->u.sh.head);
>
> -    /* Treat the up-to-four pages of the shadow as a unit in the list ops */
> -    h.next = h.tail = sp;
> -    if ( sp->u.sh.type == SH_type_l2_32_shadow )
> -    {
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        ASSERT(h.tail->u.sh.type == SH_type_l2_32_shadow);
> -    }
> -    pin_list = &v->domain->arch.paging.shadow.pinned_shadows;
> -
>      if ( !sp->u.sh.pinned )
>          return;
> -
>      sp->u.sh.pinned = 0;
>
> -    /* Cut the sub-list out of the list of pinned shadows */
> -    if ( pin_list->next == h.next && pin_list->tail == h.tail )
> -        pin_list->next = pin_list->tail = NULL;
> -    else
> +    /* Cut the sub-list out of the list of pinned shadows,
> +     * stitching it back into a list fragment of its own. */
> +    pin_list = &v->domain->arch.paging.shadow.pinned_shadows;
> +    INIT_PAGE_LIST_HEAD(&tmp_list);
> +    for ( i = 0; i < shadow_size(head_type); i++ )
>      {
> -        if ( pin_list->next == h.next )
> -            pin_list->next = page_list_next(h.tail, pin_list);
> -        else
> -            page_list_prev(h.next, pin_list)->list.next = h.tail->list.next;
> -        if ( pin_list->tail == h.tail )
> -            pin_list->tail = page_list_prev(h.next, pin_list);
> -        else
> -            page_list_next(h.tail, pin_list)->list.prev = h.next->list.prev;
> +        ASSERT(sp->u.sh.type == head_type);
> +        ASSERT(!i || !sp->u.sh.head);
> +        next = page_list_next(sp, pin_list);
> +        page_list_del(sp, pin_list);
> +        page_list_add_tail(sp, &tmp_list);
> +        sp = next;
>      }
> -    h.tail->list.next = h.next->list.prev = PAGE_LIST_NULL;
> -
> +#ifndef PAGE_LIST_NULL
> +    /* The temporary list-head is on our stack.  Blank out the
> +     * pointers to it in the shadows, just to get a clean failure if
> +     * we accidentally follow them. */
> +    tmp_list.next->prev = tmp_list.prev->next = NULL;
> +#endif

Same here.  Perhaps worth putting into another inline helper?
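Something along these lines, perhaps (a sketch only - the helper name is
invented; LIST_POISON{1,2} are the poison values from
xen/include/xen/list.h):

    /* Poison the dangling pointers left behind in the shadows once a
     * stack-based temporary list head goes out of scope, so that
     * accidentally following them faults cleanly instead of landing in
     * PV VA space.  With PAGE_LIST_NULL defined the entries are
     * terminated by PAGE_LIST_NULL values rather than by links back to
     * the head, so there is nothing to do in that case. */
    static inline void
    page_list_poison_stale_head(struct page_list_head *list)
    {
    #ifndef PAGE_LIST_NULL
        list->next->prev = LIST_POISON1;
        list->prev->next = LIST_POISON2;
    #endif
    }

Both the shadow_alloc() and sh_unpin() instances would then collapse to
a single page_list_poison_stale_head(&tmp_list) call.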
Jan