At 00:33 -0400 on 27 Oct (1319675629), Andres Lagar-Cavilla wrote:
> The PoD layer has a fragile locking discipline. It relies on the
> p2m being globally locked, and it also relies on the page alloc
> lock to protect some of its data structures. Replace this all by an
> explicit pod lock: per p2m, order enforced.
>
> Two consequences:
> - Critical sections in the pod code protected by the page alloc
> lock are now reduced to modifications of the domain page list.
> - When the p2m lock becomes fine-grained, there are no
> assumptions broken in the PoD layer.
>
> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
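For my own understanding, I take the intended nesting to be something
like this (a hypothetical fragment of my own, not code from the patch;
the bookkeeping line is illustrative):

    /* Sketch: PoD state sits under the new pod lock; the page_alloc
     * lock shrinks to cover only the domain page-list surgery. */
    static void sketch_reclaim(struct p2m_domain *p2m,
                               struct domain *d, struct page_info *pg)
    {
        pod_lock(p2m);                     /* PoD counters/lists */
        p2m->pod.count--;                  /* illustrative bookkeeping */
        spin_lock(&d->page_alloc_lock);
        page_list_del(pg, &d->page_list);  /* just the list op */
        spin_unlock(&d->page_alloc_lock);
        pod_unlock(p2m);
    }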
The bulk of this looks OK to me, but will definitely need an Ack from
George Dunlap as well. Two comments:
> diff -r 332775f72a30 -r 981073d78f7f xen/arch/x86/mm/mm-locks.h
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -155,6 +155,15 @@ declare_mm_lock(p2m)
> #define p2m_unlock(p) mm_unlock(&(p)->lock)
> #define p2m_locked_by_me(p) mm_locked_by_me(&(p)->lock)
>
> +/* PoD lock (per-p2m-table)
> + *
> + * Protects private PoD data structs. */
> +
> +declare_mm_lock(pod)
> +#define pod_lock(p) mm_lock(pod, &(p)->pod.lock)
> +#define pod_unlock(p) mm_unlock(&(p)->pod.lock)
> +#define pod_locked_by_me(p) mm_locked_by_me(&(p)->pod.lock)
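If I've understood mm-locks.h right, the ordering is positional: each
lock's level comes from where it's declared in mm-locks.h, so 'pod'
nests inside 'p2m' simply by being declared after it. A simplified
sketch of the idea - illustrative names and details, not the real code:

    /* Sketch of order-enforced locking, loosely modelled on
     * mm-locks.h.  Each lock carries a level (e.g. its declaration
     * line); a per-CPU high-water mark catches inversions. */
    static DEFINE_PER_CPU(int, sketch_lock_level);

    static inline void sketch_mm_lock(spinlock_t *l, int level)
    {
        /* Taking a lock that sorts before one we already hold
         * would invert the order: fail loudly. */
        BUG_ON(this_cpu(sketch_lock_level) > level);
        spin_lock(l);
        this_cpu(sketch_lock_level) = level;
    }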
Can the explanatory comment be more explicit about what it covers? Is it
everything in the pod struct, or just the page-lists that were mentioned
in the comment you removed from p2m.h?
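Something along these lines would be clearer, assuming I've guessed the
coverage right (wording illustrative only):

    /* PoD lock (per-p2m-table)
     *
     * Protects all the private PoD state: the single/super page
     * lists, the entry and cache counts, and the sweep bookkeeping.
     * Nests inside the p2m lock; taken before the domain's
     * page_alloc lock. */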
> @@ -841,7 +854,6 @@ p2m_pod_zero_check(struct p2m_domain *p2
> if( *(map[i]+j) != 0 )
> break;
>
> - unmap_domain_page(map[i]);
>
> /* See comment in p2m_pod_zero_check_superpage() re gnttab
> * check timing. */
> @@ -849,8 +861,15 @@ p2m_pod_zero_check(struct p2m_domain *p2
> {
> set_p2m_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
> types[i], p2m->default_access);
> + unmap_domain_page(map[i]);
> + map[i] = NULL;
> }
> - else
> + }
> +
> + /* Finally, add to cache */
> + for ( i=0; i < count; i++ )
> + {
> + if ( map[i] )
> {
> if ( tb_init_done )
> {
> @@ -867,6 +886,8 @@ p2m_pod_zero_check(struct p2m_domain *p2
> __trace_var(TRC_MEM_POD_ZERO_RECLAIM, 0, sizeof(t), &t);
> }
>
> + unmap_domain_page(map[i]);
> +
> /* Add to cache, and account for the new p2m PoD entry */
> p2m_pod_cache_add(p2m, mfn_to_page(mfns[i]), PAGE_ORDER_4K);
> p2m->pod.entry_count++;
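If I'm reading the new shape right, the function now makes two passes
over the batch - roughly this (my paraphrase, heavily elided; nonzero[i]
is just my shorthand for "the scan earlier in the function found a
non-zero word in map[i]"):

    for ( i = 0; i < count; i++ )
        if ( nonzero[i] )                    /* pass 1: live pages */
        {
            set_p2m_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
                          types[i], p2m->default_access);
            unmap_domain_page(map[i]);
            map[i] = NULL;                   /* exclude from pass 2 */
        }

    for ( i = 0; i < count; i++ )
        if ( map[i] )                        /* pass 2: reclaim zeros */
        {
            unmap_domain_page(map[i]);
            p2m_pod_cache_add(p2m, mfn_to_page(mfns[i]), PAGE_ORDER_4K);
            p2m->pod.entry_count++;
        }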
That seems to reshuffle the running order of this function, but I
don't see how it's related to locking. Is this an unrelated change
that snuck in?
(Oh, a third thing just occurred to me - it might be worth turning some
of those 'lock foo held on entry' comments into ASSERT(lock_held_by_me())
calls, as sketched below.)
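Something like this, using the pod_locked_by_me() wrapper added above
(the function and its body are illustrative stand-ins):

    /* Instead of a comment saying "pod lock held on entry": */
    static int pod_entry_count(struct p2m_domain *p2m)
    {
        ASSERT(pod_locked_by_me(p2m));   /* checked in debug builds */
        return p2m->pod.entry_count;
    }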
Cheers,
Tim.