Re: [Xen-devel] [PATCHv6 5/5] gnttab: use per-VCPU maptrack free lists
>>> On 22.04.15 at 18:00, <david.vrabel@xxxxxxxxxx> wrote:
> static inline int
> get_maptrack_handle(
>     struct grant_table *lgt)
> {
> +    struct vcpu *v = current;
>     int i;
>     grant_handle_t handle;
>     struct grant_mapping *new_mt;
>     unsigned int new_mt_limit, nr_frames;
>
> -    spin_lock(&lgt->maptrack_lock);
> -
> -    while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
> -    {
> -        nr_frames = nr_maptrack_frames(lgt);
> -        if ( nr_frames >= max_maptrack_frames )
> -            break;
> -
> -        new_mt = alloc_xenheap_page();
> -        if ( !new_mt )
> -            break;
> +    handle = __get_maptrack_handle(lgt, v);
> +    if ( likely(handle != -1) )
> +        return handle;
>
> -        clear_page(new_mt);
> +    nr_frames = nr_vcpu_maptrack_frames(v);
> +    if ( nr_frames >= max_maptrack_frames )
> +        return -1;
>
> -        new_mt_limit = lgt->maptrack_limit + MAPTRACK_PER_PAGE;
> +    new_mt = alloc_xenheap_page();
> +    if ( !new_mt )
> +        return -1;
>
> -        for ( i = 1; i < MAPTRACK_PER_PAGE; i++ )
> -            new_mt[i - 1].ref = lgt->maptrack_limit + i;
> -        new_mt[i - 1].ref = lgt->maptrack_head;
> -        lgt->maptrack_head = lgt->maptrack_limit;
> +    clear_page(new_mt);
>
> -        lgt->maptrack[nr_frames] = new_mt;
> -        smp_wmb();
> -        lgt->maptrack_limit = new_mt_limit;
> +    new_mt_limit = v->maptrack_limit + MAPTRACK_PER_PAGE;
>
> -        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
> -                 nr_frames + 1);
> +    for ( i = 1; i < MAPTRACK_PER_PAGE; i++ ){
> +        new_mt[i - 1].ref = (lgt->maptrack_pages * MAPTRACK_PER_PAGE) + i;
> +        new_mt[i - 1].vcpu = v->vcpu_id;
> +    }
> +    /* Set last entry vcpu and ref */
> +    new_mt[i - 1].ref = v->maptrack_head;
> +    new_mt[i - 1].vcpu = v->vcpu_id;
> +    v->maptrack_head = lgt->maptrack_pages * MAPTRACK_PER_PAGE;
> +    if (v->maptrack_tail == MAPTRACK_TAIL)
> +    {
> +        v->maptrack_tail = (lgt->maptrack_pages * MAPTRACK_PER_PAGE)
> +                           + MAPTRACK_PER_PAGE - 1;
> +        new_mt[i - 1].ref = MAPTRACK_TAIL;
>     }
>
> +    spin_lock(&lgt->maptrack_lock);
> +    lgt->maptrack[lgt->maptrack_pages++] = new_mt;
>     spin_unlock(&lgt->maptrack_lock);
The uses of ->maptrack_pages ahead of taking the lock can race
with updates inside the lock. And with locking elsewhere dropped
by the previous patch it looks like you can't update ->maptrack[]
like you do (you'd need a barrier between the pointer store and
the increment, and with that I think the lock would become
superfluous if it was needed only for this update).
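To illustrate the ordering I mean (a minimal sketch only, re-using the
names from the patch; whether the lock can then go away depends on it not
guarding anything beyond this update):

    spin_lock(&lgt->maptrack_lock);
    lgt->maptrack[lgt->maptrack_pages] = new_mt;
    smp_wmb();                 /* make the pointer store visible ... */
    lgt->maptrack_pages++;     /* ... before publishing the new page count */
    spin_unlock(&lgt->maptrack_lock);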
Also note the coding style issues in the changes above (and also
in code further down).
> -    return handle;
> +    v->maptrack_limit = new_mt_limit;
> +
> +    return __get_maptrack_handle(lgt, v);
With the lock dropped, nothing guarantees this to succeed, which it
ought to unless the table size reached its allowed maximum.
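I.e. the tail of the function ought to keep retrying until either a handle
is found or the limit is reached, along these lines (a sketch only;
grow_maptrack() stands in for the allocation code above and is not an
existing helper):

    while ( (handle = __get_maptrack_handle(lgt, v)) == -1 )
    {
        if ( nr_vcpu_maptrack_frames(v) >= max_maptrack_frames )
            return -1;
        /* grow_maptrack() is a placeholder for the frame allocation above. */
        if ( grow_maptrack(lgt, v) )
            return -1;
    }
    return handle;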
> @@ -1422,6 +1448,17 @@ gnttab_setup_table(
>     }
>     gt->maptrack_pages = 0;
>
> +    /* Tracking of mapped foreign frames table */
> +    if ( (gt->maptrack = xzalloc_array(struct grant_mapping *,
> +                                       max_maptrack_frames * d->max_vcpus))
> +         == NULL )
> +        goto out2;
See the comments on the similar misplaced hunk in the previous
patch.
> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -60,6 +60,8 @@ struct grant_mapping {
>     u32 ref; /* grant ref */
>     u16 flags; /* 0-4: GNTMAP_* ; 5-15: unused */
>     domid_t domid; /* granting domain */
> +    u32 vcpu; /* vcpu which created the grant mapping */
> +    u16 pad[2];
> };
What is this pad[] good for?
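For reference, my reading of the layout (domid_t being 16 bits wide):

    struct grant_mapping {
        u32     ref;      /* 4 bytes */
        u16     flags;    /* 2 bytes */
        domid_t domid;    /* 2 bytes */
        u32     vcpu;     /* 4 bytes, 12 so far, already 32-bit aligned */
        u16     pad[2];   /* 4 further bytes with no apparent consumer */
    };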
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel