Re: [Xen-devel] [PATCH 2/3] grant-table: refactor grant copy to reduce duplicate code
Hi,

At 18:19 +0000 on 20 Jan (1421774388), David Vrabel wrote:
> +static s16 gnttab_copy_lock_domains(const struct gnttab_copy *op,
> +                                    struct gnttab_copy_buf *src,
> +                                    struct gnttab_copy_buf *dest)
> +{
> +    s16 rc;
> +
> +    rc = gnttab_copy_lock_domain(op, op->source.domid, GNTCOPY_source_gref,
> +                                 src);
> +    if ( rc < 0 )
> +        return rc;
> +    rc = gnttab_copy_lock_domain(op, op->dest.domid, GNTCOPY_dest_gref,
> +                                 dest);
> +    if ( rc < 0 )
> +        return rc;
> 
> -    if ( src_is_gref )
> +    rc = xsm_grant_copy(XSM_HOOK, src->domain, dest->domain);
> +    if ( rc < 0 )
>      {
[...]
> +        gnttab_copy_unlock_domains(src, dest);
> +        return GNTST_permission_denied;

This error path unwinds the locks we've already taken, whereas the one
above (failing to lock dest) doesn't.  AFAICS this is OK because the
only caller always unconditionally calls unlock, but they ought to be
consistent.

> +static s16 gnttab_copy_claim_buf(
> +    const struct gnttab_copy *op,
> +    const struct gnttab_copy_ptr *ptr,
> +    struct gnttab_copy_buf *buf,
> +    unsigned int gref_flag)
> +{
> +    s16 rc;
> +
> +    buf->read_only = gref_flag == GNTCOPY_source_gref;
> 
> -    if ( dest_is_gref )
> +    if ( op->flags & gref_flag )
>      {
[...]
> +        rc = __acquire_grant_for_copy(buf->domain, ptr->u.ref,
> +                                      current->domain->domain_id,
> +                                      buf->read_only,
> +                                      &buf->frame, &buf->page,
> +                                      &buf->ptr.offset, &buf->len, 1);
>          if ( rc != GNTST_okay )
[...]
> +            goto out;
> +        buf->ptr.u.ref = ptr->u.ref;
> +        buf->have_grant = 1;
> +    }
> +    else
> +    {
[...]
> +        rc = __get_paged_frame(ptr->u.gmfn, &buf->frame, &buf->page, 1,
> +                               buf->domain);

AFAICS this should pass buf->read_only as arg 4, so as not to break
copy-to-MFN.

Apart from that, and the mark-dirty that Jan spotted, this looks good
to me.

Cheers,

Tim.
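P.S. To make the first point concrete, one way to get the error paths
consistent is to drop the unlock from the XSM failure path and leave
all unwinding to the caller.  An untested sketch, assuming the only
caller keeps calling gnttab_copy_unlock_domains() unconditionally as
it does in this patch:

static s16 gnttab_copy_lock_domains(const struct gnttab_copy *op,
                                    struct gnttab_copy_buf *src,
                                    struct gnttab_copy_buf *dest)
{
    s16 rc;

    rc = gnttab_copy_lock_domain(op, op->source.domid,
                                 GNTCOPY_source_gref, src);
    if ( rc < 0 )
        return rc;

    rc = gnttab_copy_lock_domain(op, op->dest.domid,
                                 GNTCOPY_dest_gref, dest);
    if ( rc < 0 )
        return rc;

    /*
     * No unlock here: the only caller unlocks unconditionally, so
     * every error path in this function can leave the unwinding to
     * it, just as the two lock failures above already do.
     */
    if ( xsm_grant_copy(XSM_HOOK, src->domain, dest->domain) < 0 )
        return GNTST_permission_denied;

    return GNTST_okay;
}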
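And the copy-to-MFN fix is just the fourth argument -- going by the
gref path above, which already feeds buf->read_only into
__acquire_grant_for_copy():

        rc = __get_paged_frame(ptr->u.gmfn, &buf->frame, &buf->page,
                               buf->read_only /* was hard-coded 1 */,
                               buf->domain);

That way a copy to a raw-MFN destination still takes a writable
reference on the frame.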