
[Xen-devel] [PATCH v3 0/5] fix preemption handling for XENMEM_add_to_physmap{,_range}



The hypervisor isn't supposed to use the input structures for
storing continuation information - only fields explicitly used as
hypercall outputs should ever be updated.

Obviously this implies an ABI change, but since the previous
behavior was never intended to work that way, I don't think we
should stick to it.

There's one caveat though - with the previous model, the caller
could - upon failure - use the updated structure to find out how
much progress was made. However, that wasn't intended afaict,
which is largely supported by the fact that this information
depends on hypervisor internals (i.e. the caller would have to
know the order of request processing and the meaning of the
respective size fields, which don't simply say "this much was
processed").

Consequently, as a follow-up we may want to consider making this
progress indication on error explicit (and straightforward).
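
For illustration only, here is a rough sketch of the intended
model - not the actual patch code, and the per-entry helper is
purely hypothetical. Progress across a preemption is kept in a
local index (to be re-encoded into the memory_op command, e.g.
via MEMOP_EXTENT_SHIFT, when creating the continuation), while
the guest-supplied structure is only read; the per-entry error
array remains the sole output that gets written.

/* Purely illustrative sketch, not the actual patches.  The helper
 * map_one_entry() stands in for the real per-entry mapping logic
 * (signature simplified). */
static long add_to_physmap_range_sketch(
    struct domain *d,
    const struct xen_add_to_physmap_range *xatpr, /* local copy, never written back */
    unsigned long start)                          /* entries already processed */
{
    long rc = 0;

    while ( start < xatpr->size )
    {
        /* Map one (idx, gpfn) pair; only the per-entry error slot
         * errs[start] is an explicit hypercall output. */
        rc = map_one_entry(d, xatpr, start); /* hypothetical helper */
        if ( rc )
            break;

        if ( ++start < xatpr->size && hypercall_preempt_check() )
            /* Positive return: preempted, resume at index 'start'.
             * The dispatcher would fold this into the continuation,
             * e.g. op | (start << MEMOP_EXTENT_SHIFT), rather than
             * modifying the caller's structure. */
            return start;
    }

    return rc;
}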

1: move XENMEM_add_to_physmap handling framework to common code
2: fix XENMEM_add_to_physmap preemption handling
3: move XENMEM_add_to_physmap_range handling framework to common code
4: fix XENMEM_add_to_physmap_range preemption handling
5: rename XENMEM_add_to_physmap_{range => batch}

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Tim Deegan <tim@xxxxxxx>
Acked-by: Keir Fraser <keir@xxxxxxx>
---
v3: Retain the restriction that XENMAPSPACE_gmfn_foreign is not
    possible via XENMEM_add_to_physmap (patch 1). Apart from
    patch 5 being new, the other patches are unchanged.
v2: Apart from the removal of a bogus ASSERT() as requested
    by Tim, only a couple of minor/cosmetic changes (see the
    individual patches), hence I'm retaining the tags that were
    already given for v1.


