
Re: [Xen-devel] [PATCH] include/public/memory.h: remove the XENMEM_rsrc_acq_caller_owned flag



On 19.07.2019 14:25, Paul Durrant wrote:
When commit 3f8f1228 "x86/mm: add HYPERVISOR_memory_op to acquire guest
resources" introduced the concept of directly mapping some guest resources,
it was envisaged that the memory for some resources associated with a guest
may not actually be assigned to that guest, specifically the IOREQ server
resource introduced in commit 6e387461 "x86/hvm/ioreq: add a new mappable
resource type...". Such resources were dubbed "caller owned", and acquiring
them resulted in the XENMEM_rsrc_acq_caller_owned flag being passed back to
the caller of the memory op.
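
(For illustration, a caller of the memory op could have consumed this flag
roughly as in the sketch below. This is not taken from the patch: the
function name, header paths and error handling are assumptions, written
against the pre-patch public/memory.h that still had the 'flags' field,
using Linux-style Xen interface headers and hypercall wrappers.)

/*
 * Illustrative sketch only: how a caller could have observed
 * XENMEM_rsrc_acq_caller_owned before its removal.  Assumes the
 * pre-patch public/memory.h and Linux-style interface headers;
 * the function name and error handling are made up.
 */
#include <xen/interface/xen.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

static int acquire_ioreq_server_pages(domid_t domid, uint32_t ioservid,
                                      unsigned int nr_frames,
                                      xen_pfn_t *gfn_list)
{
    struct xen_mem_acquire_resource xmar = {
        .domid     = domid,
        .type      = XENMEM_resource_ioreq_server,
        .id        = ioservid,
        .nr_frames = nr_frames,
        /* .flags was an OUT field and had to be zero on entry */
    };
    int rc;

    set_xen_guest_handle(xmar.frame_list, gfn_list);

    rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xmar);
    if (rc)
        return rc;

    if (xmar.flags & XENMEM_rsrc_acq_caller_owned) {
        /*
         * The backing pages were assigned to the calling domain rather
         * than to the target domain, so the caller was responsible for
         * their lifetime.  No resource type sets this any more, which
         * is what allows the flag to be dropped.
         */
    }

    return 0;
}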

Unfortunately the implementation led to XSA-276, which was mitigated
by commit f6b6ae78 "x86/hvm/ioreq: fix page referencing" and then a related
memory accounting problem was worked around by commit e862e6ce
"x86/hvm/ioreq: use ref-counted target-assigned shared pages". This latter
commit removed the only instance of a "caller owned" resource, but the
flag was left in the header and checked in one place in the core code.
This patch removes that now redundant check and removes the definition of
XENMEM_rsrc_acq_caller_owned from the public header. Also, since this was
the only flag defined for the XENMEM_acquire_resource memory op, it removes
the 'flags' field of struct xen_mem_acquire_resource and replaces it with
an equivalently sized 'pad' field.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
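
For reference, the resulting layout of struct xen_mem_acquire_resource is
sketched below (abridged, with comments trimmed; the canonical definition
is in xen/include/public/memory.h):

struct xen_mem_acquire_resource {
    /* IN - the domain whose resource is to be mapped */
    domid_t domid;
    /* IN - the type of resource */
    uint16_t type;

#define XENMEM_resource_ioreq_server 0
#define XENMEM_resource_grant_table  1

    /* IN - a type-specific resource identifier */
    uint32_t id;
    /*
     * IN/OUT - the number of frames to be mapped; if 0 (and frame_list
     *          is NULL) it is set on return to the maximum supported
     *          by the implementation.
     */
    uint32_t nr_frames;
    /* Replaces the old OUT-only 'flags' field, keeping the layout. */
    uint32_t pad;
    /* IN - the index of the initial frame to be mapped */
    uint64_t frame;
    /*
     * IN/OUT - GFNs provided by HVM callers on entry; MFNs filled in
     *          on return for PV callers.
     */
    XEN_GUEST_HANDLE(xen_pfn_t) frame_list;
};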

I notice this has now been committed, but I didn't see any further
discussion; in particular it is unclear to me at this point whether
Bitdefender have found a different solution for their change using
this flag.

Jan
