
Re: [Xen-devel] [PATCH 00/11] Alternate p2m: support multiple copies of host p2m

On Mon, Jan 12, 2015 at 7:31 PM, Ed White <edmund.h.white@xxxxxxxxx> wrote:
> On 01/12/2015 10:00 AM, Ian Jackson wrote:
>> Ed White writes ("Re: [PATCH 00/11] Alternate p2m: support multiple copies 
>> of host p2m"):
>>> The hypercalls are all there. My testing is all done in a Windows
>>> domU with the tests running inside that domain, so I couldn't use
>>> tools support even if I had it.
>> To support this code in-tree, I think we will need Open Source code
>> for exercising it, surely ?
> I'm hoping that, as Andrew says, there will be people interested
> in using these capabilities, and that some of them will be prepared
> to help fill in the gaps. That's why I wanted to send the series to
> the list very early in the 4.6 development cycle.
> If that doesn't turn out to be the case, I'll see if I can find some
> help internally, but I have neither the bandwidth nor the expertise
> to do everything myself.
> Ed

Hi Ed,
We are certainly very interested in this feature, so thanks for posting
this series!

I also see a use case for multiple copies of the host p2m in enabling
better-performing monitoring with the existing memaccess API.
Currently the problem is that when a memaccess violation occurs on one
vCPU, the memaccess settings need to be cleared and then re-applied
after the operation has completed (usually done via singlestepping).
With multiple vCPUs there is a potential race condition here, unless
all other vCPUs are paused while the memaccess settings are cleared.
With multiple copies of the host p2m, we could simply swap in a table
with the permissions cleared for the violating vCPU, without affecting
any of the other vCPUs. This could be exercised by extending the
xen-access test tool!
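To make the idea concrete, here is a minimal, purely illustrative
simulation (not real Xen or libxenctrl code; all names are made up for
the sketch): each vCPU points at a p2m view, and on a violation only
the faulting vCPU is switched to a permissive view, single-stepped,
and switched back, so no other vCPU's restrictions are ever dropped.

```python
# Illustrative sketch of the per-vCPU p2m-view swap described above.
# None of these names correspond to actual Xen interfaces.

RESTRICTED = "r"    # e.g. read-only: writes trap to the monitor
PERMISSIVE = "rw"   # full access: no traps

class P2MView:
    def __init__(self, default_access):
        self.default_access = default_access

class VCPU:
    def __init__(self, view):
        # every vCPU starts out on the restricted host p2m view
        self.view = view

def handle_violation(vcpu, host_view, permissive_view):
    """Swap only the violating vCPU to the permissive view,
    single-step it over the faulting instruction, then swap it
    back. All other vCPUs keep running on the restricted view."""
    vcpu.view = permissive_view
    # ... single-step the faulting instruction here ...
    vcpu.view = host_view

host = P2MView(RESTRICTED)
permissive = P2MView(PERMISSIVE)
vcpus = [VCPU(host), VCPU(host)]

# vCPU 0 faults; vCPU 1 is never paused and never loses its settings
handle_violation(vcpus[0], host, permissive)
print(vcpus[0].view is host, vcpus[1].view is host)  # True True
```

The point of the sketch is only the invariant: the restricted view is
never globally cleared, so there is no window in which another vCPU
can access the page unmonitored.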

Is this something you think would be within the scope of the
envisioned use case for this series?


Xen-devel mailing list