Re: [Xen-devel] [PATCH 07/11] x86/altp2m: introduce p2m_ram_rw_ve type.
On 01/16/2015 09:52 AM, Tim Deegan wrote:
> At 12:38 -0800 on 15 Jan (1421321902), Ed White wrote:
>> On 01/15/2015 09:03 AM, Tim Deegan wrote:
>>> At 13:26 -0800 on 09 Jan (1420806397), Ed White wrote:
>>>> This is treated exactly like p2m_ram_rw, except that suppress_ve is not
>>>> set in the EPTE.
>>>
>>> I don't think this is going to work -- you probably want to support
>>> p2m_ram_ro at least, and maybe other types, but duplicating each of
>>> them as a 'type foo with #VE' doesn't seem right.
>>>
>>> Since the default is to set the ignore-#ve flag everywhere, how about
>>> having an operation to enable #ve for a frame that just clears that
>>> bit, and then having all other updates to altp2m entries preserve it?
>>
>> I hear you, but #VE is only even relevant for the in-domain agent
>> model, and as the only current user of that model we not only don't
>> want #VE to work on other page types, we specifically want it to be
>> prohibited.
>
> I see. I think it would be very useful if you could add some
> documentation of the new feature, covering this sort of thing, as well
> as the exact semantics of the hypercalls.
>
>> Can we do it this way, and then change it later if required?
>
> No thank you. It shouldn't be hard to do it the clean way from the
> start.

The problem with doing it the clean way is that I have to use EPTE
bit 63 even on hardware that doesn't support it. That's not a problem
hardware-wise, because, at least for Intel, bit 63 is don't-care for
non-#VE hardware. It does mean Xen can't use it for anything else,
though.

If you look at the code in the current patch series, for non-#VE
hardware I don't use that bit; the nested page fault handler decides
whether to emulate #VE based on the p2m_type value.

Ed
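[Editor's note: the following is a minimal, hypothetical C sketch of the mechanism under discussion, not code from the patch series. The field and type names (suppress_ve, p2m_ram_rw_ve, should_emulate_ve) are assumptions that merely mirror the names used in the thread.]

    #include <stdbool.h>
    #include <stdint.h>

    /* EPT entry: bit 63 is "suppress #VE" on hardware with #VE support. */
    typedef union {
        uint64_t epte;
        struct {
            uint64_t r           : 1;   /* read access                  */
            uint64_t w           : 1;   /* write access                 */
            uint64_t x           : 1;   /* execute access               */
            uint64_t unused      : 60;  /* other fields elided          */
            uint64_t suppress_ve : 1;   /* bit 63: suppress #VE         */
        };
    } ept_entry_t;

    /* Hypothetical p2m types, mirroring the ones named in the thread. */
    typedef enum {
        p2m_ram_rw,
        p2m_ram_rw_ve,   /* like p2m_ram_rw, but #VE is delivered */
    } p2m_type_t;

    /* Default: every entry suppresses #VE; only the _ve type clears bit 63. */
    static void ept_set_type(ept_entry_t *e, p2m_type_t t)
    {
        e->suppress_ve = (t == p2m_ram_rw_ve) ? 0 : 1;
    }

    /*
     * On hardware without #VE support, bit 63 is don't-care, so a nested
     * page fault handler would decide whether to emulate #VE purely from
     * the p2m type, as described in the mail above.
     */
    static bool should_emulate_ve(p2m_type_t t, bool hw_has_ve)
    {
        return !hw_has_ve && t == p2m_ram_rw_ve;
    }

This illustrates the trade-off Ed describes: tying #VE delivery to a dedicated p2m type lets non-#VE hardware make the emulation decision without consuming EPTE bit 63, whereas the "clean" approach of clearing the bit per frame reserves bit 63 on all hardware.]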