Re: [PATCH RFC 07/10] domain: map/unmap GADDR based shared guest areas

On 18/01/2023 9:55 am, Jan Beulich wrote:
> On 17.01.2023 23:04, Andrew Cooper wrote:
>> On 19/10/2022 8:43 am, Jan Beulich wrote:
>>> The registration by virtual/linear address has downsides: At least on
>>> x86 the access is expensive for HVM/PVH domains. Furthermore for 64-bit
>>> PV domains the areas are inaccessible (and hence cannot be updated by
>>> Xen) when in guest-user mode.
>>
>> They're also inaccessible in HVM guests (x86 and ARM) when Meltdown
>> mitigations are in place.
>
> I've added this explicitly, but ...
>
>> And let's not get started on the multitude of layering violations that
>> is guest_memory_policy() for nested virt. In fact, prohibiting any form
>> of map-by-va is a prerequisite to any rational attempt to make nested
>> virt work.
>>
>> (In fact, that infrastructure needs purging before any other
>> architecture picks up stubs too.)
>>
>> They're also inaccessible in general because no architecture has
>> hypervisor privilege in a normal user/supervisor split, and you don't
>> know whether the mapping is over a supervisor or user mapping, and
>> settings like SMAP/PAN can cause the pagewalk to fail even when the
>> mapping is in place.
>
> ... I'm now merely saying that there are yet more reasons, rather than
> trying to enumerate them all.

That's fine. I just wanted to point out that there are far more reasons
than were given the first time around.

>>> In preparation of the introduction of new vCPU operations allowing to
>>> register the respective areas (one of the two is x86-specific) by
>>> guest-physical address, flesh out the map/unmap functions.
>>>
>>> Noteworthy differences from map_vcpu_info():
>>> - areas can be registered more than once (and de-registered),
>>
>> When register by GFN is available, there is never a good reason to
>> register the same area twice.
>
> Why not? Why shouldn't different entities be permitted to register their
> areas, one after the other? This at the very least requires a way to
> de-register.

Because it's useless and extra complexity. From the point of view of any
guest, it's an MMIO(ish) window that Xen happens to update the content of.
You don't get systems where you can ask hardware for e.g. "another copy
of the HPET at mfn $foo please".

>> The guest maps one MMIO-like region, and then constructs all the
>> regular virtual addresses mapping it (or not) that it wants.
>>
>> This API is new, so we can enforce sane behaviour from the outset. In
>> particular, it will help with ...
>>
>>> - remote vCPU-s are paused rather than checked for being down (which in
>>> principle can change right after the check),
>>> - the domain lock is taken for a much smaller region.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> ---
>>> RFC: By using global domain page mappings the demand on the underlying
>>> VA range may increase significantly. I did consider to use per-
>>> domain mappings instead, but they exist for x86 only. Of course we
>>> could have arch_{,un}map_guest_area() aliasing global domain page
>>> mapping functions on Arm and using per-domain mappings on x86. Yet
>>> then again map_vcpu_info() doesn't do so either (albeit that's
>>> likely to be converted subsequently to use map_vcpu_area() anyway).
>>
>> ... this by providing a bound on the amount of vmap() space that can
>> be consumed.
>
> I'm afraid I don't understand. When re-registering a different area, the
> earlier one will be unmapped. The consumption of vmap space cannot grow
> (or else we'd have a resource leak and hence an XSA).

In which case you mean "can be re-registered elsewhere".
More specifically, the area can be moved, and isn't a singleton operation
like map_vcpu_info() was. The wording as presented firmly suggests the
presence of an XSA.

>>> RFC: In map_guest_area() I'm not checking the P2M type, instead - just
>>> like map_vcpu_info() - solely relying on the type ref acquisition.
>>> Checking for p2m_ram_rw alone would be wrong, as at least
>>> p2m_ram_logdirty ought to also be okay to use here (and in similar
>>> cases, e.g. in Argo's find_ring_mfn()). p2m_is_pageable() could be
>>> used here (like altp2m_vcpu_enable_ve() does) as well as in
>>> map_vcpu_info(), yet then again the P2M type is stale by the time
>>> it is being looked at anyway without the P2M lock held.
>>
>> Again, another error caused by Xen not knowing the guest physical
>> address layout. These mappings should be restricted to just RAM regions
>> and I think we want to enforce that right from the outset.
>
> Meaning what exactly in terms of action for me to take? As said, checking
> the P2M type is pointless. So without you being more explicit, all I can
> take your reply for is merely a comment, with no action on my part (not
> even to remove this RFC remark).

There will come a point where it will need to be prohibited to issue this
against something which isn't p2m_type_ram. If we had a sane idea of the
guest physmap, I'd go as far as saying E820_RAM, but that's clearly not
feasible yet.

Even now, absolutely nothing good can possibly come of e.g. trying to
overlay it on the grant table, or a grant mapping. ram || logdirty ought
to exclude most of the cases where we care about the guest (not) putting
the mapping there.

~Andrew
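
[Editorial note: two illustrative sketches follow; they are not part of the
original mail or of the patch series.]

First, the exchange above about vmap() consumption: the argument is that
re-registering an area at a new GFN replaces, rather than accumulates, the
global mapping, so usage stays bounded at one page per area. A minimal
sketch of that pattern, under the assumption of a made-up struct guest_area
and helper name; unmap_domain_page_global() and put_page_and_type() are
existing Xen interfaces, though exact usage differs by version:

#include <xen/domain_page.h>
#include <xen/mm.h>

/* Hypothetical per-area state: one backing page and one global mapping. */
struct guest_area {
    struct page_info *pg;   /* page backing the currently registered area */
    void *map;              /* global mapping of that page, or NULL */
};

static void replace_guest_area_mapping(struct guest_area *area,
                                       struct page_info *new_pg,
                                       void *new_map)
{
    struct page_info *old_pg = area->pg;
    void *old_map = area->map;

    /* Install the new mapping first ... */
    area->pg = new_pg;
    area->map = new_map;

    /* ... then tear down the old one, so consumption cannot grow. */
    if ( old_map )
        unmap_domain_page_global(old_map);
    if ( old_pg )
        put_page_and_type(old_pg);
}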
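
Second, the "ram || logdirty" restriction suggested at the end of the
thread: the sketch below is an assumption about what such a check could
look like, not the committed code. get_page_from_gfn(), P2M_ALLOC,
p2m_ram_rw and p2m_ram_logdirty are existing Xen interfaces/types;
get_area_page_checked() is a made-up name.

#include <xen/mm.h>
#include <asm/p2m.h>

static struct page_info *get_area_page_checked(struct domain *d, gfn_t gfn)
{
    p2m_type_t p2mt;
    struct page_info *pg = get_page_from_gfn(d, gfn_x(gfn), &p2mt, P2M_ALLOC);

    if ( !pg )
        return NULL;

    /* Plain RAM only: reject grant mappings, MMIO, foreign pages, etc. */
    if ( p2mt != p2m_ram_rw && p2mt != p2m_ram_logdirty )
    {
        put_page(pg);
        return NULL;
    }

    return pg;
}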