Re: [PATCH] VMX: use a single, global APIC access page
On 11.02.2021 09:45, Roger Pau Monné wrote:
> On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
>> I did further consider not allocating any real page at all, but just
>> using the address of some unpopulated space (which would require
>> announcing this page as reserved to Dom0, so it wouldn't put any PCI
>> MMIO BARs there). But I thought this would be too controversial,
>> because of the possible risks associated with this.
> 
> No, Xen is not capable of allocating a suitable unpopulated page IMO,
> so let's not go down that route. Wasting one RAM page seems perfectly
> fine to me.

Why would Xen not be able to, in principle? It may be difficult, but
there may also be pages we could declare we know we can use for this
purpose.

>> @@ -3216,53 +3211,28 @@ static int vmx_alloc_vlapic_mapping(stru
>>      if ( !cpu_has_vmx_virtualize_apic_accesses )
>>          return 0;
>>
>> -    pg = alloc_domheap_page(d, MEMF_no_refcount);
>> +    pg = alloc_domheap_page(NULL, 0);
>>      if ( !pg )
>>          return -ENOMEM;
>>
>> -    if ( !get_page_and_type(pg, d, PGT_writable_page) )
>> -    {
>> -        /*
>> -         * The domain can't possibly know about this page yet, so failure
>> -         * here is a clear indication of something fishy going on.
>> -         */
>> -        domain_crash(d);
>> -        return -ENODATA;
>> -    }
>> -
>>      mfn = page_to_mfn(pg);
>>      clear_domain_page(mfn);
>> -    d->arch.hvm.vmx.apic_access_mfn = mfn;
>> +    apic_access_mfn = mfn;
>>
>> -    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
>> -                              PAGE_ORDER_4K);
>> -}
>> -
>> -static void vmx_free_vlapic_mapping(struct domain *d)
>> -{
>> -    mfn_t mfn = d->arch.hvm.vmx.apic_access_mfn;
>> -
>> -    d->arch.hvm.vmx.apic_access_mfn = _mfn(0);
>> -    if ( !mfn_eq(mfn, _mfn(0)) )
>> -    {
>> -        struct page_info *pg = mfn_to_page(mfn);
>> -
>> -        put_page_alloc_ref(pg);
>> -        put_page_and_type(pg);
>> -    }
>> +    return 0;
>>  }
>>
>>  static void vmx_install_vlapic_mapping(struct vcpu *v)
>>  {
>>      paddr_t virt_page_ma, apic_page_ma;
>>
>> -    if ( mfn_eq(v->domain->arch.hvm.vmx.apic_access_mfn, _mfn(0)) )
>> +    if ( mfn_eq(apic_access_mfn, _mfn(0)) )
> 
> You should check if the domain has a vlapic I think, since now
> apic_access_mfn is global and will be set regardless of whether the
> domain has vlapic enabled or not.
> 
> Previously apic_access_mfn was only allocated if the domain had vlapic
> enabled.

Oh, indeed - thanks for spotting. And also in domain_creation_finished().

>>          return;
>>
>>      ASSERT(cpu_has_vmx_virtualize_apic_accesses);
>>
>>      virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
>> -    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
>> +    apic_page_ma = mfn_to_maddr(apic_access_mfn);
>>
>>      vmx_vmcs_enter(v);
>>      __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
> 
> I would consider setting up the vmcs and adding the page to the p2m in
> the same function, and likely call it from vlapic_init. We could have
> a domain_setup_apic in hvm_function_table that takes care of all this.

Well, I'd prefer to do this just once per domain without needing to
special case this on e.g. vCPU 0.

>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
>>          flags = IOMMUF_readable;
>>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
>>              flags |= IOMMUF_writable;
>> +        /* VMX'es APIC access page is global and hence has no owner. */
>> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
>> +            flags = 0;
> 
> Is it fine to have this page accessible to devices if the page tables
> are shared between the CPU and the IOMMU?

No, it's not, but what do you do? As said elsewhere, devices gaining
more access than is helpful is the price we pay for being able to share
page tables. But ...

> Is it possible for devices to write to it?

... thinking about it - they would be able to access it only when
interrupt remapping is off. Otherwise the entire range 0xFEExxxxx gets
treated differently altogether by the IOMMU, and is not subject to DMA
remapping. IOW it shouldn't even matter what flags we put in there (and
I'd be far less concerned about the no-intremap case). What this change
matters for then is only to avoid an unnecessary call to iommu_map()
(and, for small enough domains, extra intermediate page tables to be
allocated). But let me really split this off of this patch.

Jan
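
[Editorial note: as a concrete illustration of the check Roger asks for
above (and which Jan agrees to add), the per-vCPU hook might gain a guard
along the following lines. This is only a minimal sketch; has_vlapic() is
assumed here to be the right "domain has an emulated LAPIC" predicate and
does not appear in the quoted hunks.]

    static void vmx_install_vlapic_mapping(struct vcpu *v)
    {
        paddr_t virt_page_ma, apic_page_ma;

        /*
         * Sketch: skip domains without an emulated LAPIC, as well as setups
         * where no global access page was allocated (has_vlapic() assumed).
         */
        if ( !has_vlapic(v->domain) || mfn_eq(apic_access_mfn, _mfn(0)) )
            return;

        ASSERT(cpu_has_vmx_virtualize_apic_accesses);

        virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
        apic_page_ma = mfn_to_maddr(apic_access_mfn);

        vmx_vmcs_enter(v);
        __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
        /* ... remaining VMCS writes as in the existing function ... */
        vmx_vmcs_exit(v);
    }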
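
[Editorial note: similarly, the "also in domain_creation_finished()" remark
could translate into inserting the global page into the guest p2m once per
domain, roughly as sketched below. Again only a sketch: the same assumed
has_vlapic() guard is used, and the p2m insertion re-uses the
set_mmio_p2m_entry() call visible in the removed hunk. Doing this here
rather than from a per-vCPU path matches Jan's stated preference to handle
it just once per domain.]

    static void domain_creation_finished(struct domain *d)
    {
        /*
         * Sketch: nothing to do for domains without an emulated LAPIC, or
         * when no APIC access page could be set up (has_vlapic() assumed).
         */
        if ( !has_vlapic(d) || mfn_eq(apic_access_mfn, _mfn(0)) )
            return;

        /* Map the single, global page at the default APIC base GFN. */
        if ( set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE),
                                apic_access_mfn, PAGE_ORDER_4K) )
            domain_crash(d);
    }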