Re: [Xen-devel] [PATCH] x86/p2m: drop p2m_access_t parameter from set_mmio_p2m_entry()

On 07.02.2020 10:52, Roger Pau Monné wrote:
> On Fri, Feb 07, 2020 at 09:08:15AM +0100, Jan Beulich wrote:
>> On 06.02.2020 16:20, Jan Beulich wrote:
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -3037,9 +3037,8 @@ static int vmx_alloc_vlapic_mapping(stru
>>>      share_xen_page_with_guest(pg, d, SHARE_rw);
>>>      d->arch.hvm.vmx.apic_access_mfn = mfn;
>>> -    return set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
>>> -                              PAGE_ORDER_4K,
>>> -                              p2m_get_hostp2m(d)->default_access);
>>> +    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
>>> +                              PAGE_ORDER_4K);
>>>  }
>> Upon 2nd thought - does this really want to use default access?
>> Execute permission for this page looks a little suspicious.
>> Isn't it the case that this page doesn't (normally?) get
>> accessed at all, and instead its address serves as an indicator
>> to the CPU? (I even vaguely recall it having been considered to
>> consolidate this, to e.g. a single page per domain.) In which
>> case even p2m_access_n might be good enough?
> Hm, I think p2m_access_n is not enough, as that would trigger an EPT
> fault, which takes precedence over the APIC-access VM exit (see SDM
> section 29.4.1, "Priority of APIC-Access VM Exits").

Ah, yes, reading that text I agree. Having just a single such page
per domain would still seem possible, though. Although, if we meant
to support a guest moving the APIC base address, then we couldn't,

> I think setting it to p2m_access_rw should be enough, and we would get
> EPT faults when trying to execute from the APIC page.

Then the other question is whether there's any use for introspection
to further limit permissions on this (kind of fake) page. Tamas?
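For illustration only, the tail of vmx_alloc_vlapic_mapping() with an
explicit p2m_access_rw (instead of the host p2m's default_access) might
look like the sketch below. This keeps the pre-patch signature of
set_mmio_p2m_entry() and is an untested sketch, not a proposed change:

```c
    /* Sketch: pass p2m_access_rw explicitly rather than
     * p2m_get_hostp2m(d)->default_access, so that execute attempts on
     * the APIC-access page fault instead of being silently permitted.
     * Untested illustration using the pre-patch signature. */
    share_xen_page_with_guest(pg, d, SHARE_rw);
    d->arch.hvm.vmx.apic_access_mfn = mfn;

    return set_mmio_p2m_entry(d, gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE), mfn,
                              PAGE_ORDER_4K, p2m_access_rw);
```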


Xen-devel mailing list