
Re: [Xen-devel] [PATCH 2/2] x86/pagewalk: Fix pagewalk's handling of instruction fetches



>>> On 01.06.17 at 12:19, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 29/05/17 10:15, Jan Beulich wrote:
>>>>> On 29.05.17 at 11:03, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 29/05/2017 09:58, Jan Beulich wrote:
>>>>>>> On 26.05.17 at 19:03, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> --- a/xen/arch/x86/mm/guest_walk.c
>>>>> +++ b/xen/arch/x86/mm/guest_walk.c
>>>>> @@ -114,22 +114,18 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>>>>>      ASSERT(!(walk & PFEC_implicit) ||
>>>>>             !(walk & (PFEC_insn_fetch | PFEC_user_mode)));
>>>>>  
>>>>> -    /*
>>>>> -     * PFEC_insn_fetch is only used as an input to pagetable walking if NX or
>>>>> -     * SMEP are enabled.  Otherwise, instruction fetches are indistinguishable
>>>>> -     * from data reads.
>>>>> -     *
>>>>> -     * This property can be demonstrated on real hardware by having NX and
>>>>> -     * SMEP inactive, but SMAP active, and observing that EFLAGS.AC determines
>>>>> -     * whether a pagefault occurs for supervisor execution on user mappings.
>>>>> -     */
>>>>> -    if ( !(guest_nx_enabled(v) || guest_smep_enabled(v)) )
>>>>> -        walk &= ~PFEC_insn_fetch;
>>>>> -
>>>>>      perfc_incr(guest_walk);
>>>>>      memset(gw, 0, sizeof(*gw));
>>>>>      gw->va = va;
>>>>> -    gw->pfec = walk & (PFEC_insn_fetch | PFEC_user_mode | PFEC_write_access);
>>>>> +    gw->pfec = walk & (PFEC_user_mode | PFEC_write_access);
>>>>> +
>>>>> +    /*
>>>>> +     * PFEC_insn_fetch is only reported if NX or SMEP are enabled.  Hardware
>>>>> +     * still distinguishes instruction fetches during determination of access
>>>>> +     * rights.
>>>>> +     */
>>>>> +    if ( guest_nx_enabled(v) || guest_smep_enabled(v) )
>>>>> +        gw->pfec |= (walk & PFEC_insn_fetch);
>>>>>  
>>>>>  #if GUEST_PAGING_LEVELS >= 3 /* PAE or 64... */
>>>>>  #if GUEST_PAGING_LEVELS >= 4 /* 64-bit only... */
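
For context, the rule being introduced above boils down to the
following minimal standalone sketch.  The PFEC_* values mirror the
hardware page-fault error-code bits, and the stubbed booleans stand
in for Xen's guest_nx_enabled()/guest_smep_enabled() predicates;
reported_pfec() itself is illustrative, not Xen code:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative values, matching the hardware error-code bit layout. */
#define PFEC_write_access (1u << 1)
#define PFEC_user_mode    (1u << 2)
#define PFEC_insn_fetch   (1u << 4)

/* Stubs standing in for Xen's per-vcpu predicates. */
static bool nx_enabled   = false;
static bool smep_enabled = false;

/*
 * The fetch bit still takes part in the access-rights calculation,
 * but is only *reported* in the error code when NX or SMEP is on.
 */
static uint32_t reported_pfec(uint32_t walk)
{
    uint32_t pfec = walk & (PFEC_user_mode | PFEC_write_access);

    if ( nx_enabled || smep_enabled )
        pfec |= walk & PFEC_insn_fetch;

    return pfec;
}

int main(void)
{
    /* Without NX/SMEP, an instruction fetch reports like a data read. */
    return reported_pfec(PFEC_insn_fetch | PFEC_user_mode) ==
           PFEC_user_mode ? 0 : 1;
}
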
>>>> Don't you need another adjustment to
>>>>
>>>>     if ( (walk & PFEC_insn_fetch) && (ar & _PAGE_NX_BIT) )
>>>>         /* Requested an instruction fetch and found NX? Fail. */
>>>>         goto out;
>>>>
>>>> I can't see anything that would keep _PAGE_NX_BIT out of
>>>> ar if NX is not enabled.
>>> _PAGE_NX_BIT is reserved if NX is not enabled, and is accounted for in
>>> guest_rsvd_bits() in guest_pt.h, and we never hit the access rights logic.
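
Spelled out, the reserved-bit argument looks like this.  A cut-down
sketch only: Xen's real guest_rsvd_bits() also folds in the level-
and paging-mode-specific reserved bits, omitted here:

#include <stdbool.h>
#include <stdint.h>

#define _PAGE_NX_BIT (1ULL << 63)

/*
 * With EFER.NXE clear in the guest, bit 63 is a reserved bit, so any
 * PTE with it set fails the walk (as a reserved-bit fault) before the
 * access-rights logic ever runs.
 */
static uint64_t rsvd_bits_sketch(bool guest_nx_enabled)
{
    uint64_t rsvd = 0;

    if ( !guest_nx_enabled )
        rsvd |= _PAGE_NX_BIT;

    return rsvd;
}
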
>> Ah, right. But perhaps worth having a respective ASSERT()
>> here, at once serving as documentation?
> 
> I could, but it would feel out of place.  NX being incorrectly set is
> a translation failure, and by definition, the translation needs to have
> succeeded before permissions get considered.
> 
> Would this clarification be acceptable?
> 
> index 5c6a85b..6d6b454 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -360,8 +360,9 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>      gw->pfec |= PFEC_page_present;
>  
>      /*
> -     * The pagetable walk has returned a successful translation.  Now check
> -     * access rights to see whether the access should succeed.
> +     * The pagetable walk has returned a successful translation (i.e. all
> +     * PTEs are present and have no reserved bits set).  Now check access
> +     * rights to see whether the access should succeed.
>       */

While this perhaps is a worthwhile addition, my original request
really was to make it more visible, right at the place where it
matters, that the NX bit is part of the reserved ones when NX is
off. Hence I'm not sure the comment change is worthwhile, and if
you dislike adding the suggested ASSERT(), I won't object to the
patch being left as is.
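
For reference, the ASSERT() in question would presumably sit
immediately ahead of the NX permission check quoted earlier, along
these lines (an untested sketch, not part of the patch):

    /* NX can only be seen set here if the guest has NX enabled;
     * otherwise the walk would already have failed on a reserved bit. */
    ASSERT(guest_nx_enabled(v) || !(ar & _PAGE_NX_BIT));

    if ( (walk & PFEC_insn_fetch) && (ar & _PAGE_NX_BIT) )
        /* Requested an instruction fetch and found NX? Fail. */
        goto out;
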

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel