
Re: [Xen-devel] [PATCH v3 24/38] arm/p2m: Make p2m_mem_access_check ready for altp2m



Hi Julien,


On 09/12/2016 11:02 AM, Julien Grall wrote:
> Hello Sergej,
>
> On 16/08/2016 23:17, Sergej Proskurin wrote:
>> This commit extends the functions "p2m_mem_access_check" and
>> "p2m_mem_access_check_and_get_page" to consider altp2m. The function
>> "p2m_mem_access_check_and_get_page" needs to translate the gva upon the
>> hostp2m's vttbr, as it contains all valid mappings while the currently
>> active altp2m view might not have the required gva mapping yet.
>>
>> Also, the new implementation fills the request buffer to hold
>> altp2m-related information.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
>> ---
>> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> Cc: Julien Grall <julien.grall@xxxxxxx>
>> ---
>> v3: Extended the function "p2m_mem_access_check_and_get_page" to
>>     consider altp2m. Similar to "get_page_from_gva", the function
>>     "p2m_mem_access_check_and_get_page" needs to translate the gva upon
>>     the hostp2m's vttbr. Although the function "gva_to_ipa" (called in
>>     "p2m_mem_access_check_and_get_page") performs a stage 1 table walk,
>>     it will access page tables residing in memory. Accesses to this
>>     memory are controlled by the underlying 2nd stage translation table
>>     and hence require the original mappings of the hostp2m.
>> ---
>>  xen/arch/arm/p2m.c | 43 +++++++++++++++++++++++++++++++++++++++----
>>  1 file changed, 39 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 5819ae0..ed9e0f0 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -14,6 +14,7 @@
>>  #include <asm/hardirq.h>
>>  #include <asm/page.h>
>>
>> +#include <asm/vm_event.h>
>>  #include <asm/altp2m.h>
>>
>>  #ifdef CONFIG_ARM_64
>> @@ -1479,9 +1480,32 @@ p2m_mem_access_check_and_get_page(struct vcpu *v, vaddr_t gva, unsigned long fla
>>      xenmem_access_t xma;
>>      p2m_type_t t;
>>      struct page_info *page = NULL;
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    /*
>> +     * If altp2m is active, we need to translate the gva upon the
>> +     * hostp2m's vttbr, as it contains all valid mappings while the
>> +     * currently active altp2m view might not have the required gva
>> +     * mapping yet. Although the function gva_to_ipa performs a stage 1
>> +     * table walk, it will access page tables residing in memory.
>> +     * Accesses to this memory are controlled by the underlying 2nd
>> +     * stage translation table and hence require the original mappings
>> +     * of the hostp2m.
>
> As I already mentioned a few times now, this function is broken and
> needs to be fixed before any more changes are made to it.
>
> The underlying memory of the stage-1 page tables may have been restricted,
> and therefore the hardware page table walk (gva_to_ipa) may fail.
>

Based on our previous discussion, I believed it would be enough to
temporarily change the VTTBR to the host p2m's, as is done in the
implementation below. Your argument was that, without changing the
VTTBR, address translation might fail, as accesses to the memory backing
the stage-1 page tables might be restricted in the active altp2m view.

Would it not be sufficient to temporarily switch to the VTTBR of the
host's p2m, as I did in the following implementation? That way, as far
as I understand, restrictions in the active altp2m view should not
affect the address translation.

>> +     */
>> +    if ( unlikely(altp2m_active(d)) )
>> +    {
>> +        unsigned long flags = 0;
>> +        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
>> +
>> +        p2m_switch_vttbr_and_get_flags(ovttbr, p2m->vttbr, flags);
>> +
>> +        rc = gva_to_ipa(gva, &ipa, flag);
>> +
>> +        p2m_restore_vttbr_and_set_flags(ovttbr, flags);
>> +    }
>> +    else
>> +        rc = gva_to_ipa(gva, &ipa, flag);
>>
>> -    rc = gva_to_ipa(gva, &ipa, flag);
>>      if ( rc < 0 )
>>          goto err;
>>
>> @@ -1698,13 +1722,16 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
>>      xenmem_access_t xma;
>>      vm_event_request_t *req;
>>      struct vcpu *v = current;
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *p2m = p2m_get_active_p2m(v);
>>
>>      /* Mem_access is not in use. */
>>      if ( !p2m->mem_access_enabled )
>>          return true;
>>
>> -    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
>> +    p2m_read_lock(p2m);
>> +    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(gpa)), &xma);
>> +    p2m_read_unlock(p2m);
>>      if ( rc )
>>          return true;
>>
>> @@ -1810,6 +1837,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
>>          req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
>>          req->vcpu_id = v->vcpu_id;
>>
>> +        vm_event_fill_regs(req);
>
> I don't think this change belongs to this patch.
>

Thanks.

>> +
>> +        if ( unlikely(altp2m_active(d)) )
>> +        {
>> +            req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
>> +            req->altp2m_idx = altp2m_vcpu(v).p2midx;
>> +        }
>> +
>>          mem_access_send_req(v->domain, req);
>>          xfree(req);
>>      }
>>

Cheers,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel