
Re: [Xen-devel] [PATCH v10] x86/emulate: Send vm_event from emulate



On 16.09.2019 10:10, Alexandru Stefan ISAILA wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3224,6 +3224,14 @@ static enum hvm_translation_result __hvm_copy(
>              return HVMTRANS_bad_gfn_to_mfn;
>          }
>  
> +        if ( unlikely(v->arch.vm_event) &&
> +             v->arch.vm_event->send_event &&
> +             hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
> +        {
> +            put_page(page);
> +            return HVMTRANS_gfn_paged_out;

I'm sorry, but there is _still_ no comment next to this apparent
mis-use of HVMTRANS_gfn_paged_out.
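To illustrate what I'm after (the exact wording of course being up
to you), something along these lines right ahead of the return would
address my concern:

        if ( unlikely(v->arch.vm_event) &&
             v->arch.vm_event->send_event &&
             hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
        {
            put_page(page);
            /*
             * A vm_event was just sent; the page is not actually paged
             * out. HVMTRANS_gfn_paged_out is (ab)used here merely to
             * make the caller abort the emulation with no side effects.
             */
            return HVMTRANS_gfn_paged_out;
        }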

> @@ -215,6 +217,79 @@ void hvm_monitor_interrupt(unsigned int vector, unsigned int type,
>      monitor_traps(current, 1, &req);
>  }
>  
> +/*
> + * Send memory access vm_events based on pfec. Returns true if the event
> + * was sent; returns false on p2m_get_mem_access() error, when there is no
> + * violation, or when sending the event failed. Assumes the caller will
> + * check arch.vm_event->send_event.
> + *
> + * NOTE: p2m_get_mem_access() can fail if the entry was not found in the EPT
> + * (in which case access to it is unrestricted, so no violations can occur).
> + * In such cases it is fine to continue the emulation.
> + */

I think this part of the comment would better go ...

> +bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
> +                           uint16_t kind)
> +{
> +    xenmem_access_t access;
> +    vm_event_request_t req = {};
> +    paddr_t gpa = (gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK));
> +
> +    ASSERT(current->arch.vm_event->send_event);
> +
> +    current->arch.vm_event->send_event = false;
> +
> +    if ( p2m_get_mem_access(current->domain, gfn, &access,
> +                            altp2m_vcpu_idx(current)) != 0 )
> +        return false;

... next to the call here (but the maintainers of the file would
have to judge in the end). That said, I continue to not understand
why a not-found entry means unrestricted access. Isn't it
->default_access which controls what such a "virtual" entry would
permit?
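
As to the placement aspect - assuming the explanation indeed holds
up - what I'm thinking of is along these lines:

    /*
     * Getting the access rights may fail if there's no EPT entry for
     * the GFN, in which case - per the reasoning here, still to be
     * double checked - no restrictions apply and emulation can simply
     * continue.
     */
    if ( p2m_get_mem_access(current->domain, gfn, &access,
                            altp2m_vcpu_idx(current)) != 0 )
        return false;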

> +    switch ( access )
> +    {
> +    case XENMEM_access_x:
> +    case XENMEM_access_rx:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
> +        break;
> +
> +    case XENMEM_access_w:
> +    case XENMEM_access_rw:
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags = MEM_ACCESS_X;
> +        break;
> +
> +    case XENMEM_access_r:
> +    case XENMEM_access_n:
> +        if ( pfec & PFEC_write_access )
> +            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
> +        if ( pfec & PFEC_insn_fetch )
> +            req.u.mem_access.flags |= MEM_ACCESS_X;
> +        break;
> +
> +    case XENMEM_access_wx:
> +    case XENMEM_access_rwx:
> +    case XENMEM_access_rx2rw:
> +    case XENMEM_access_n2rwx:
> +    case XENMEM_access_default:
> +        break;
> +    }
> +
> +    if ( !req.u.mem_access.flags )
> +        return false; /* no violation */
> +
> +    if ( kind == npfec_kind_with_gla )
> +        req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA |
> +                                  MEM_ACCESS_GLA_VALID;
> +    else if ( kind == npfec_kind_in_gpt )
> +        req.u.mem_access.flags |= MEM_ACCESS_FAULT_IN_GPT |
> +                                  MEM_ACCESS_GLA_VALID;
> +
> +    req.reason = VM_EVENT_REASON_MEM_ACCESS;
> +    req.u.mem_access.gfn = gfn_x(gfn);
> +    req.u.mem_access.gla = gla;
> +    req.u.mem_access.offset = gpa & ~PAGE_MASK;
> +
> +    return monitor_traps(current, true, &req) >= 0;
> +}

There are quite a few uses of "current" in here - please consider
latching it into a local variable named "curr".
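I.e. something along the lines of (just a sketch, with the remaining
uses adjusted accordingly):

bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
                           uint16_t kind)
{
    struct vcpu *curr = current;
    xenmem_access_t access;
    vm_event_request_t req = {};
    paddr_t gpa = gfn_to_gaddr(gfn) | (gla & ~PAGE_MASK);

    ASSERT(curr->arch.vm_event->send_event);

    curr->arch.vm_event->send_event = false;

    if ( p2m_get_mem_access(curr->domain, gfn, &access,
                            altp2m_vcpu_idx(curr)) != 0 )
        return false;

    /* ... the rest as is, ending in monitor_traps(curr, true, &req). */
}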

Jan


