
Re: [PATCH v1 4/7] x86/vmx: add do_vmtrace_op



On Thu, Jun 18, 2020 at 03:09:57PM +0200, Michał Leszczyński wrote:
> ----- 18 cze 2020 o 14:51, Jan Beulich jbeulich@xxxxxxxx napisał(a):
> 
> > On 18.06.2020 13:55, Roger Pau Monné wrote:
> >> On Thu, Jun 18, 2020 at 01:01:39PM +0200, Michał Leszczyński wrote:
> >>> It was previously stated that:
> >>>
> >>>> PVH or HVM domain
> >>>> won't be able to use this interface since it has no way to request the
> >>>> mapping of a specific mfn into its physmap.
> >>>
> >>> However, taking LibVMI as an example:
> >>>
> >>> https://github.com/libvmi/libvmi/blob/c461e20ae88bc62c08c27f50fcead23c27a30f9e/libvmi/driver/xen/xen.c#L51
> >>>
> >>> An essential abstraction, xen_get_memory(), relies on
> >>> xc_map_foreign_range(). Doesn't this mean that it's not usable from
> >>> PVH or HVM domains, or did I get it all wrong?
> >> 
> >> That was my fault: the buffer mfns are assigned to Xen, and the Xen
> >> domain ID is then used to map them, which should work on both PV
> >> and HVM (or PVH).
> >> 
> >> I still think using XENMEM_acquire_resource might be better, but I
> >> would let others comment.
> > 
> > +1
> > 
> > Jan
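
To illustrate the "map via the Xen domain ID" part: once the buffer
frames are assigned to Xen, the toolstack can map them with DOMID_XEN.
An untested sketch (NR_PAGES and the contents of mfns[] are hypothetical
here; the mfn list would have to come from some other interface):

    #include <sys/mman.h>          /* PROT_READ */
    #include <xenforeignmemory.h>  /* xenforeignmemory_map(), xen_pfn_t */

    #define NR_PAGES 128           /* hypothetical buffer size */

    /* mfns[] holds the Xen-owned buffer frames, obtained elsewhere;
     * errs[] receives a per-page mapping error. */
    xen_pfn_t mfns[NR_PAGES];
    int errs[NR_PAGES];
    void *buf = xenforeignmemory_map(fmem, DOMID_XEN, PROT_READ,
                                     NR_PAGES, mfns, errs);
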
> 
> 
> I'm trying to implement this right now. I've added some very simple code to 
> mm.c just for testing:
> 
> ---
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index e376fc7e8f..aaaefe6d23 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4624,6 +4624,26 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>          }
>          break;
>      }
> +
> +    case XENMEM_resource_vmtrace_buf:
> +    {
> +        uint64_t output_base;
> +        mfn_t mfn;
> +        unsigned int i;
> +
> +        printk("vmtrace buf acquire\n");
> +        output_base = d->vcpu[id]->arch.hvm.vmx.ipt_state->output_base;
> +        mfn = mfn_x(output_base >> PAGE_SHIFT);
> +
> +        rc = 0;
> +        for ( i = 0; i < nr_frames; i++ )
> +        {
> +            __map_domain_page_global(mfn_to_page(mfn + i));

I don't think you need the __map_domain_page_global?

> +            mfn_list[i] = mfn + i;

I think you need mfn_add here, or else this won't build?

> +        }
> +
> +        break;
> +    }
>  #endif
> 
>      default:
> 
> ---
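
FWIW, folding both of the above comments in, the hunk might look like
this (untested sketch: ipt_state/output_base are taken from your patch,
and the 'frame' offset is still ignored, as in your version):

    case XENMEM_resource_vmtrace_buf:
    {
        mfn_t mfn;
        unsigned int i;

        /* output_base holds the machine address of the PT output buffer. */
        mfn = _mfn(d->vcpu[id]->arch.hvm.vmx.ipt_state->output_base
                   >> PAGE_SHIFT);

        rc = 0;
        for ( i = 0; i < nr_frames; i++ )
            /* mfn_add()/mfn_x() keep this building with typesafe mfn_t. */
            mfn_list[i] = mfn_x(mfn_add(mfn, i));

        break;
    }
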
> 
> 
> and then in my "proctrace" tool I'm trying to acquire it like this:
> 
>     fres = xenforeignmemory_map_resource(
>         fmem, domid, XENMEM_resource_vmtrace_buf,
>         /* vcpu: */ 0, /* frame: */ 0, /* num_frames: */ 128, (void **)&buf,
>         PROT_READ, 0);
> 
> 
> The ioctl fails with "Argument list too long" (E2BIG). It works fine when
> I provide a small number of frames (e.g. num_frames: 1 or 32), but fails
> for anything larger.
> 
> How should I proceed with this? The PT buffer could be large, even up to 4 GB.

I think adding a loop and hypercall continuation support could make
this work without having to expand the size of mfn_list and
gfn_list?
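
Until then, a stopgap on the tools side could be to map the buffer in
chunks, one resource handle per chunk. A rough, untested sketch (fres[]
and total_frames are assumed to be set up by the caller; note the
chunks won't be virtually contiguous, and the hypervisor side would
also need to honour the 'frame' offset):

    #define CHUNK 32   /* frames per ioctl; whatever privcmd accepts */

    for ( unsigned int frame = 0; frame < total_frames; frame += CHUNK )
    {
        unsigned int n = total_frames - frame < CHUNK
                         ? total_frames - frame : CHUNK;
        void *chunk = NULL;

        fres[frame / CHUNK] = xenforeignmemory_map_resource(
            fmem, domid, XENMEM_resource_vmtrace_buf,
            /* vcpu: */ 0, frame, n, &chunk, PROT_READ, 0);
        if ( !fres[frame / CHUNK] )
            break; /* error handling omitted */
    }
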

Thanks, Roger.
