
Re: [PATCH v2 4/7] x86/vmx: add do_vmtrace_op



----- On 22 Jun 2020, at 17:22, Jan Beulich jbeulich@xxxxxxxx wrote:

> On 22.06.2020 16:35, Michał Leszczyński wrote:
>> ----- On 22 Jun 2020, at 15:25, Jan Beulich jbeulich@xxxxxxxx wrote:
>>> On 19.06.2020 01:41, Michał Leszczyński wrote:
>>>> +
>>>> +    domain_pause(d);
>>>
>>> Who's the intended caller of this interface? You making it a hvm-op
>>> suggests the guest may itself call this. But of course a guest
>>> can't pause itself. If this is supposed to be a tools-only interface,
>>> then you should frame it suitably in the public header and of course
>>> you need to enforce this here (which would e.g. mean you shouldn't
>>> use rcu_lock_domain_by_any_id()).
>>>
>> 
>> What should I use instead of rcu_lock_domain_by_any_id()?
> 
> Please take a look at the header where its declaration lives. It's
> admittedly not the usual thing in Xen, but there are even comments
> describing the differences between the four related by-id functions.
> I guess rcu_lock_live_remote_domain_by_id() is the one you want to
> use, despite being puzzled by there being surprisingly few uses
> elsewhere.
> 

Ok, I will correct this.
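
For my own reference, the lookup would then look roughly like this (sketch
only; "a" stands for the copied-in hypercall argument, and the is_hvm_domain()
check and -EOPNOTSUPP path are my assumptions, not something settled in this
thread):

    struct domain *d;
    int rc;

    /* Unlike rcu_lock_domain_by_any_id(), this refuses DOMID_SELF and
     * dying domains. */
    rc = rcu_lock_live_remote_domain_by_id(a.domain, &d);
    if ( rc )
        return rc;

    if ( !is_hvm_domain(d) )
    {
        rcu_unlock_domain(d);
        return -EOPNOTSUPP;
    }

    /* ... handle the sub-op ... */

    rcu_unlock_domain(d);
    return rc;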

>>> Also please take a look at hvm/ioreq.c, which makes quite a bit of
>>> use of domain_pause(). In particular I think you want to acquire
>>> the lock only after having paused the domain.
>> 
>> This domain_pause() will be changed to vcpu_pause().
> 
> And you understand that my comment then still applies?

If you mean that we should first call vcpu_pause()
and only then acquire the spinlock, then yes, this will be corrected in v3.
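
I.e. roughly the following ordering (the lock name is just a placeholder for
whatever per-vcpu lock v3 ends up using):

    vcpu_pause(v);
    spin_lock(&v->arch.vmtrace_lock);   /* placeholder lock name */

    /* ... flip the IPT state for this vcpu ... */

    spin_unlock(&v->arch.vmtrace_lock);
    vcpu_unpause(v);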

>>> Is any of what you do in this switch() actually legitimate without
>>> hvm_set_vmtrace_pt_size() having got called for the guest? From
>>> remarks elsewhere I infer that you expect the param that you currently
>>> use to be set upon domain creation time, but at the very least the
>>> potentially big buffer should imo not get allocated up front, but
>>> only when tracing is to actually be enabled.
>> 
>> Wait... so you want to allocate these buffers at runtime?
>> Previously we discussed that there was too much runtime logic
>> and that the enable/disable hypercalls should be stripped to the
>> absolute minimum.
> 
> Basic arrangements can be made at domain creation time. I don't
> think though that it would be a good use of memory if you
> allocated perhaps many gigabytes of memory just for possibly
> wanting to enable tracing on a guest.
> 

From our previous conversations I understood that you wanted as much
logic as possible moved to domain creation time.

Thus, the "vmtrace_pt_size" parameter was introduced. By default it is
zero (= disabled); if you set it to a non-zero value, trace buffers of the
given size will be allocated for the domain, and ipt_enable/ipt_disable
can then be used at any moment.

This way the runtime logic is as thin as possible. I assume the user knows
in advance whether they will want to use external monitoring with IPT
or not.

It's also not "many gigabytes". In most use cases a buffer of 16/32/64 MB
would suffice, I think.

If we want to fall back to the scenario where the trace buffer is
allocated dynamically, then we basically end up back at the patch v1
implementation.
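
To make the intended arrangement explicit, the creation-time gating I have in
mind looks roughly like this (a sketch only; the helper name, the
d->vmtrace_pt_size field and the v->arch.hvm.vmx.ipt_buf field are
placeholders and may not match what finally lands in the series):

    static int vmtrace_alloc_buffer(struct vcpu *v)
    {
        struct domain *d = v->domain;
        struct page_info *pg;

        /* Nothing is allocated when the toolstack left the parameter at 0. */
        if ( !d->vmtrace_pt_size )
            return 0;

        pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_pt_size),
                                 MEMF_no_refcount);
        if ( !pg )
            return -ENOMEM;

        v->arch.hvm.vmx.ipt_buf = pg;   /* placeholder field name */
        return 0;
    }

ipt_enable/ipt_disable would then only program the MSRs on an
already-provisioned vcpu.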

>>>> +struct xen_hvm_vmtrace_op {
>>>> +    /* IN variable */
>>>> +    uint32_t version;   /* HVMOP_VMTRACE_INTERFACE_VERSION */
>>>> +    uint32_t cmd;
>>>> +/* Enable/disable external vmtrace for given domain */
>>>> +#define HVMOP_vmtrace_ipt_enable      1
>>>> +#define HVMOP_vmtrace_ipt_disable     2
>>>> +#define HVMOP_vmtrace_ipt_get_offset  3
>>>> +    domid_t domain;
>>>> +    uint32_t vcpu;
>>>> +    uint64_t size;
>>>> +
>>>> +    /* OUT variable */
>>>> +    uint64_t offset;
>>>
>>> If this is to be a tools-only interface, please use uint64_aligned_t.
>>>
>> 
>> This type is not defined within the hvm_op.h header. What should I do about it?
> 
> It gets defined by xen.h, so should be available here. Its
> definitions live in a
> 
> #if defined(__XEN__) || defined(__XEN_TOOLS__)
> 
> section, which is what I did recommend to put your interface in
> as well. Unless you want this to be exposed to the guest itself,
> at which point further constraints would arise.
> 
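So, to confirm my understanding, for v3 the public header part would become
something like the following (a sketch; the handle typedef and the exact
placement within hvm_op.h are my guesses):

#if defined(__XEN__) || defined(__XEN_TOOLS__)

struct xen_hvm_vmtrace_op {
    /* IN variable */
    uint32_t version;   /* HVMOP_VMTRACE_INTERFACE_VERSION */
    uint32_t cmd;
/* Enable/disable external vmtrace for given domain */
#define HVMOP_vmtrace_ipt_enable      1
#define HVMOP_vmtrace_ipt_disable     2
#define HVMOP_vmtrace_ipt_get_offset  3
    domid_t domain;
    uint32_t vcpu;
    uint64_aligned_t size;

    /* OUT variable */
    uint64_aligned_t offset;
};
typedef struct xen_hvm_vmtrace_op xen_hvm_vmtrace_op_t;
DEFINE_XEN_GUEST_HANDLE(xen_hvm_vmtrace_op_t);

#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
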
>>> You also want to add an entry to xen/include/xlat.lst and use the
>>> resulting macro to prove that the struct layout is the same for
>>> native and compat callers.
>> 
>> Could you tell me a little bit more about this? What are "native" and
>> "compat" callers and what is the purpose of this file?
> 
> A native caller is one whose bitness matches Xen's, i.e. for x86
> a guest running in 64-bit mode. A compat guest is one running in
> 32-bit mode. Interface structure layout, at least for historical
> reasons, can differ between the two. If it does, these
> structures need translation. In the case here the layouts look
> to match, which we still want to be verified at build time. If
> you add a suitable line to xlat.lst starting with a ?, a macro
> will be generated that you can then invoke somewhere near the
> code that actually handles this sub-hypercall. See e.g. the top
> of xen/common/hypfs.c for a relatively recent addition of such.
> 

Thanks for this explanation, I will try to address this.
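
If I read xlat.lst correctly, that would mean adding an entry along the lines
of (the exact name and header path below are my guess for this struct,
assuming hvm/hvm_op.h is among the headers the compat machinery processes):

?	hvm_vmtrace_op			hvm/hvm_op.h

and then, near the code handling the sub-hypercall, something similar to what
hypfs.c does:

#ifdef CONFIG_COMPAT
#include <compat/hvm/hvm_op.h>
CHECK_hvm_vmtrace_op;
#endif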


Best regards,
Michał Leszczyński
CERT Polska



 

