
Re: [Xen-devel] [PATCH] vm_event: Record FS_BASE/GS_BASE during events

On Thu, Feb 11, 2016 at 2:11 PM, Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx> wrote:


On Thu, Feb 11, 2016 at 1:59 PM, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx> wrote:
On 02/11/2016 10:38 PM, Tamas K Lengyel wrote:
>
>
> On Thu, Feb 11, 2016 at 1:13 PM, Razvan Cojocaru
> <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>
>    On 02/11/2016 10:04 PM, Andrew Cooper wrote:
>    > On 11/02/16 20:00, Razvan Cojocaru wrote:
>    >> On 02/11/2016 09:55 PM, Andrew Cooper wrote:
>    >>> On 11/02/16 19:54, Razvan Cojocaru wrote:
>    >>>> On 02/11/2016 09:51 PM, Tamas K Lengyel wrote:
>    >>>>> While the public vm_event header specifies fs_base/gs_base as
>    >>>>> registers that should be recorded for each event, that hasn't
>    >>>>> actually been the case. In this patch we remedy the issue.
>    >>>>>
>    >>>>> Signed-off-by: Tamas K Lengyel <tlengyel@xxxxxxxxxxx>
>    >>>>> Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>    >>>>> Cc: Keir Fraser <keir@xxxxxxx>
>    >>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>    >>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>    >>>>> ---
>    >>>>>  xen/arch/x86/hvm/event.c | 9 ++++++++-
>    >>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>    >>>> Fair enough.
>    >>>>
>    >>>> Acked-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>    >>> Oops.
>    >>>
>    >>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>    >> This has actually been intentional, in that we've only needed those
>    >> fields for EPT events, and we thought that not filling in what isn't
>    >> needed until it's needed would save a tiny bit of hypervisor
>    >> processing time. They are being filled in only for page fault events
>    >> at the moment.
>    >>
>    >> I believe this was discussed at the time. We still don't need those
>    >> fields coming with the events that use hvm_event_fill_regs(), but if
>    >> Tamas needs them then by all means.
>    >
>    > The public header file does suggest that all of vm_event_regs_x86 will
>    > be complete. Are there any other fields currently missing?
>
>    There are. p2m_vm_event_fill_regs() fills everything in (in
>    xen/arch/x86/mm/p2m.c). hvm_event_fill_regs() still does not, even
>    after Tamas' patch.
>
>
> Ah, that makes sense. Yeah, I would prefer that all registers get
> filled in for all events, so I'll just consolidate these two functions
> into one.

Right, but please be careful and test that you get correct values with
all events (page fault events plus the others). I remember that for
some reason I needed to use different ways to get at the same values in
p2m_vm_event_fill_regs() and hvm_event_fill_regs().

For example, p2m_vm_event_fill_regs() does:

hvm_funcs.save_cpu_ctxt(curr, &ctxt);
req->data.regs.x86.cr0 = ctxt.cr0;

and hvm_event_fill_regs() does:

req->data.regs.x86.cr0 = curr->arch.hvm_vcpu.guest_cr[0];

I don't remember exactly why I had to do that at the time, but I do
recall it being necessary.
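
(For reference, ctxt there is a local save record, declared and filled
right above the cr0 line:

    struct hvm_hw_cpu ctxt;

    hvm_funcs.save_cpu_ctxt(curr, &ctxt);

so that path reads the state back out of the VMCS/VMCB via the full
save-context handler, while hvm_event_fill_regs() uses the vCPU's
cached guest_cr[] values directly.)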

That sounds odd to me. As far as I can tell, everything works just right
with the other patch I just sent. I looked into what
hvm_funcs.save_cpu_ctxt does on Intel: it calls vmx_save_vmcs_ctxt,
which calls vmx_vmcs_save. That has:


    c->cr0 = v->arch.hvm_vcpu.guest_cr[0];
    c->cr2 = v->arch.hvm_vcpu.guest_cr[2];
    c->cr3 = v->arch.hvm_vcpu.guest_cr[3];
    c->cr4 = v->arch.hvm_vcpu.guest_cr[4];

So there shouldn't really be any difference here.
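
FWIW, this is roughly the consolidated helper I have in mind (just an
untested sketch; the name vm_event_fill_regs is a placeholder and the
final field list will follow struct vm_event_regs_x86):

    static void vm_event_fill_regs(vm_event_request_t *req)
    {
        const struct cpu_user_regs *regs = guest_cpu_user_regs();
        struct hvm_hw_cpu ctxt;
        struct vcpu *curr = current;

        /* One call fetches the arch-specific state (CRs, segment
           bases, EFER) from the VMCS/VMCB, so EPT events and the
           hvm_event_* callers share the same path. */
        hvm_funcs.save_cpu_ctxt(curr, &ctxt);

        req->data.regs.x86.rax = regs->rax;
        req->data.regs.x86.rip = regs->rip;
        /* ... and the rest of the GPRs from regs ... */

        req->data.regs.x86.cr0 = ctxt.cr0;
        req->data.regs.x86.cr2 = ctxt.cr2;
        req->data.regs.x86.cr3 = ctxt.cr3;
        req->data.regs.x86.cr4 = ctxt.cr4;
        req->data.regs.x86.fs_base = ctxt.fs_base;
        req->data.regs.x86.gs_base = ctxt.gs_base;
        req->data.regs.x86.msr_efer = ctxt.msr_efer;
    }

The GPRs still come straight from guest_cpu_user_regs(); the only extra
cost of unifying the two paths is the save_cpu_ctxt call on every
event, which is the overhead you mentioned avoiding.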

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
