
Re: [Xen-devel] [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func



On 8/22/18 5:02 PM, Alexandru Isaila wrote:
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> ---
> Changes since V14:
>       - Move all free fields to the initializer
>       - Add blank line to before the return
>       - Move v->pause_flags check to the save_one function.
> ---
>  xen/arch/x86/hvm/hvm.c | 219 +++++++++++++++++++++++++------------------------
>  1 file changed, 113 insertions(+), 106 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d90da9a..333c342 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>  
> -static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>  {
> -    struct vcpu *v;
> -    struct hvm_hw_cpu ctxt;
>      struct segment_register seg;
> +    struct hvm_hw_cpu ctxt = {
> +        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
> +        .msr_tsc_aux = hvm_msr_tsc_aux(v),
> +        .rax = v->arch.user_regs.rax,
> +        .rbx = v->arch.user_regs.rbx,
> +        .rcx = v->arch.user_regs.rcx,
> +        .rdx = v->arch.user_regs.rdx,
> +        .rbp = v->arch.user_regs.rbp,
> +        .rsi = v->arch.user_regs.rsi,
> +        .rdi = v->arch.user_regs.rdi,
> +        .rsp = v->arch.user_regs.rsp,
> +        .rip = v->arch.user_regs.rip,
> +        .rflags = v->arch.user_regs.rflags,
> +        .r8  = v->arch.user_regs.r8,
> +        .r9  = v->arch.user_regs.r9,
> +        .r10 = v->arch.user_regs.r10,
> +        .r11 = v->arch.user_regs.r11,
> +        .r12 = v->arch.user_regs.r12,
> +        .r13 = v->arch.user_regs.r13,
> +        .r14 = v->arch.user_regs.r14,
> +        .r15 = v->arch.user_regs.r15,
> +        .dr0 = v->arch.debugreg[0],
> +        .dr1 = v->arch.debugreg[1],
> +        .dr2 = v->arch.debugreg[2],
> +        .dr3 = v->arch.debugreg[3],
> +        .dr6 = v->arch.debugreg[6],
> +        .dr7 = v->arch.debugreg[7],
> +    };
>  
> -    for_each_vcpu ( d, v )
> +    /*
> +     * We don't need to save state for a vcpu that is down; the restore
> +     * code will leave it down if there is nothing saved.
> +     */
> +    if ( v->pause_flags & VPF_down )
> +        return 0;

Actually, we'd like to remove this if() if possible: even if the VCPU
is down, we should still be able to query it and receive whatever state
it is in, per the Intel SDM, section 9.1.1 "Processor State After
Reset". Any objections to this?

Also, reading the comment and looking at the code, it appears that the
end of hvm_load_cpu_ctxt() actually wakes the VCPU up:

1152     v->arch.debugreg[7] = ctxt.dr7;
1153
1154     v->arch.vgc_flags = VGCF_online;
1155
1156     /* Auxiliary processors should be woken immediately. */
1157     v->is_initialised = 1;
1158     clear_bit(_VPF_down, &v->pause_flags);
1159     vcpu_wake(v);
1160
1161     return 0;
1162 }

which appears to be non-architectural behaviour, if we've interpreted
section 8.4 of Volume 3 of the SDM correctly: it basically says that
VCPUs should be woken by IPIs (though this is outside the scope of this
patch).
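The difference between the two wake-up policies can be sketched with the same kind of toy model (again, illustrative names only, not the real Xen code): the current load path wakes the vcpu as a side effect of restoring its context, whereas the architectural sequence would leave it down until an INIT-SIPI arrives.

```c
#include <assert.h>

#define VPF_down 0x1u

struct toy_vcpu {
    unsigned int pause_flags;
    int is_initialised;
};

/* Models the tail of hvm_load_cpu_ctxt(): restore wakes the vcpu. */
static void load_wake_immediately(struct toy_vcpu *v)
{
    v->is_initialised = 1;
    v->pause_flags &= ~VPF_down;   /* clear_bit(_VPF_down, ...); vcpu_wake() */
}

/* Alternative policy: restore state but leave the vcpu down. */
static void load_leave_down(struct toy_vcpu *v)
{
    (void)v;                       /* context restored; no wake here */
}

/* The vcpu only comes online when an INIT-SIPI IPI is delivered. */
static void deliver_sipi(struct toy_vcpu *v)
{
    v->is_initialised = 1;
    v->pause_flags &= ~VPF_down;
}
```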


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

