
Re: [Xen-devel] [PATCH RFC 0/1] Introduce VCPUOP_reset_vcpu_info



>>> On 06.08.14 at 15:08, <vkuznets@xxxxxxxxxx> wrote:
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -183,8 +183,6 @@ static void xen_vcpu_setup(int cpu)
>          * This path is called twice on PVHVM - first during bootup via
>          * smp_init -> xen_hvm_cpu_notify, and then if the VCPU is being
>          * hotplugged: cpu_up -> xen_hvm_cpu_notify.
> -        * As we can only do the VCPUOP_register_vcpu_info once lets
> -        * not over-write its result.
>          *
>          * For PV it is called during restore (xen_vcpu_restore) and bootup
>          * (xen_setup_vcpu_info_placement). The hotplug mechanism does not
> @@ -207,14 +205,23 @@ static void xen_vcpu_setup(int cpu)
>         info.mfn = arbitrary_virt_to_mfn(vcpup);
>         info.offset = offset_in_page(vcpup);
>  
> +       /*
> +        * Call VCPUOP_reset_vcpu_info before VCPUOP_register_vcpu_info, this
> +        * is required if we boot after kexec.
> +        */
> +
> +       if (cpu != 0) {
> +               err = HYPERVISOR_vcpu_op(VCPUOP_reset_vcpu_info, cpu, NULL);
> +               if (err)
> +                       pr_warn("VCPUOP_reset_vcpu_info for CPU%d failed: 
> %d\n",
> +                               cpu, err);
> +       }

Just so I understand why exactly you need the new operation: why is
this being done here, when you already do the reset in the
cpu-die/shutdown paths? And why not for CPU 0?
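
(For concreteness, the teardown-path reset I am referring to would be
something along the lines of the sketch below. The hook name
xen_teardown_vcpu_info() is an illustrative assumption rather than code
from this series; the hypercall is the new operation the series
introduces.)

#include <linux/kernel.h>
#include <asm/xen/hypercall.h>
#include <xen/interface/vcpu.h>   /* VCPUOP_reset_vcpu_info (new op) */

/*
 * Sketch only: undo the vcpu_info registration for a CPU that is going
 * away, so that a later kexec'ed kernel can register its own area.
 */
static void xen_teardown_vcpu_info(unsigned int cpu)
{
	int err;

	err = HYPERVISOR_vcpu_op(VCPUOP_reset_vcpu_info, cpu, NULL);
	if (err)
		pr_warn("VCPUOP_reset_vcpu_info for CPU%u failed: %d\n",
			cpu, err);
}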

Furthermore, what is the state of vCPU-s beyond 31 going to be after
they have had their vcpu_info reset? They won't have any other area to
fall back to, yet I don't think you can now and forever guarantee that
native_cpu_die() won't do anything requiring that structure.
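
(To illustrate why vCPUs 32 and up are special: the legacy fallback in
xen_vcpu_setup() only covers the first MAX_VIRT_CPUS slots of the
shared_info page, roughly as in the sketch below. The helper name is
made up; the logic mirrors what enlighten.c does today.)

#include <linux/percpu.h>
#include <xen/interface/xen.h>     /* struct vcpu_info, MAX_VIRT_CPUS */
#include <asm/xen/hypervisor.h>    /* HYPERVISOR_shared_info */

DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);

/*
 * Sketch only: a vCPU below MAX_VIRT_CPUS (32) can fall back to its
 * slot in the shared_info page; a vCPU numbered 32 or higher has no
 * such slot and is left with no vcpu_info at all.
 */
static void xen_vcpu_fall_back(int cpu)
{
	if (cpu < MAX_VIRT_CPUS)
		per_cpu(xen_vcpu, cpu) =
			&HYPERVISOR_shared_info->vcpu_info[cpu];
	else
		per_cpu(xen_vcpu, cpu) = NULL;	/* no fallback area */
}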

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

