
Re: [Xen-devel] [PATCH linux v3 3/9] xen: introduce xen_vcpu_id mapping



Hi Vitaly,

On 2016/9/5 17:42, Vitaly Kuznetsov wrote:
> Julien Grall <julien.grall@xxxxxxx> writes:
>
>> Hi Vitaly,
>>
>> On 26/07/16 13:30, Vitaly Kuznetsov wrote:
>>> It may happen that Xen's and Linux's ideas of vCPU id diverge. In
>>> particular, when we crash on a secondary vCPU we may want to do kdump
>>> and unlike plain kexec where we do migrate_to_reboot_cpu() we try booting
>>> on the vCPU which crashed. This doesn't work very well for PVHVM guests as
>>> we have a number of hypercalls where we pass vCPU id as a parameter. These
>>> hypercalls either fail or do something unexpected. To solve the issue
>>> introduce percpu xen_vcpu_id mapping. ARM and PV guests get direct mapping
>>> for now. Boot CPU for PVHVM guest gets its id from CPUID. With secondary
>>> CPUs it is a bit trickier. Currently, we initialize IPI vectors
>>> before these CPUs boot so we can't use CPUID. Use ACPI ids from MADT
>>> instead.
>>>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
>>> ---
>>> Changes since v2:
>>> - Use uint32_t for xen_vcpu_id mapping [Julien Grall]
>>>
>>> Changes since v1:
>>> - Introduce xen_vcpu_nr() helper [David Vrabel]
>>> - Use ACPI ids instead of vLAPIC ids /2 [Andrew Cooper, Jan Beulich]
>>> ---
>>>  arch/arm/xen/enlighten.c | 10 ++++++++++
>>>  arch/x86/xen/enlighten.c | 23 ++++++++++++++++++++++-
>>>  include/xen/xen-ops.h    |  6 ++++++
>>>  3 files changed, 38 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
>>> index 75cd734..fe32267 100644
>>> --- a/arch/arm/xen/enlighten.c
>>> +++ b/arch/arm/xen/enlighten.c
>>> @@ -46,6 +46,10 @@ struct shared_info *HYPERVISOR_shared_info = (void *)&xen_dummy_shared_info;
>>>  DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
>>>  static struct vcpu_info __percpu *xen_vcpu_info;
>>>
>>> +/* Linux <-> Xen vCPU id mapping */
>>> +DEFINE_PER_CPU(uint32_t, xen_vcpu_id) = U32_MAX;
>>> +EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
>>> +
>>>  /* These are unused until we support booting "pre-ballooned" */
>>>  unsigned long xen_released_pages;
>>>  struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS] __initdata;
>>> @@ -179,6 +183,9 @@ static void xen_percpu_init(void)
>>>          pr_info("Xen: initializing cpu%d\n", cpu);
>>>          vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>>>
>>> +        /* Direct vCPU id mapping for ARM guests. */
>>> +        per_cpu(xen_vcpu_id, cpu) = cpu;
>>> +
>>
>> We did some internal testing on ARM64 with the latest Linux kernel
>> (4.8-rc4) and noticed that this patch breaks SMP support. Sorry for
>> not noticing the issue earlier.
>
> Sorry for the breakage :-(
>
>>
>> This function is called on the running CPU, whereas some code (e.g.
>> init_control_block in drivers/xen/events/events_fifo.c) is executed
>> on the boot CPU while preparing the new CPU.
>>
>> So xen_vcpu_nr(cpu) will always return 0 in this case and
>> init_control_block will fail to execute.
>>
>
> I see,
>
> The CPU_UP_PREPARE event happens before xen_starting_cpu() is called.
>
>
>> I am not sure how to fix this. I guess we could set up
>> per_cpu(xen_vcpu_id, *) in xen_guest_init(). Any opinions?
>
> As we're not doing kexec on ARM, we can fix the immediate issue. I
> don't know much about ARM and unfortunately I don't have a setup to
> test on, but it seems there is no early_per_cpu* infrastructure for
> ARM, so we may fix it with the following:
>
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index 3d2cef6..f193414 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -170,9 +170,6 @@ static int xen_starting_cpu(unsigned int cpu)
>         pr_info("Xen: initializing cpu%d\n", cpu);
>         vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>
> -       /* Direct vCPU id mapping for ARM guests. */
> -       per_cpu(xen_vcpu_id, cpu) = cpu;
> -
>         info.mfn = virt_to_gfn(vcpup);
>         info.offset = xen_offset_in_page(vcpup);
>
> @@ -330,6 +327,7 @@ static int __init xen_guest_init(void)
>  {
>         struct xen_add_to_physmap xatp;
>         struct shared_info *shared_info_page = NULL;
> +       int cpu;
>
>         if (!xen_domain())
>                 return 0;
> @@ -380,7 +378,8 @@ static int __init xen_guest_init(void)
>                 return -ENOMEM;
>
>         /* Direct vCPU id mapping for ARM guests. */
> -       per_cpu(xen_vcpu_id, 0) = 0;
> +       for_each_possible_cpu(cpu)
> +               per_cpu(xen_vcpu_id, cpu) = cpu;
>
>         xen_auto_xlat_grant_frames.count = gnttab_max_grant_frames();
>         if (xen_xlate_map_ballooned_pages(&xen_auto_xlat_grant_frames.pfn,
>
> (Not tested; if we can't use for_each_possible_cpu() that early,
> we'll have to check against NR_CPUS instead.)
>
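
For reference, below is a minimal, untested sketch of that fallback, in
case for_each_possible_cpu() turns out to be unusable that early; the
nr_cpu_ids bound is my assumption (it is itself capped at NR_CPUS):

        int cpu;

        /* Untested sketch: set up the direct Linux <-> Xen vCPU id
         * mapping without relying on for_each_possible_cpu(). */
        for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                per_cpu(xen_vcpu_id, cpu) = cpu;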

I have just tested this patch; it works with the latest Linux kernel
and the latest Xen hypervisor on my ARM platform:

[    0.299927] xen:events: Using FIFO-based ABI
[    0.304259] Xen: initializing cpu0
[    0.336402] EFI services will not be available.
[    0.388985] Detected PIPT I-cache on CPU1
[    0.389024] Xen: initializing cpu1
[    0.389036] CPU1: Booted secondary processor [411fd072]
[    0.421064] Detected PIPT I-cache on CPU2
[    0.421105] Xen: initializing cpu2
[    0.421119] CPU2: Booted secondary processor [411fd072]
[    0.453143] Detected PIPT I-cache on CPU3
[    0.453178] Xen: initializing cpu3
[    0.453190] CPU3: Booted secondary processor [411fd072]
[    0.485226] Detected PIPT I-cache on CPU4
[    0.485265] Xen: initializing cpu4
[    0.485278] CPU4: Booted secondary processor [411fd072]
[    0.517303] Detected PIPT I-cache on CPU5
[    0.517339] Xen: initializing cpu5
[    0.517351] CPU5: Booted secondary processor [411fd072]
[    0.549389] Detected PIPT I-cache on CPU6
[    0.549428] Xen: initializing cpu6
[    0.549442] CPU6: Booted secondary processor [411fd072]
[    0.581470] Detected PIPT I-cache on CPU7
[    0.581506] Xen: initializing cpu7
[    0.581518] CPU7: Booted secondary processor [411fd072]
[    0.581570] Brought up 8 CPUs
[    0.674360] SMP: Total of 8 processors activated.
[    0.679138] CPU features: detected feature: 32-bit EL0 Support
[    0.685040] CPU: All CPU(s) started at EL1

> But unfortunately we'll have to get back to this in the future. It
> turns out we need to know Xen's idea of vCPU id _before_ this vCPU
> starts executing code. On x86 we used ACPI_ID from MADT. Is there
> anything like it on ARM?
>
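
For what it's worth, one hypothetical direction (my sketch only, not an
interface Xen provides today) would be to key the mapping on the
hardware id that firmware already describes, i.e. the MPIDR returned by
cpu_logical_map(); xen_vcpu_id_from_mpidr() below is an invented
placeholder for whatever lookup the hypervisor would have to expose:

        /* Hypothetical only: if Xen published its vCPU ids keyed by
         * MPIDR, the mapping could be filled in at CPU_UP_PREPARE time,
         * before the secondary vCPU runs any code. */
        per_cpu(xen_vcpu_id, cpu) =
                xen_vcpu_id_from_mpidr(cpu_logical_map(cpu));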
>>
>> [...]
>>
>>> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
>>> index 0f87db2..c833912 100644
>>> --- a/arch/x86/xen/enlighten.c
>>> +++ b/arch/x86/xen/enlighten.c
>>> @@ -1795,6 +1806,12 @@ static void __init init_hvm_pv_info(void)
>>>
>>>          xen_setup_features();
>>>
>>> +        cpuid(base + 4, &eax, &ebx, &ecx, &edx);
>>> +        if (eax & XEN_HVM_CPUID_VCPU_ID_PRESENT)
>>> +                this_cpu_write(xen_vcpu_id, ebx);
>>> +        else
>>> +                this_cpu_write(xen_vcpu_id, smp_processor_id());
>>> +
>>>          pv_info.name = "Xen HVM";
>>>
>>>          xen_domain_type = XEN_HVM_DOMAIN;
>>> @@ -1806,6 +1823,10 @@ static int xen_hvm_cpu_notify(struct notifier_block *self, unsigned long action,
>>>          int cpu = (long)hcpu;
>>>          switch (action) {
>>>          case CPU_UP_PREPARE:
>>> +                if (cpu_acpi_id(cpu) != U32_MAX)
>>> +                        per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);
>>> +                else
>>> +                        per_cpu(xen_vcpu_id, cpu) = cpu;
>>
>> I have not tried it myself, but looking at the code, the notifiers
>> xen_hvm_cpu_notifier and evtchn_fifo_cpu_notifier have the same
>> priority. So what prevents the code above from being executed after
>> the event channel callback?
>
> Nothing, actually, but xen_hvm_guest_init() happens early and we
> always get these notifiers in the right order (and, judging from
> Boris' reply, we're going to make it explicit in the future).
>
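
As a small aside, if the ordering does get made explicit later, one way
(a sketch on my side only; the priority value is an assumption, not
something from this patch) would be to give the Xen notifier a higher
.priority than the FIFO event channel one, so it always runs first on
CPU_UP_PREPARE:

        static struct notifier_block xen_hvm_cpu_notifier = {
                .notifier_call  = xen_hvm_cpu_notify,
                /* Assumed value: above evtchn_fifo_cpu_notifier's default 0. */
                .priority       = 1,
        };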
>>
>>>                  xen_vcpu_setup(cpu);
>>>                  if (xen_have_vector_callback) {
>>>                          if (xen_feature(XENFEAT_hvm_safe_pvclock))
>>> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
>>> index 86abe07..648ce814 100644
>>> --- a/include/xen/xen-ops.h
>>> +++ b/include/xen/xen-ops.h
>>> @@ -9,6 +9,12 @@
>>>
>>>  DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
>>>
>>> +DECLARE_PER_CPU(uint32_t, xen_vcpu_id);
>>> +static inline int xen_vcpu_nr(int cpu)
>>> +{
>>> +        return per_cpu(xen_vcpu_id, cpu);
>>> +}
>>> +
>>>  void xen_arch_pre_suspend(void);
>>>  void xen_arch_post_suspend(int suspend_cancelled);
>>
>> Regards,
>


--
Regards,
Wei Chen

