
Re: [Xen-devel] Question about apic ipi interface



On Thu, May 23, 2013 at 11:24:56AM +0200, Stefan Bader wrote:
> On 22.05.2013 21:40, Konrad Rzeszutek Wilk wrote:
> > On Wed, May 08, 2013 at 06:26:40PM +0200, Stefan Bader wrote:
> >> On 23.04.2013 12:07, Stefan Bader wrote:
> >>> I was looking at some older patch and there is one thing I do not 
> >>> understand.
> >>>
> >>> commit f447d56d36af18c5104ff29dcb1327c0c0ac3634
> >>>     xen: implement apic ipi interface
> >>>
> >>> Specifically there the implementation of xen_send_IPI_mask_allbutself().
> >>>
> >>> void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
> >>>                                 int vector)
> >>> {
> >>>         unsigned cpu;
> >>>         unsigned int this_cpu = smp_processor_id();
> >>>
> >>>         if (!(num_online_cpus() > 1))
> >>>                 return;
> >>>
> >>>         for_each_cpu_and(cpu, mask, cpu_online_mask) {
> >>>                 if (this_cpu == cpu)
> >>>                         continue;
> >>>
> >>>                 xen_smp_send_call_function_single_ipi(cpu);
> >>>         }
> >>> }
> >>>
> >>> Why is this using xen_smp_send_call_function_single_ipi()? That drops the
> >>> supplied vector and always uses XEN_CALL_FUNCTION_SINGLE_VECTOR. In
> >>> contrast, xen_send_IPI_all() and xen_send_IPI_self() keep the (mapped)
> >>> vector.
> >>>
> >>> I am mildly wondering whether call function would need special casing
> >>> (just because xen_smp_send_call_function_ipi() is special), but I don't
> >>> have the big picture there.
> >>>
> >>
> >> This never got really answered, so let's try this: does the following
> >> patch seem to make sense? I know it has not caused any obvious
> >> regressions, but at least this way the code would be more consistent
> >> with the rest. It has not blown up on a normal boot either.
> >> Ben, is there a simple way for me to trigger the problem you had?
> >>
> >> -Stefan
> >>
> >>
> >> From e13703426f367c618f2984d376289b197a8c0402 Mon Sep 17 00:00:00 2001
> >> From: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
> >> Date: Wed, 8 May 2013 16:37:35 +0200
> >> Subject: [PATCH] xen: Clean up apic ipi interface
> >>
> >> Commit f447d56d36af18c5104ff29dcb1327c0c0ac3634 introduced the
> >> implementation of the PV apic ipi interface. But there were some
> >> oddities (none of which seem to cause any real issue, but they
> >> should be cleaned up anyway):
> >>  - xen_send_IPI_mask_allbutself (and therefore xen_send_IPI_allbutself)
> >>    ignores the passed-in vector and only uses the CALL_FUNCTION_SINGLE
> >>    vector, while xen_send_IPI_all and xen_send_IPI_mask use the vector.
> >>  - physflat_send_IPI_allbutself is declared unnecessarily; it is never
> >>    used.
> >>
> >> This patch tries to clean up those things.
> >>
> >> Signed-off-by: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
> > 
> > Looks very similar to 
> > 
> > https://patchwork.kernel.org/patch/2414311/
> > 
> > So two people pointing out the same thing. 
> 
> Yeah, from this discussion and further looking into it I am relatively
> sure this has no visible effect either way, because there currently is
> no user of the "odd" implementations.

<nods> OK, let me commit yours and also add a comment about Zhenzhong's.

> 
> >> ---
> >>  arch/x86/xen/smp.c |   10 ++++------
> >>  arch/x86/xen/smp.h |    1 -
> >>  2 files changed, 4 insertions(+), 7 deletions(-)
> >> diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
> >> index 8ff3799..fb44426 100644
> >> --- a/arch/x86/xen/smp.c
> >> +++ b/arch/x86/xen/smp.c
> >> @@ -576,24 +576,22 @@ void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
> >>  {
> >>         unsigned cpu;
> >>         unsigned int this_cpu = smp_processor_id();
> >> +       int xen_vector = xen_map_vector(vector);
> >>
> >> -       if (!(num_online_cpus() > 1))
> >> +       if (!(num_online_cpus() > 1) || (xen_vector < 0))
> >>                 return;
> >>
> >>         for_each_cpu_and(cpu, mask, cpu_online_mask) {
> >>                 if (this_cpu == cpu)
> >>                         continue;
> >>
> >> -               xen_smp_send_call_function_single_ipi(cpu);
> >> +               xen_send_IPI_one(cpu, xen_vector);
> >>         }
> >>  }
> >>
> >>  void xen_send_IPI_allbutself(int vector)
> >>  {
> >> -       int xen_vector = xen_map_vector(vector);
> >> -
> >> -       if (xen_vector >= 0)
> >> -               xen_send_IPI_mask_allbutself(cpu_online_mask, xen_vector);
> >> +       xen_send_IPI_mask_allbutself(cpu_online_mask, vector);
> >>  }
> >>
> >>  static irqreturn_t xen_call_function_interrupt(int irq, void *dev_id)
> >> diff --git a/arch/x86/xen/smp.h b/arch/x86/xen/smp.h
> >> index 8981a76..c7c2d89 100644
> >> --- a/arch/x86/xen/smp.h
> >> +++ b/arch/x86/xen/smp.h
> >> @@ -5,7 +5,6 @@ extern void xen_send_IPI_mask(const struct cpumask *mask,
> >>  extern void xen_send_IPI_mask_allbutself(const struct cpumask *mask,
> >>                                 int vector);
> >>  extern void xen_send_IPI_allbutself(int vector);
> >> -extern void physflat_send_IPI_allbutself(int vector);
> >>  extern void xen_send_IPI_all(int vector);
> >>  extern void xen_send_IPI_self(int vector);
> >>
> >> -- 
> >> 1.7.9.5
> >>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
