WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

Re: [Xen-devel] [PATCH] Only include online cpus in cpu_mask_to_apicid_flat

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Only include online cpus in cpu_mask_to_apicid_flat
From: "Yang, Sheng" <sheng.yang@xxxxxxxxx>
Date: Mon, 6 Sep 2010 13:56:00 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
Delivery-date: Sun, 05 Sep 2010 22:56:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <201009011802.14274.sheng.yang@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Intel Opensource Technology Center
References: <C8A29C14.21704%keir.fraser@xxxxxxxxxxxxx> <4C7E35AC0200007800013A65@xxxxxxxxxxxxxxxxxx> <201009011802.14274.sheng.yang@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.13.2 (Linux/2.6.32-24-generic; KDE/4.4.2; x86_64; ; )
On Wednesday 01 September 2010 18:02:13 Yang, Sheng wrote:
> On Wednesday 01 September 2010 17:14:52 Jan Beulich wrote:
> > >>> On 01.09.10 at 05:39, "Yang, Sheng" <sheng.yang@xxxxxxxxx> wrote:
> > > Yes, here is the patch with modification of other variants.
> > 
> > If indeed an adjustment like this is needed, then this (and other similar instances)
> > 
> > >@@ -71,6 +72,11 @@
> > > unsigned int cpu_mask_to_apicid_phys(cpumask_t cpumask)
> > > {
> > >+  int cpu;
> > >   /* As we are using single CPU as destination, pick only one CPU here */
> > >-  return cpu_physical_id(first_cpu(cpumask));
> > >+  for_each_cpu_mask(cpu, cpumask) {
> > >+          if (cpu_online(cpu))
> > >+                  break;
> > >+  }
> > >+  return cpu_physical_id(cpu);
> > > }
> > 
> > is both insufficient: You need to handle the case where you don't
> > find any online CPU in the mask (at least by adding a respective
> > BUG_ON()).
> 
> Yes, BUG_ON() is needed.
> 
> > But I tend to agree with Keir that this shouldn't be done here -
> > these functions are simple accessors, which shouldn't enforce
> > any policy. Higher level code, if it doesn't already, should be
> > adjusted to never allow offline CPUs to slip through.
> 
> Well, I think it's acceptable to add a wrap function for it. So how about
> this one?

Keir & Jan, what do you think about this patchset?

If you still think we should never allow offline CPUs in the cpu_mask, then at 
least we need one patch to fix the serial port IRQ's cpu_mask, which was 
CPU_MASK_ALL (this fix would also result in the serial interrupt being 
delivered to CPU0). 

--
regards
Yang, Sheng

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1015,7 +1015,7 @@
         irq_vector[irq] = FIRST_HIPRIORITY_VECTOR + seridx + 1;
         per_cpu(vector_irq, cpu)[FIRST_HIPRIORITY_VECTOR + seridx + 1] = irq;
         irq_cfg[irq].vector = FIRST_HIPRIORITY_VECTOR + seridx + 1;
-        irq_cfg[irq].cpu_mask = (cpumask_t)CPU_MASK_ALL;
+        irq_cfg[irq].cpu_mask = cpu_online_map;
     }

     /* IPI for cleanuping vectors after irq move */

Attachment: dest_fix.patch
Description: Text Data

Attachment: serial_fix.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel