[XenPPC] multicast function invocations

To summarize the situation, I found two problems.

1. Core Xen has a bug (I believe): the cpu mask used for the read_clocks
   rendezvous is not marked volatile, so the compiler hoists the test out
   of the wait loop and generates an infinite loop in read_clocks (see
   the first sketch after this list).

   I will send some patches upstream to resolve this issue.

2. Xen/PPC has a problem in that its IPI callbacks (remote function
   invocations) do not actually happen in parallel, which breaks the
   design of read_clocks (see the second sketch after this list).  Our
   IPI callbacks are serialized by the design we copied from Xen/x86,
   which is to acquire a per-vector lock very early in the external
   exception (EE) handling path (see do_external).

   My real question is: will Xen/PPC ever run its IPI remote function
   callbacks with EE enabled?  If the plan is to keep things the way
   they are now, then we should remove the per-vector lock entirely.
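
To make problem 1 concrete, here is a minimal standalone sketch (plain
pthreads C, not Xen code, and all names are made up).  Build it with
"cc -O2 -pthread"; if you remove the "volatile", the compiler is free to
load the flag once, hoist the test out of the loop, and spin forever even
after the slave has checked in, which is exactly what happens to the
cpus_empty() loop in read_clocks:

#include <pthread.h>
#include <stdio.h>

static volatile int pending = 1;     /* stands in for read_clocks_cpumask */

static void *slave(void *unused)
{
    pending = 0;                     /* stands in for cpu_clear(cpu, mask) */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, slave, NULL);
    while (pending)                  /* master's rendezvous loop */
        ;                            /* busy-wait, like cpu_relax() */
    pthread_join(t, NULL);
    printf("slave checked in\n");
    return 0;
}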
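
To illustrate problem 2, here is a rough pthread analogue (again not Xen
code; vector_lock stands in for the per-vector desc->lock) showing how a
lock held across the callback serializes the timestamp sampling that
read_clocks expects to happen simultaneously on all CPUs:

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NSLAVES 4

static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;
static long long stamp_ns[NSLAVES];

static void *slave(void *arg)
{
    long cpu = (long)arg;
    struct timespec ts;

    pthread_mutex_lock(&vector_lock);    /* like desc->lock in do_external */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    stamp_ns[cpu] = ts.tv_sec * 1000000000LL + ts.tv_nsec;
    usleep(1000);                        /* the rest of the "callback" */
    pthread_mutex_unlock(&vector_lock);
    return NULL;
}

int main(void)
{
    pthread_t t[NSLAVES];
    long i;

    for (i = 0; i < NSLAVES; i++)
        pthread_create(&t[i], NULL, slave, (void *)i);
    for (i = 0; i < NSLAVES; i++)
        pthread_join(t[i], NULL);

    /* With the lock held across the callback the stamps land roughly a
     * millisecond apart; drop the lock/unlock pair and they cluster
     * within a few microseconds. */
    for (i = 0; i < NSLAVES; i++)
        printf("cpu %ld: %+lld ns\n", i, stamp_ns[i] - stamp_ns[0]);
    return 0;
}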

The following patch implements the two conclusions above and allows
'C-aC-aC-at' (the 't' debug key, which triggers read_clocks) to work
properly.  Comments?

---

 arch/powerpc/external.c |    2 --
 common/keyhandler.c     |    2 +-
 include/xen/cpumask.h   |    2 +-
 3 files changed, 2 insertions(+), 4 deletions(-)

diff -r 305751a5281e xen/arch/powerpc/external.c
--- a/xen/arch/powerpc/external.c       Wed Nov 22 16:29:25 2006 -0500
+++ b/xen/arch/powerpc/external.c       Tue Nov 28 03:07:10 2006 -0500
@@ -86,11 +86,9 @@ void do_external(struct cpu_user_regs *r
         /* do_IRQ is fundamentally broken for reliable IPI delivery.  */
         irq_desc_t *desc = &irq_desc[vec];
         regs->entry_vector = vec;
-        spin_lock(&desc->lock);
         desc->handler->ack(vec);
         desc->action->handler(vector_to_irq(vec), desc->action->dev_id, regs);
         desc->handler->end(vec);
-        spin_unlock(&desc->lock);
     } else if (vec != -1) {
         DBG("EE:0x%lx isrc: %d\n", regs->msr, vec);
         regs->entry_vector = vec;
diff -r 305751a5281e xen/common/keyhandler.c
--- a/xen/common/keyhandler.c   Wed Nov 22 16:29:25 2006 -0500
+++ b/xen/common/keyhandler.c   Tue Nov 28 03:06:24 2006 -0500
@@ -193,7 +193,7 @@ static void dump_domains(unsigned char k
     read_unlock(&domlist_lock);
 }
 
-static cpumask_t read_clocks_cpumask = CPU_MASK_NONE;
+static cpumask_t volatile read_clocks_cpumask = CPU_MASK_NONE;
 static s_time_t read_clocks_time[NR_CPUS];
 
 static void read_clocks_slave(void *unused)
diff -r 305751a5281e xen/include/xen/cpumask.h
--- a/xen/include/xen/cpumask.h Wed Nov 22 16:29:25 2006 -0500
+++ b/xen/include/xen/cpumask.h Tue Nov 28 03:06:24 2006 -0500
@@ -177,7 +177,7 @@ static inline int __cpus_subset(const cp
 }
 
 #define cpus_empty(src) __cpus_empty(&(src), NR_CPUS)
-static inline int __cpus_empty(const cpumask_t *srcp, int nbits)
+static inline int __cpus_empty(const cpumask_t volatile *srcp, int nbits)
 {
        return bitmap_empty(srcp->bits, nbits);
 }

_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel