
Re: [Xen-devel] [PATCH] KVM, XEN: Fix potential race in pvclock code



On 17/01/2014 10:41, Jan Beulich wrote:
> One half of this doesn't apply here, due to the explicit barriers
> that are there. The half about converting local variable accesses
> back to memory reads (i.e. eliding the local variable), however,
> is only a theoretical issue afaict: If a compiler really did this, I
> think there'd be far more places where this would hurt.

Perhaps.  But seqlocks, for example, get this right.

> I don't think so - this would only be an issue if the conditions used
> | instead of ||. || implies a sequence point between evaluating the
> left and right sides, and the standard says: "The presence of a
> sequence point between the evaluation of expressions A and B
> implies that every value computation and side effect associated
> with A is sequenced before every value computation and side
> effect associated with B."

I suspect this is widely ignored by compilers if A is not 
side-effecting.  The above wording would imply that

     x = a || b    =>    x = (a | b) != 0

(where "a" and "b" are non-volatile globals) would be an invalid 
change.  The compiler would have to do:

     temp = a;
     barrier();
     x = (temp | b) != 0

and I'm pretty sure that no compiler does it this way unless C11/C++11
atomics are involved (at which point accesses become side-effecting).

The code has changed and pvclock_get_time_values has moved to
__pvclock_read_cycles, but I think the problem remains.  Another approach
to fixing this (and the one I prefer) is to do the same thing as seqlocks:
clear the low bit in the return value of __pvclock_read_cycles,
and drop the || check altogether.  Untested patch below.

Paolo

diff --git a/arch/x86/include/asm/pvclock.h b/arch/x86/include/asm/pvclock.h
index d6b078e9fa28..5aec80adaf54 100644
--- a/arch/x86/include/asm/pvclock.h
+++ b/arch/x86/include/asm/pvclock.h
@@ -75,7 +75,7 @@ unsigned __pvclock_read_cycles(const struct pvclock_vcpu_time_info *src,
        cycle_t ret, offset;
        u8 ret_flags;
 
-       version = src->version;
+       version = src->version & ~1;
        /* Note: emulated platforms which do not advertise SSE2 support
         * result in kvmclock not using the necessary RDTSC barriers.
         * Without barriers, it is possible that RDTSC instruction reads from
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 2f355d229a58..a5052a87d55e 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -66,7 +66,7 @@ u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
 
        do {
                version = __pvclock_read_cycles(src, &ret, &flags);
-       } while ((src->version & 1) || version != src->version);
+       } while (version != src->version);
 
        return flags & valid_flags;
 }
@@ -80,7 +80,7 @@ cycle_t pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
 
        do {
                version = __pvclock_read_cycles(src, &ret, &flags);
-       } while ((src->version & 1) || version != src->version);
+       } while (version != src->version);
 
        if (unlikely((flags & PVCLOCK_GUEST_STOPPED) != 0)) {
                src->flags &= ~PVCLOCK_GUEST_STOPPED;
diff --git a/arch/x86/vdso/vclock_gettime.c b/arch/x86/vdso/vclock_gettime.c
index eb5d7a56f8d4..f09b09bcb515 100644
--- a/arch/x86/vdso/vclock_gettime.c
+++ b/arch/x86/vdso/vclock_gettime.c
@@ -117,7 +117,6 @@ static notrace cycle_t vread_pvclock(int *mode)
                 */
                cpu1 = __getcpu() & VGETCPU_CPU_MASK;
        } while (unlikely(cpu != cpu1 ||
-                         (pvti->pvti.version & 1) ||
                          pvti->pvti.version != version));
 
        if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
