Re: [XEN PATCH 1/4] x86/mce: address MISRA C:2012 Rule 5.3
On Wed, 2 Aug 2023, Nicola Vetrini wrote:
> Suitable mechanical renames are made to avoid shadowing, thus
> addressing violations of MISRA C:2012 Rule 5.3:
> "An identifier declared in an inner scope shall not hide an
> identifier declared in an outer scope"
>
> Signed-off-by: Nicola Vetrini <nicola.vetrini@xxxxxxxxxxx>
> ---
> xen/arch/x86/cpu/mcheck/barrier.c | 8 ++++----
> xen/arch/x86/cpu/mcheck/barrier.h | 8 ++++----
> 2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
> index a7e5b19a44..51a1d37a76 100644
> --- a/xen/arch/x86/cpu/mcheck/barrier.c
> +++ b/xen/arch/x86/cpu/mcheck/barrier.c
> @@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier *bar)
> atomic_dec(&bar->val);
> }
>
> -void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
> +void mce_barrier_enter(struct mce_softirq_barrier *bar, bool do_wait)
"wait" clashes with xen/common/sched/core.c:wait, which is globally
exported, right?
I think it would be good to name the clashing symbol in the commit
message of patches like this one.
> {
> int gen;
>
> - if ( !wait )
> + if ( !do_wait )
> return;
> atomic_inc(&bar->ingen);
> gen = atomic_read(&bar->outgen);
> @@ -34,11 +34,11 @@ void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
> }
> }
>
> -void mce_barrier_exit(struct mce_softirq_barrier *bar, bool wait)
> +void mce_barrier_exit(struct mce_softirq_barrier *bar, bool do_wait)
> {
> int gen;
>
> - if ( !wait )
> + if ( !do_wait )
> return;
> atomic_inc(&bar->outgen);
> gen = atomic_read(&bar->ingen);
> diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
> index c4d52b6192..5cd1b4e4bf 100644
> --- a/xen/arch/x86/cpu/mcheck/barrier.h
> +++ b/xen/arch/x86/cpu/mcheck/barrier.h
> @@ -32,14 +32,14 @@ void mce_barrier_init(struct mce_softirq_barrier *);
> void mce_barrier_dec(struct mce_softirq_barrier *);
>
> /*
> - * If @wait is false, mce_barrier_enter/exit() will return immediately
> + * If @do_wait is false, mce_barrier_enter/exit() will return immediately
> * without touching the barrier. It's used when handling a
> * non-broadcasting MCE (e.g. MCE on some old Intel CPU, MCE on AMD
> * CPU and LMCE on Intel Skylake-server CPU) which is received on only
> * one CPU and thus does not invoke mce_barrier_enter/exit() calls on
> * all CPUs.
> *
> - * If @wait is true, mce_barrier_enter/exit() will handle the given
> + * If @do_wait is true, mce_barrier_enter/exit() will handle the given
> * barrier as below.
> *
> * Increment the generation number and the value. The generation number
> @@ -53,8 +53,8 @@ void mce_barrier_dec(struct mce_softirq_barrier *);
> * These barrier functions should always be paired, so that the
> * counter value will reach 0 again after all CPUs have exited.
> */
> -void mce_barrier_enter(struct mce_softirq_barrier *, bool wait);
> -void mce_barrier_exit(struct mce_softirq_barrier *, bool wait);
> +void mce_barrier_enter(struct mce_softirq_barrier *, bool do_wait);
> +void mce_barrier_exit(struct mce_softirq_barrier *, bool do_wait);
While at it, you might as well name the first parameter "bar" in these
declarations too?
> void mce_barrier(struct mce_softirq_barrier *);
>
> --
> 2.34.1
>