
Re: [Xen-devel] [PATCH v7 1/5] xen/arm: Add support for GIC v3



On Fri, 2014-07-11 at 18:21 +0530, vijay.kilari@xxxxxxxxx wrote:
> +static inline void gicv3_restore_lrs(int nr_lrs, const struct vcpu *v)

Other similar functions you have here use gicv3.nr_foo directly.
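
Something along these lines would be more consistent (just a sketch, using
the gicv3_info.nr_lrs field referenced in gicv3_save_state() below; adjust
to whichever global the rest of the file actually uses, and the remaining
LR cases are elided):

    static inline void gicv3_restore_lrs(const struct vcpu *v)
    {
        /* Read the LR count from the global rather than taking a parameter. */
        switch ( gicv3_info.nr_lrs )
        {
        /* ... higher-numbered LRs elided, falling through as before ... */
        case 1:
            WRITE_SYSREG(v->arch.gic.v3.lr[0], ICH_LR0_EL2);
            break;
        }
    }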

> +    case 1:
> +        WRITE_SYSREG(v->arch.gic.v3.lr[0], ICH_LR0_EL2);
> +         break;

Nit: whitespace.

> +static void gicv3_save_state(struct vcpu *v)
> +{
> +
> +    /* No need for spinlocks here because interrupts are disabled around
> +     * this call and it only accesses struct vcpu fields that cannot be
> +     * accessed simultaneously by another pCPU.
> +     *
> +     * Make sure all stores to the GIC via the memory mapped interface
> +     * are now visible to the system register interface
> +     */
> +    dsb(sy);
> +    gicv3_save_lrs(gicv3_info.nr_lrs, v);
> +    save_aprn_regs(&v->arch.gic);
> +    v->arch.gic.v3.vmcr = READ_SYSREG32(ICH_VMCR_EL2);
> +    v->arch.gic.v3.sre_el1 = READ_SYSREG32(ICC_SRE_EL1);
> +}
> +
> +static void gicv3_restore_state(const struct vcpu *v)
> +{
> +    WRITE_SYSREG32(v->arch.gic.v3.sre_el1, ICC_SRE_EL1);
> +    WRITE_SYSREG32(v->arch.gic.v3.vmcr, ICH_VMCR_EL2);
> +    restore_aprn_regs(&v->arch.gic);
> +    gicv3_restore_lrs(gicv3_info.nr_lrs, v);
> +
> +    /*
> +     * Make sure all stores are visible the GIC
> +     */
> +    dsb(sy);
> +}

A reference to Section 4.8.17 of the GICv3 spec alongside these
functions would be very useful.
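Even a one-line pointer would help future readers, e.g. (wording is only a
suggestion):

    /* Save/restore sequence per GICv3 spec, section 4.8.17. */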


> +static void __cpuinit gicv3_hyp_disable(void)
> +{
> +    uint32_t vtr;
> +
> +    vtr = READ_SYSREG32(ICH_HCR_EL2);
> +    vtr &= ~0x1;

Please can you add a define for this value, or reuse GICH_HCR_EN if
ICH_HCR_EL2 is compatible with GICv2 GICH_HCR.

Also, calling the temporary "vtr" is confusing, since ICH_VTR_EL2 is a
different register.
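
For example (sketch only: ICH_HCR_EN is an illustrative name, GICH_HCR_EN
could be reused instead if the layouts really do match, and the write-back
plus barrier are assumed from the usual enable/disable pattern rather than
taken from your patch):

    /* In gic_v3_defs.h -- name is illustrative. */
    #define ICH_HCR_EN           (1 << 0)

    static void __cpuinit gicv3_hyp_disable(void)
    {
        uint32_t hcr;

        hcr = READ_SYSREG32(ICH_HCR_EL2);
        hcr &= ~ICH_HCR_EN;
        WRITE_SYSREG32(hcr, ICH_HCR_EL2);
        isb();
    }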

> +static u16 gicv3_compute_target_list(int *base_cpu, const struct cpumask *mask,
> +                                     uint64_t cluster_id)
> +{
> +    int cpu = *base_cpu;
> +    uint64_t mpidr = cpu_logical_map(cpu);
> +    u16 tlist = 0;
> +
> +    while ( cpu < nr_cpu_ids )
> +    {
> +        /*
> +         * If we ever get a cluster of more than 16 CPUs, just
> +         * scream and skip that CPU.
> +         */
> +        if ( (mpidr & 0xff) >= 16 )

MPIDR_AFF0_MASK if that's what this is, and at least once more in this
function.
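i.e., assuming an AFF0 mask define of 0xff is available (or added):

    if ( (mpidr & MPIDR_AFF0_MASK) >= 16 )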

> +        {
> +            dprintk(XENLOG_WARNING, "GICv3:Cluster with more than 16's cpus\n");
> +            goto out;

Please validate this and complain in gicv3_init not every time we send
an SGI.
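
Something like the following at init time would do (sketch only: the helper
name and MPIDR_AFF0_MASK are illustrative, warn vs. panic is your call, and
it assumes the logical CPU map is already populated when gicv3_init() runs):

    static void __init gicv3_check_cluster_sizes(void)
    {
        int cpu;

        for_each_present_cpu ( cpu )
        {
            /* Aff0 values above 15 cannot be encoded in an SGI target list. */
            if ( (cpu_logical_map(cpu) & MPIDR_AFF0_MASK) >= 16 )
                printk(XENLOG_WARNING
                       "GICv3: CPU%d is in a cluster with more than 16 CPUs\n",
                       cpu);
        }
    }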

> +static void gicv3_send_sgi(enum gic_sgi sgi, enum gic_sgi_mode mode,
> +                           const cpumask_t *cpumask)
> +{
> +    int cpu = 0;
> +    uint64_t val;
> +
> +    for_each_cpu(cpu, cpumask)
> +    {
> +        /* Mask lower 8 bits. It represent cpu in affinity level 0 */
> +        uint64_t cluster_id = cpu_logical_map(cpu) & ~0xffUL;

Use MPIDR_AFF0_MASK not 0xff.
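i.e. something like the below. Note the cast: if MPIDR_AFF0_MASK is a
32-bit constant, a plain ~MPIDR_AFF0_MASK would also clear the upper
affinity bits of the 64-bit MPIDR.

    uint64_t cluster_id = cpu_logical_map(cpu) & ~(uint64_t)MPIDR_AFF0_MASK;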

> diff --git a/xen/include/asm-arm/gic_v3_defs.h b/xen/include/asm-arm/gic_v3_defs.h
> new file mode 100644
> index 0000000..1c32e0c
> --- /dev/null
> +++ b/xen/include/asm-arm/gic_v3_defs.h

> +#ifndef __ASM_ARM_GIC_V3_DEFS_H__
> +#define __ASM_ARM_GIC_V3_DEFS_H__
> +
> +/*
> + * Additional registers defined in GIC v3.
> + * Common GICD registers are defined in gic.h
> + */
> +
> +#define GICD_STATUSR                 (0x010)
> [...]
> +#define GICV3_GICD_PIDR0             (0x92)

What is the distinction between the defines with GIC[DR]_ prefixes and
those with GICV3_GIC[DR]_ ones?

> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index e81b80e..7aef378 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -67,6 +67,8 @@ do {                                                           \
>  
>  #define reserve_bootmem(_p,_l) ((void)0)
>  
> +#define SZ_64K  0x00010000

This was already discussed in v6.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

