
Re: [Xen-devel] [PATCH v3 2/2] xen/arm : emulation of arm's PSCI v0.2 standard



On Thu, 2014-07-03 at 17:52 +0530, Parth Dixit wrote:
>  static void do_trap_psci(struct cpu_user_regs *regs)
>  {
>      arm_psci_fn_t psci_call = NULL;
>  
> -    if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
> +    if ( PSCI_OP_REG(regs) < PSCI_migrate )
>      {
> -        domain_crash_synchronous();
> -        return;
> +        if ( PSCI_OP_REG(regs) >= ARRAY_SIZE(arm_psci_table) )
> +        {
> +            domain_crash_synchronous();
> +            return;
> +        }
> +        psci_call = arm_psci_table[PSCI_OP_REG(regs)].fn;
> +    }
> +    else
> +    {
> +        if ( ( PSCI_OP_REG(regs) & PSCI_FN_MASK ) >= ARRAY_SIZE(arm_psci_0_2_table) )

I think you also need to check that "PSCI_OP_REG(regs) & ~PSCI_FN_MASK"
is either the 32-bit or 64-bit base. Otherwise I could call 0xdeadfe01
and it would work.
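
Something along these lines ought to do it (rough sketch only;
PSCI_0_2_FN_BASE and PSCI_0_2_FN64_BASE are just whatever names end up
being given to the 0x84000000 and 0xc4000000 bases):

        register_t base = PSCI_OP_REG(regs) & ~PSCI_FN_MASK;

        /* Only accept ids in the 32-bit or 64-bit PSCI 0.2 ranges */
        if ( base != PSCI_0_2_FN_BASE && base != PSCI_0_2_FN64_BASE )
        {
            domain_crash_synchronous();
            return;
        }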

Do you not also need to check that the guest is of the appropriate type?
Is an aarch64 guest allowed to call the aarch32 entry points? The spec
doesn't say so explicitly AFAICT.

If it is allowed then I think you need to truncate the arguments to
32 bits when an aarch64 guest calls the 32-bit entry points.
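
Something like this would do, I think (a sketch; "fn64" is just a
stand-in for however you end up recording which entry point was used):

        if ( !fn64 )  /* call came in via a 32-bit function id */
        {
            /* The upper 32 bits are not meaningful for the 32-bit calls */
            target_cpu  = (uint32_t)target_cpu;
            entry_point = (uint32_t)entry_point;
            context_id  = (uint32_t)context_id;
        }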

Hrm, checking through the spec I see now that only some of the functions
have both a 32- and 64-bit function id. VERSION, CPU_OFF,
MIGRATE_INFO_TYPE, SYSTEM_OFF and SYSTEM_RESET only have a single
function id (I suppose because they do not take any arguments which are
mode specific).

I'm afraid that AFAICT you need to handle this distinction, which I
expect makes it rather hard to share the same lookup table between 32-
and 64-bit. A switch(PSCI_OP_REG(regs)) is looking increasingly like the
correct approach to me.
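
Something along these lines (the PSCI_0_2_FN_* / PSCI_0_2_FN64_* constant
names are only illustrative, and PSCI_ARG() just means however you pull
the Nth argument out of regs):

        switch ( PSCI_OP_REG(regs) )
        {
        case PSCI_0_2_FN_PSCI_VERSION:      /* single id, no 64-bit variant */
            PSCI_RESULT_REG(regs) = do_psci_0_2_version();
            break;
        case PSCI_0_2_FN_CPU_ON:            /* 32-bit id */
        case PSCI_0_2_FN64_CPU_ON:          /* 64-bit id */
            PSCI_RESULT_REG(regs) = do_psci_0_2_cpu_on(PSCI_ARG(regs, 1),
                                                       PSCI_ARG(regs, 2),
                                                       PSCI_ARG(regs, 3));
            break;
        /* ... and so on for the v0.1 and remaining v0.2 ids ... */
        default:
            domain_crash_synchronous();
            return;
        }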

> +int do_psci_0_2_version(void)
> +{
> +    struct domain *d = current->domain;
> +
> +    return ( d->arch.vpsci_ver = XEN_PSCI_V_0_2 );

Is this assignment really intentional?

I see you use d->arch.vpsci_ver later in the common do_psci_0_2_cpu_on
function, so I think you did mean to do this.

Does the spec say somewhere that after calling PSCI_0_2_VERSION an OS is
forbidden from calling PSCI v0.1 functions?

Likewise does it mandate that an OS which supports v0.2 always calls
VERSION before any other function?

I can't find anything in the spec which says either of those things.
Please do point out if I am wrong.

In any case I think this approach to a common function is wrong. I think
you should implement something like:

        int do_common_cpu_on(int version, register_t target_cpu,
                             register_t entry_point, register_t context_id)

similar to your current do_psci_0_2_cpu_suspend but using version
instead of d->arch.vpsci_ver. Then you have:

        int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
        {
           return do_common_cpu_on(PSCI_0_1_VERSION, vcpuid, entry_point, 0);
        }
        int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
                               register_t context_id)
        {
            return do_common_cpu_on(PSCI_0_2_VERSION, target_cpu,
                                    entry_point, context_id);
        }

IOW the mode of operation is determined by the entry point used, not by
some bit of domain state.
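
and inside the common helper the v0.2-only behaviour keys off that
parameter, roughly:

        int do_common_cpu_on(int version, register_t target_cpu,
                             register_t entry_point, register_t context_id)
        {
            uint32_t vcpuid;

            if ( version == PSCI_0_2_VERSION )
                vcpuid = target_cpu & MPIDR_HWID_MASK;  /* but see AFF3 below */
            else
                vcpuid = target_cpu;

            /* ... rest as in your current do_psci_0_2_cpu_on ... */
        }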

> +int do_psci_0_2_cpu_on(register_t target_cpu, register_t entry_point,
> +                       register_t context_id)
> +{
>      struct vcpu *v;
>      struct domain *d = current->domain;
>      struct vcpu_guest_context *ctxt;
>      int rc;
>      int is_thumb = entry_point & 1;
> +    uint32_t vcpuid ;

Stray " " here.

> +
> +    if( d->arch.vpsci_ver == XEN_PSCI_V_0_2 )
> +        vcpuid = (u32)(target_cpu & MPIDR_HWID_MASK);

Doesn't this mask off AFF3 on 64-bit? I know we don't use AFF3 today in
Xen, but this is setting a pretty subtle trap for the person who does
come to implement something which uses it.
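
i.e. I would expect the full width to be kept rather than truncated to
u32, something like (assuming MPIDR_HWID_MASK really does cover Aff3 on
arm64; if it doesn't then that wants sorting out too):

        register_t mpidr = target_cpu & MPIDR_HWID_MASK;  /* Aff3 preserved */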

> @@ -75,12 +144,65 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
>      return PSCI_SUCCESS;
>  }
>  
> -int do_psci_cpu_off(uint32_t power_state)
> +static const unsigned long target_affinity_mask[] = {
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 0 ) ),
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 1 ) ),
> +    ( MPIDR_HWID_MASK & AFFINITY_MASK( 2 ) )
> +#ifdef CONFIG_ARM_64
> +    ,( MPIDR_HWID_MASK & AFFINITY_MASK( 3 ) )
> +#endif
> +};
> +
> +int do_psci_0_2_affinity_info(register_t target_affinity,
> +                              uint32_t lowest_affinity_level)
>  {
> -    struct vcpu *v = current;
> -    if ( !test_and_set_bit(_VPF_down, &v->pause_flags) )
> -        vcpu_sleep_nosync(v);
> -    return PSCI_SUCCESS;
> +    struct domain *d = current->domain;
> +    struct vcpu *v;
> +    uint32_t vcpuid;
> +
> +    if ( lowest_affinity_level < ARRAY_SIZE(target_affinity_mask) )
> +        target_affinity &= target_affinity_mask[lowest_affinity_level];
> +    else
> +        return PSCI_INVALID_PARAMETERS;
> +
> +    for ( vcpuid = 0; vcpuid < d->max_vcpus; vcpuid++ )
> +    {
> +        v = d->vcpu[vcpuid];
> +
> +        if ( ( ( v->arch.vmpidr & target_affinity_mask[lowest_affinity_level] )

If you assign target_affinity_mask[lowest_affinity_level] to a local
variable you can avoid the awful line wrapping here.
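
i.e. something like:

        unsigned long mask = target_affinity_mask[lowest_affinity_level];

        target_affinity &= mask;

        for ( vcpuid = 0; vcpuid < d->max_vcpus; vcpuid++ )
        {
            v = d->vcpu[vcpuid];

            if ( (v->arch.vmpidr & mask) == target_affinity
                 /* ...rest of the condition as before... */ )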

Ian.

