
Re: [Xen-devel] [PATCH v8] xen/arm : emulation of arm's PSCI v0.2 standard



On Thu, 2014-07-31 at 12:30 +0530, Parth Dixit wrote:

> +    /* Affinity values are ignored in this implementation:
> +     * at present Xen does not support affinity levels greater
> +     * than 0, so for all affinity values passed we power down /
> +     * standby the current core. */
> +    if( power_state & PSCI_0_2_POWER_STATE_TYPE_MASK )
> +    {
> +        if ( is_32bit_domain(v->domain) )
> +            regs->r0 = context_id;
> +#ifdef CONFIG_ARM_64
> +        else
> +            regs->x0 = context_id;
> +#endif
> +    }
> +
> +    vcpu_block_unless_event_pending(v);
> +    return PSCI_SUCCESS;

I'm afraid this is still wrong (well, it is wrong, but also buggy in
such a way that it ends up doing the right thing for the wrong
reason...).

You must do one of two things: either return to the instruction after
the SMC with x0 == PSCI_SUCCESS, *or* jump to entry_point with
x0 == context_id.

Here you are apparently trying to implement something which is neither
of these, by setting x0 == context_id but returning to the instruction
after the SMC. It is buggy, though, because the "return PSCI_SUCCESS"
will overwrite your x0 setting.

Since Xen doesn't actually enter any low power state, what we actually
want is to return PSCI_SUCCESS to the instruction after the SMC in
every case, which, due to the bug above, is what you have actually
implemented.

So, the entire power_state if block is redundant. This function can
just be:

+register_t do_psci_0_2_cpu_suspend(uint32_t power_state,
+                                   register_t entry_point,
+                                   register_t context_id)
+{
+    vcpu_block_unless_event_pending(current);
+    return PSCI_SUCCESS;
+}

Does this make sense?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

