
Re: [RFC v1 3/5] xen/arm: introduce SCMI-SMC mediator driver


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Oleksii Moisieiev <Oleksii_Moisieiev@xxxxxxxx>
  • Date: Tue, 25 Jan 2022 14:35:16 +0000
  • Cc: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>
  • Delivery-date: Tue, 25 Jan 2022 14:35:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 24, 2022 at 02:14:43PM -0800, Stefano Stabellini wrote:
> On Mon, 24 Jan 2022, Julien Grall wrote:
> > On 24/01/2022 19:06, Stefano Stabellini wrote:
> > > It looks like XEN_DOMCTL_host_node_by_path and
> > > XEN_DOMCTL_find_host_compatible_node would also solve the problem but I
> > > think that a single hypercall that retrieves the entire host DTB would
> > > be easier to implement
> > 
> > DOMCTL should only be used to handle per-domain information. Instead, we would
> > want to create a new sub-hypercall of either __HYPERVISOR_platform_op or
> > __HYPERVISOR_sysctl_op (not sure which one).
> > 
> > AFAICT, both are versioned.
> > 
> > > and more robust in the long term.
> > > hypfs has the advantage that it would create an interface more similar
> > > to the one people are already used to on Linux systems
> > > (/proc/device-tree). xl/libxl would have to scan the whole hypfs tree,
> > > which I intuitively think would be slower.
> > 
> > Even if you have the binary blob, you would still have to scan the
> > device-tree. That said, it is probably going to be a bit faster
> > because you have fewer hypercalls.
> > 
> > However, here this is a trade-off between memory use and speed. If you want
> > speed, then you may have to transfer up to 2MB every time. So the question is:
> > do we care more about speed or memory usage?
> > 
> > > Also the feature might be
> > > harder to implement but I am not sure.
> > > 
> > > I don't have a strong preference and this is not a stable interface (we
> > > don't have to be extra paranoid about forward and backward
> > > compatibility). So I am fine either way. Let's see what the others think
> > > as well.
> > 
> > My preference would be to use hypfs as this is cleaner than exposing a blob.
> 
> That's also fine by me. Probably the hypfs implementation shouldn't be
> much more difficult than something like
> XEN_DOMCTL_host_node_by_path/XEN_DOMCTL_find_host_compatible_node.
> 
> 
> > However, are we sure we can simply copy the content of the host Device-Tree to
> > the guest Device-Tree for SCMI? For instance, I know that for device
> > passthrough there are some properties that need to be altered for some devices.
> > Hence why they are not present. Although, I vaguely recall having written a
> > PoC, not sure if it was posted on the ML.
> 
> The SCMI node cannot be copied "as is" from host to guest. It needs a
> couple of changes but they seem feasible as they are limited to the
> channels exposed to the guest. (The generic device passthrough case is a
> lot more difficult.)


Hi Stefano,

What I'm wondering is whether we actually need to create the SCMI node in the
DomU device-tree at all.
I ask because the SCMI node does not need to be present in the DomU device-tree
if the domain has no passed-through devices that use SCMI.
So if there are no passed-through devices, or no DomU partial device-tree is
provided in the config, then there is no need to create the SCMI node.

For now I see the following possible DomU configurations:
1) DomU has a lot of passed-through devices, so it is easier to inherit the
host device-tree and disable the devices that are not passed through.
The partial device tree will look like this:

#include "r8a77961-salvator-xs.dts" //include host device tree

/ {
        soc {
                ...
        };
};

// Disable non passed-through devices
&hscif {
        status = "disabled";
};

In this case the DomU partial device-tree inherits the arm,scmi-smc and
arm,scmi-shmem nodes and all clocks/resets/power-domains that use SCMI.
All these nodes can be copied from the partial device-tree to the DomU
device-tree, as sketched below.
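
To illustrate what "copied" could look like on the xl/libxl side, here is a
minimal sketch using libfdt's sequential-write API. This is not the actual
libxl code; the function name and the assumption that the partial device-tree
is already loaded in memory as pfdt (with the guest FDT being built as gfdt)
are mine:

#include <libfdt.h>

/*
 * Rough sketch only: recursively copy one node (properties and subnodes)
 * from the partial device tree (pfdt) into the guest FDT (gfdt) that is
 * being built with libfdt's sequential-write API.
 */
static int copy_node(void *gfdt, const void *pfdt, int nodeoff)
{
    int propoff, suboff, res;

    res = fdt_begin_node(gfdt, fdt_get_name(pfdt, nodeoff, NULL));
    if (res)
        return res;

    /* Copy every property of the node verbatim. */
    fdt_for_each_property_offset(propoff, pfdt, nodeoff) {
        const char *name;
        int len;
        const void *prop = fdt_getprop_by_offset(pfdt, propoff, &name, &len);

        res = fdt_property(gfdt, name, prop, len);
        if (res)
            return res;
    }

    /* Recurse into subnodes, e.g. protocol@14 and protocol@16. */
    fdt_for_each_subnode(suboff, pfdt, nodeoff) {
        res = copy_node(gfdt, pfdt, suboff);
        if (res)
            return res;
    }

    return fdt_end_node(gfdt);
}

So copying the whole scmi node would be something like:
copy_node(gfdt, pfdt, fdt_path_offset(pfdt, "/scmi"));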

2) DomU has only a few passed-through devices, so it is easier to add the device
nodes to the passthrough node of the DomU partial device-tree.
The DomU partial device-tree will look like this:
/ {
        scmi_shmem: scp-shmem@53ff0000 {
                compatible = "arm,scmi-shmem";
                reg = <0x0 0x53FF0000 0x0 0x10000>;
        };

        scmi {
                arm,smc-id = <....>;
                compatible = "arm,scmi-smc";
                shmem = <&scmi_shmem>;

                scmi_clock: protocol@14 {
                        ...
                };

                scmi_reset: protocol@16 {
                        ...
                };
        };

        passthrough {
                hscif0: serial@e6540000 {
                        compatible = "renesas,hscif-r8a77961";
                        scmi_devid = <5>;
                        clocks = <&scmi_clock 5>;
                        resets = <&scmi_reset 5>;
                        ...
                };
        };
};

As you can see, in this case we have to copy the arm,scmi-shmem and arm,scmi-smc
nodes manually along with the hscif0 node, otherwise the device-tree compilation
will fail.
We can take the address 0x53FF0000 provided in the arm,scmi-shmem node, map the
domain's channel to that address, and copy the SCMI-related nodes to the DomU
device-tree (a rough sketch of the mapping follows below).
This is useful when we need to expose only certain protocols to the DomU.
It also makes it easy to modify the DomU scmi node, as we need for stm32mp1 for
example, where a different smc-id should be set for the DomU.
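
For the mapping part, something along these lines on the Xen side should be
enough, I think. This is only a sketch, not what the RFC patch currently does:
the function name, the headers and the chosen p2m type are my assumptions, and
channel_paddr stands for the physical address of the per-domain SCMI channel
shared memory:

#include <xen/sched.h>
#include <xen/mm.h>
#include <asm/p2m.h>

/* Guest address taken from the arm,scmi-shmem node of the partial DT. */
#define GUEST_SCMI_SHMEM_BASE   0x53FF0000UL
#define GUEST_SCMI_SHMEM_SIZE   0x10000UL       /* matches the reg size above */

static int map_scmi_shmem(struct domain *d, paddr_t channel_paddr)
{
    /* Map the channel shared memory at the address the DomU DT advertises. */
    return map_regions_p2mt(d,
                            gaddr_to_gfn(GUEST_SCMI_SHMEM_BASE),
                            GUEST_SCMI_SHMEM_SIZE >> PAGE_SHIFT,
                            maddr_to_mfn(channel_paddr),
                            p2m_mmio_direct_nc);
}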

3) DomU does not have any passed-through devices that use SCMI.
In this case we do not want SCMI nodes in the DomU device-tree.

I see only one use-case where we may need the SCMI nodes to be generated by xl
in the DomU device-tree:
Xen generates the psci node to handle cpu_on and cpu_off.
According to Section 4.3.2.5 of DEN0056C [1]:
> For these power domains, this protocol can be used to implement PSCI 
> CPU_SUSPEND, CPU_ON, CPU_FREEZE, CPU_DEFAULT_SUSPEND and CPU_OFF functions.

So in theory the psci node could use SCMI to control CPU state. But this is not
our use-case, because we do not want to give the DomU the ability to stop a
physical CPU. Also, Xen cannot intercept and handle CPU_ON and CPU_OFF requests
when mailbox transport is used for SCMI communication.

[1] "SCMI Specification DEN0056C," [Online]. Available: 
https://developer.arm.com/documentation/den0056/latest 

Best regards,
Oleksii.



 

