
Re: [RFC v1 3/5] xen/arm: introduce SCMI-SMC mediator driver


  • To: Julien Grall <julien@xxxxxxx>
  • From: Oleksii Moisieiev <Oleksii_Moisieiev@xxxxxxxx>
  • Date: Thu, 6 Jan 2022 15:43:38 +0000
  • Accept-language: en-US
  • Cc: Oleksandr <olekstysh@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>
  • Delivery-date: Thu, 06 Jan 2022 15:43:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC v1 3/5] xen/arm: introduce SCMI-SMC mediator driver

On Thu, Jan 06, 2022 at 02:02:10PM +0000, Julien Grall wrote:
> 
> 
> On 06/01/2022 13:53, Oleksii Moisieiev wrote:
> > Hi Julien,
> 
> Hi,
> 
> > 
> > On Mon, Jan 03, 2022 at 01:14:17PM +0000, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 24/12/2021 17:02, Oleksii Moisieiev wrote:
> > > > On Fri, Dec 24, 2021 at 03:42:42PM +0100, Julien Grall wrote:
> > > > > On 20/12/2021 16:41, Oleksii Moisieiev wrote:
> > > > > > >      2) What are the expected memory attributes for the regions?
> > > > > > 
> > > > > > Xen uses iommu_permit_access() to pass the agent page to the
> > > > > > guest, so the guest can access the page directly.
> > > > > 
> > > > > I think you misunderstood my comment. Memory can be mapped with
> > > > > various types (e.g. Device, Normal) and attributes (cacheable,
> > > > > non-cacheable...). What will the firmware expect? What will the
> > > > > guest OS usually use?
> > > > > 
> > > > > The reason I am asking is that the attributes have to match to
> > > > > avoid any coherency issues. At the moment, if you use
> > > > > XEN_DOMCTL_memory_mapping, Xen will configure the stage-2 to use
> > > > > Device nGnRnE. As a result, the resulting memory access will be
> > > > > Device nGnRnE, which is very strict.
> > > > > 
> > > > 
> > > > Let me share a configuration example with you: SCMI expects the
> > > > memory to be described in the device tree:
> > > > 
> > > > cpu_scp_shm: scp-shmem@XXXXXXX {
> > > >         compatible = "arm,scmi-shmem";
> > > >         reg = <0x0 0xXXXXXX 0x0 0x1000>;
> > > > };
> > > > 
> > > > where XXXXXX is the address I allocate in the alloc_magic_pages() function.
> > > 
> > > The goal of alloc_magic_pages() is to allocate RAM. However, what
> > > you want is a guest physical address at which you can then map
> > > (part of) the shared page.
> > 
> > Do you mean that I can't use alloc_magic_pages() to allocate the
> > shared memory region for SCMI?
> Correct. alloc_magic_pages() will allocate a RAM page and then assign
> it to the guest. From your description, this is not what you want
> because you will call XEN_DOMCTL_memory_mapping (and therefore replace
> the mapping).
> 

Ok thanks, I will refactor this part in v2.
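
For the refactoring, I have something like the sketch below in mind (a
rough sketch, not the actual patch; GUEST_SCMI_SHMEM_BASE is a
placeholder name for the reserved guest address discussed further down):

static int scmi_map_channel(struct domain *d, paddr_t channel_paddr)
{
    /* Map the domain's SCMI channel page into the guest at a fixed
     * guest physical address instead of consuming a RAM page from
     * alloc_magic_pages(). On Arm, map_mmio_regions() installs the
     * stage-2 mapping with Device memory attributes, as discussed
     * above. */
    return map_mmio_regions(d, gaddr_to_gfn(GUEST_SCMI_SHMEM_BASE),
                            1 /* one 4K page */,
                            maddr_to_mfn(channel_paddr));
}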

> > 
> > > 
> > > I can see two options here:
> > >    1) Hardcode the SCMI region in the memory map
> > >    2) Create a new region in the memory map that can be used for
> > >       reserving memory for mapping.
> > 
> > Could you please explain what you mean by a "new region in the
> > memory map"?
> 
> I mean reserving some guest physical address space that could be used
> to map host physical addresses (e.g. the SCMI region, the GIC CPU
> interface...).
> 
> So rather than hardcoding the address, we have something more flexible.
> 

Ok, I will fix that in v2.
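
Something along these lines, I assume (hypothetical names, following
the style of the guest memory map in xen/include/public/arch-arm.h;
the address is illustrative only and would need to fit the existing
layout):

/* Reserved guest physical address space for mapping host pages such as
 * the SCMI shared-memory channel (placeholder values). */
#define GUEST_SCMI_SHMEM_BASE   xen_mk_ullong(0x02fff000)
#define GUEST_SCMI_SHMEM_SIZE   xen_mk_ullong(0x00001000)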

> > 
> > > 
> > > We still have plenty of space in the guest memory map. So the former is
> > > probably going to be fine for now.
> > > 
> > > > Then I get the paddr of the SCMI channel for this domain and use
> > > > XEN_DOMCTL_memory_mapping to map the SCMI channel address to a gfn.
> > > > Hope I was able to answer your question.
> > > 
> > > What you provided me is how the guest OS will locate the shared
> > > memory. This still doesn't tell me which memory attribute will be
> > > used to map the page in stage-1 (guest page-tables).
> > > 
> > > To find that out, you want to look at the driver and how the mapping is
> > > done. The Linux driver (drivers/firmware/arm_scmi) is using devm_ioremap()
> > > (see smc_chan_setup()).
> > > 
> > > Under the hood, the function devm_ioremap() is using PROT_DEVICE_nGnRE
> > > (arm64), which is one of the most restrictive memory attributes.
> > > 
> > > This means the firmware should be able to deal with the most restrictive
> > > attribute and therefore using XEN_DOMCTL_memory_mapping to map the shared
> > > page in stage-2 should be fine.
> > > 
> > 
> > I'm using a vmap() call to map the channel memory (see
> > smc_create_channel()). vmap() uses the PAGE_HYPERVISOR attribute,
> > which sets MT_NORMAL (0x7).
> 
> You want to use ioremap().
> 

I originally used ioremap(), but changed it to vmap() because memory
mapped with ioremap() cannot safely be accessed with memcpy().
What if I use __vmap() with the MT_DEVICE_nGnRE attribute?
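
Concretely, something like the sketch below; if I read asm-arm/page.h
correctly, PAGE_HYPERVISOR_NOCACHE already selects MT_DEVICE_nGnRE, and
ioremap_attr() is a thin wrapper around __vmap():

static void *scmi_map_shmem(paddr_t paddr)
{
    /* Map the channel as Device-nGnRE to match the attributes the
     * guest gets from devm_ioremap(). Note that memcpy() on Device
     * memory can fault on arm64 if it issues unaligned accesses, so
     * copies may need to use aligned word accesses. */
    return ioremap_attr(paddr, PAGE_SIZE, PAGE_HYPERVISOR_NOCACHE);
}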

> > Considering that the protocol is synchronous and only one agent per
> > channel is expected, this works fine for now. But I agree that the
> > same memory attributes should be used in Xen and the kernel. I will
> > fix that in v2.
> 
> I am a bit confused. Are you mapping the full shared memory area in Xen? If
> yes, why do you need to map the memory that is going to be shared with a
> domain?
> 

Xen needs access to all agent channels because it sends
SCMI_BASE_DISCOVER_AGENT to each channel and receives the agent_id
during the scmi_probe() call.
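
For reference, the exchange I mean is BASE_DISCOVER_AGENT from the SCMI
base protocol (protocol 0x10, message_id 0x7). A rough illustration of
the reply layout per the SCMI spec (DEN0056); the struct name is mine,
not from the patch:

struct scmi_base_discover_agent_out {
    int32_t  status;    /* 0 (SCMI_SUCCESS) on success */
    uint32_t agent_id;  /* agent identifier assigned by the firmware */
    char     name[16];  /* NUL-terminated agent name */
};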


Best regards,

Oleksii


 

