
Re: [Xen-devel] [PATCH v3 4/6] xen/arm: zynqmp: eemi access control



(Resending the e-mail using a different SMTP server)

On 15/10/2018 08:25, Julien Grall wrote:
Hi,

On 10/10/2018 11:35 PM, Stefano Stabellini wrote:
On Tue, 28 Aug 2018, Julien Grall wrote:
On 11/08/18 01:01, Stefano Stabellini wrote:
   #include <xen/iocap.h>
   #include <xen/sched.h>
   #include <xen/types.h>
@@ -23,6 +91,721 @@
   #include <asm/regs.h>
   #include <asm/platforms/xilinx-zynqmp-eemi.h>
   +struct pm_access
+{
+    paddr_t addr;

It seems that the address will always page-aligned. So could we store a frame
using mfn_t?

Yes we could store just the frame. I started to make the change
suggested, and all the required corresponding changes to the
initializations below, for instance:

   [NODE_RPU] = { MM_RPU },

needs to become:

   [NODE_RPU] = { _mfn(MM_RPU) },

but when I tried to do that, I get a compilation error:

   xilinx-zynqmp-eemi.c:106:20: error: initializer element is not constant
        [NODE_RPU] = { _mfn(MM_RPU) },

Unfortunately it is not possible to use _mfn() in static initializers,
because it is a static inline. I could do:


This is a bug in GCC older than 5.0.

   [NODE_RPU] = { { MM_RPU } },

which would work for DEBUG builds but wouldn't work for non-DEBUG
builds.

I'll keep paddr_t for the next version of the series.

What is the issue with that on non-debug builds? We are using this construct in other places without any compiler issue.



+    bool hwdom_access;    /* HW domain gets access regardless.  */
+};
+
+/*
+ * This table maps a node into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the node to grant it access to EEMI calls for that node.
+ */
+static const struct pm_access pm_node_access[] = {

[...]

+
+/*
+ * This table maps reset line IDs into a memory address.
+ * If a guest has access to the address, it has enough control
+ * over the affected node to grant it access to EEMI calls for
+ * resetting that node.
+ */
+#define XILPM_RESET_IDX(n) (n - XILPM_RESET_PCIE_CFG)
+static const struct pm_access pm_reset_access[] = {

[...]

+
+/*
+ * This table maps MMIO access ranges to the node that controls them.
+ * If a guest has access to the node, it is also given (possibly
+ * masked or read-only) access to the MMIO range.
+ */
+static const struct {
+    paddr_t start;
+    paddr_t size;
+    uint32_t mask;   /* Zero means no mask, i.e all bits.  */
+    enum pm_node_id node;
+    bool hwdom_access;
+    bool readonly;
+} pm_mmio_access[] = {

Those 3 arrays contain a lot of hardcoded values. Can't any of this be
detected from the device tree?

No, the information is not available in the device tree, unfortunately.

I would be interested to know how this is going to work with upstream Linux.
Do you hardcode all the values there as well?

Yes: the IDs are specified in a header file, see
include/linux/firmware/xlnx-zynqmp.h on the zynq/firmware branch of the
arm-soc tree. In addition to the IDs, we also have the memory addresses
in Xen to do the permission checks.

I am afraid this is not upstream Linux. Can you point to the code in Linus's git or explain the state of the review?


+static bool pm_check_access(const struct pm_access *acl, struct domain *d,
+                            uint32_t idx)
+{
+    unsigned long mfn;

mfn_t please. The code is not that nice but at least it adds more safety. Hopefully iomem_access_permitted will be converted to typesafe MFN soon.

Yes, I can make this change without issues.


+
+    if ( acl[idx].hwdom_access && is_hardware_domain(d) )
+        return true;
+
+    if ( !acl[idx].addr )
+        return false;
+
+    mfn = PFN_DOWN(acl[idx].addr);

maddr_to_mfn(...);

OK


+    return iomem_access_permitted(d, mfn, mfn);

Is the address something that a guest would be allowed to read/write directly?
If not, then iomem_access_permitted(...) should not be used.

Yes, the address would be something remapped to the guest using iomem.

In the documentation at the beginning of the file you say that the memory ranges are typically secure memory. So how can a guest access them directly?

I probably need a bit more background here... What does the address correspond to, in the end?


+{
+    unsigned int i;
+    bool ret = false;
+    uint32_t prot_mask = 0;
+
+    /*
+     * The hardware domain gets read access to everything.
+     * Lower layers will do further filtering.
+     */
+    if ( !write && is_hardware_domain(d) )
+        return true;
+
+    /* Scan the ACL.  */
+    for ( i = 0; i < ARRAY_SIZE(pm_mmio_access); i++ )
+    {
+        if ( addr < pm_mmio_access[i].start )
+            return false;
+        if ( addr > pm_mmio_access[i].start + pm_mmio_access[i].size )

I would add an ASSERT(pm_mmio_access[i].start >= pm_mmio_access[i].size) to
catch wrapping.

I take that you meant the following:

   ASSERT(pm_mmio_access[i].start + pm_mmio_access[i].size >= pm_mmio_access[i].start)

Yes.



+            continue;
+
+        if ( write && pm_mmio_access[i].readonly )
+            return false;
+        if ( pm_mmio_access[i].hwdom_access && !is_hardware_domain(d) )
+            return false;
+        if ( !domain_has_node_access(d, pm_mmio_access[i].node) )
+            return false;
+
+        /* We've got access to this reg (or parts of it).  */
+        ret = true;
+
+        /* Permit write access to selected bits.  */
+        prot_mask |= pm_mmio_access[i].mask ?: 0xFFFFFFFF;
Can you use GENMASK here?

Sure

NIT: newline

OK

+        break;
+    }
+
+    /* Masking only applies to writes.  */
+    if ( write )
+        *mask &= prot_mask;

So for reads there is no masking? What should the value be?

Yes, this is my understanding. The value is safe to read, but not all
bits are writeable.

It would be good to write that assumption somewhere.

Cheers,


--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
