
[Xen-devel] [PATCH] tpm: Restore functionality to xen vtpm driver.



Functionality of the xen-tpmfront driver was lost with the
introduction of xenbus multi-page support in the following
commit:

commit ccc9d90a9a8b5c4ad7e9708ec41f75ff9e98d61d ("xenbus_client:
Extend interface to support multi-page ring")

In this commit the address of the pointer to the shared page,
rather than the address of the shared page itself, was passed to
the xenbus_grant_ring() function.  As a result the driver would
attach to the vtpm-stubdom, but any attempt to send a command to
the stub domain would time out.
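
For illustration, a minimal user-space sketch of the distinction is
below.  The struct layout is a hypothetical mirror of the shr field
in the driver's tpm_private; after the commit above, the second
argument of xenbus_grant_ring() is the virtual address of the ring
pages themselves, so passing &priv->shr granted the page holding the
frontend's private pointer rather than the shared page:

    #include <stdio.h>

    /* Hypothetical mirror of the relevant part of tpm_private
     * in drivers/char/tpm/xen-tpmfront.c. */
    struct vtpm_shared_page { int dummy; };

    struct tpm_private {
            struct vtpm_shared_page *shr;
    };

    int main(void)
    {
            struct vtpm_shared_page page = { 0 };
            struct tpm_private priv = { .shr = &page };

            /* priv->shr is the address of the shared page itself,
             * which is what the backend must be granted... */
            printf("priv->shr  = %p\n", (void *)priv.shr);

            /* ...while &priv->shr is the address of the pointer
             * field inside tpm_private, i.e. the wrong memory. */
            printf("&priv->shr = %p\n", (void *)&priv.shr);

            return 0;
    }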

A diagnostic indicator of this regression is the following pair of
error messages, generated when the xen-tpmfront driver probes for a
device (-62 is -ETIME, a timed-out request):

<3>vtpm vtpm-0: tpm_transmit: tpm_send: error -62

<3>vtpm vtpm-0: A TPM error (-62) occurred attempting to determine the timeouts

This fix is relevant to all kernels from 4.1 onward, the release
in which multi-page xenbus support was introduced.

Daniel De Graaf formulated the fix by code inspection after the
regression point was located.

Signed-off-by: Dr. Greg Wettstein <greg@xxxxxxxxxxxx>
---
 drivers/char/tpm/xen-tpmfront.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
index 5aaa268..dd83a07 100644
--- a/drivers/char/tpm/xen-tpmfront.c
+++ b/drivers/char/tpm/xen-tpmfront.c
@@ -203,7 +203,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
                return -ENOMEM;
        }
 
-       rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
+       rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
        if (rv < 0)
                return rv;
 
-- 
2.2.2

