WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] [PATCH][BIOS]Fix TPMD and QEMU connection

To: Stefan Berger <stefanb@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][BIOS]Fix TPMD and QEMU connection
From: Kouichi Yasaki <yasaki.kouichi@xxxxxxxxxxxxxx>
Date: Thu, 20 Dec 2007 12:12:51 +0900
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 19 Dec 2007 19:14:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <OF32654215.919BF0AD-ON852573B7.000C6FFA-852573B7.000CDEEC@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <OF32654215.919BF0AD-ON852573B7.000C6FFA-852573B7.000CDEEC@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.9 (Windows/20071031)
Hi Stefan-san,

Thank you for correcting my patch.
I also think that it should not try to connect endlessly in a busy loop.

The attached file is the corrected patch.

Thanks
  Kouichi YASAKI


xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 12/19/2007 07:53:11 PM:

 > Hi,
 >
 > In an HVM domain, the MA_Transmit function in tcgbios sometimes returns an
 > error (TCG_NO_RESPONSE). The cause of the error is that the connection
 > between QEMU and the TPMD instance is not established within the
 > MA_Transmit timeout.
 >
 > The attached patch ensures that the connection between QEMU and TPMD is
 > completed before MA_Transmit is called.
 >
 > Signed-off-by: Kouichi YASAKI <yasaki.kouichi@xxxxxxxxxxxxxx>
 >
 > Thanks
 >   Kouichi YASAKI
 >
 > diff -r d9ab9eb2bfee tools/ioemu/hw/tpm_tis.c
 > --- a/tools/ioemu/hw/tpm_tis.c   Sat Dec 15 18:29:27 2007 +0000
 > +++ b/tools/ioemu/hw/tpm_tis.c   Mon Dec 17 19:46:42 2007 +0900
 > @@ -904,6 +904,10 @@ void tpm_tis_init(SetIRQFunc *set_irq, v
 >      memset(s->buffer.buf,0,sizeof(s->buffer.buf));
 >      register_savevm("tpm-tis", 0, 1, tpm_save, tpm_load, s);
 > +
 > +    while(!IS_COMM_WITH_VTPM(s)){
 > +       open_vtpm_channel(s);
 > +    }
 >  }

I'll have a look at this. The problem probably stems from the vTPM manager starting the vTPM too late while qemu is already up and running, so it's a timing problem between the two processes. I don't think it should try to connect endlessly in a busy loop. At least there should be a counter that limits this to maybe 5 tries, with a [u]sleep() in the loop.

    int ctr = 0;
    while (!IS_COMM_WITH_VTPM(s) && ctr < 5) {
        open_vtpm_channel(s);
        ctr++;
        sleep(1);
    }


   Stefan

 > /****************************************************************************/
 > _______________________________________________
 > Xen-devel mailing list
 > Xen-devel@xxxxxxxxxxxxxxxxxxx
 > http://lists.xensource.com/xen-devel
diff -r 966a6d3b7408 tools/ioemu/hw/tpm_tis.c
--- a/tools/ioemu/hw/tpm_tis.c  Fri Dec 14 11:50:24 2007 +0000
+++ b/tools/ioemu/hw/tpm_tis.c  Thu Dec 20 11:56:24 2007 +0900
@@ -904,6 +904,13 @@ void tpm_tis_init(SetIRQFunc *set_irq, v
     memset(s->buffer.buf,0,sizeof(s->buffer.buf));
 
     register_savevm("tpm-tis", 0, 1, tpm_save, tpm_load, s);
+
+    int ctr = 0;
+    while(!IS_COMM_WITH_VTPM(s) && ctr < 5){
+       open_vtpm_channel(s);
+       ctr++;
+       sleep(1);
+    }
 }
 
 /****************************************************************************/
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel