
Re: [Xen-devel] [PATCH][BIOS] Fix TPMD and QEMU connection




xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 12/19/2007 07:53:11 PM:

> Hi,
>
> In an HVM domain, the MA_Transmit function in tcgbios sometimes fails
> with an error (TCG_NO_RESPONSE). The cause is that the connection
> between QEMU and the TPMD instance is not established within
> MA_Transmit's timeout.
>
> The attached patch corrects this so that the connection between QEMU
> and TPMD is completed before MA_Transmit is called.
>
> Signed-off-by: Kouichi YASAKI <yasaki.kouichi@xxxxxxxxxxxxxx>
>
> Thanks
>   Kouichi YASAKI
>
> diff -r d9ab9eb2bfee tools/ioemu/hw/tpm_tis.c
> --- a/tools/ioemu/hw/tpm_tis.c   Sat Dec 15 18:29:27 2007 +0000
> +++ b/tools/ioemu/hw/tpm_tis.c   Mon Dec 17 19:46:42 2007 +0900
> @@ -904,6 +904,10 @@ void tpm_tis_init(SetIRQFunc *set_irq, v
>      memset(s->buffer.buf,0,sizeof(s->buffer.buf));
>  
>      register_savevm("tpm-tis", 0, 1, tpm_save, tpm_load, s);
> +
> +    while(!IS_COMM_WITH_VTPM(s)){
> +       open_vtpm_channel(s);
> +    }
>  }


I'll have a look at this. The problem probably stems from the vTPM manager starting the vTPM too late, while qemu is already up and running, so it's a timing problem between the two processes. Still, the fix shouldn't try to connect endlessly in a busy loop. At the very least there should be a counter that limits it to maybe 5 attempts, with a [u]sleep() in the loop, something like this:

    int ctr = 0;
    /* Bounded retry: give the vTPM manager a few seconds to bring
       the vTPM instance up instead of spinning forever. */
    while (!IS_COMM_WITH_VTPM(s) && ctr < 5) {
        open_vtpm_channel(s);
        ctr++;
        sleep(1);
    }
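
For finer granularity, the same idea with usleep() could look roughly like this (just a sketch; the retry count and interval are picked arbitrarily, and it skips the sleep once the channel is up):

    #include <unistd.h>   /* usleep() */

    int ctr = 0;
    /* Up to 10 attempts, 500 ms apart: ~5 seconds total. */
    while (!IS_COMM_WITH_VTPM(s) && ctr < 10) {
        open_vtpm_channel(s);
        if (!IS_COMM_WITH_VTPM(s))
            usleep(500 * 1000);  /* 500 ms */
        ctr++;
    }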


   Stefan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

