
Re: [Xen-devel] [PATCH v2 1/5] vTPM: event channel bind interdomain with para/hvm virtual machine




> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Xu, Quan
> Sent: Friday, January 09, 2015 9:25 AM
> To: Daniel De Graaf; xen-devel@xxxxxxxxxxxxx
> Cc: samuel.thibault@xxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 1/5] vTPM: event channel bind interdomain
> with para/hvm virtual machine
> 
> 
> 
> > -----Original Message-----
> > From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
> > Sent: Friday, January 09, 2015 1:48 AM
> > To: Xu, Quan; xen-devel@xxxxxxxxxxxxx
> > Cc: samuel.thibault@xxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx
> > Subject: Re: [PATCH v2 1/5] vTPM: event channel bind interdomain with
> > para/hvm virtual machine
> >
> > On 01/08/2015 11:49 AM, Xu, Quan wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
> > >> Sent: Thursday, January 08, 2015 11:55 PM
> > >> To: Xu, Quan; xen-devel@xxxxxxxxxxxxx
> > >> Cc: samuel.thibault@xxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx
> > >> Subject: Re: [PATCH v2 1/5] vTPM: event channel bind interdomain
> > >> with para/hvm virtual machine
> > >>
> > >> On 01/08/2015 03:20 AM, Xu, Quan wrote:
> > >>>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
> > >>>> Sent: Wednesday, January 07, 2015 3:47 AM
> > >>>> To: Xu, Quan; xen-devel@xxxxxxxxxxxxx
> > >>>> Cc: samuel.thibault@xxxxxxxxxxxx;
> > >>>> stefano.stabellini@xxxxxxxxxxxxx
> > >>>> Subject: Re: [PATCH v2 1/5] vTPM: event channel bind interdomain
> > >>>> with para/hvm virtual machine
> > >>>>
> > >>>> On 01/06/2015 11:46 AM, Xu, Quan wrote:
> > >>>>>> -----Original Message-----
> > >>>>>> From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
> > >>>>>> On 12/30/2014 11:44 PM, Quan Xu wrote: [...]
> > >>>>>>> diff --git a/extras/mini-os/tpmback.c b/extras/mini-os/tpmback.c
> > >>>>>> [...]
> > >>>>>>> +   domid = (domtype == T_DOMAIN_TYPE_HVM) ? 0 : tpmif->domid;
> > >>>>>>
> > >>>>>> Unless I'm missing something, this still assumes that the HVM
> > >>>>>> device model is located in domain 0, and so it will not work if
> > >>>>>> a stub domain is used for qemu.
> > >>>>>>
> > >>>>>
> > >>>>> QEMU is running in Dom0 as usual, so the domid is 0.
> > >>>>> Similar to the Linux PV frontend drivers, this frontend driver is
> > >>>>> enabled in QEMU.
> > >>>>
> > >>>> This is a valid configuration of Xen and these patches do suffice
> > >>>> to make it work.  I am trying to ensure that an additional type
> > >>>> of guest setup will also work with these patches.
> > >>>>
> > >>>> A useful feature of Xen is the ability to execute the QEMU device
> > >>>> model in a domain instead of a process in dom0.  When combined
> > >>>> with driver domains for devices, this can significantly reduce
> > >>>> both the attack surface of and amount of trust required of domain 0.
> > >>>>
> > >>>>> If you have any doubts, feel free to contact me; I will try my best
> > >>>>> to explain. I think your suggestions in the previous email were very
> > >>>>> helpful (Oct. 31st, 2014, 'Re: FW: [PATCH 1/6] vTPM: event channel
> > >>>>> bind interdomain with para/hvm virtual machine'). Maybe this is
> > >>>>> still a vague description :(
> > >>>>
> > >>>> This is accurate but possibly incomplete.
> > >>>>
> > >>>> This is my current understanding of the communications paths and
> > >>>> support for vTPMs in Xen:
> > >>>>
> > >>>>      Physical TPM (1.2; with new patches, may also be 2.0)
> > >>>>            |
> > >>>>     [MMIO pass-through]
> > >>>>            |
> > >>>>      vtpmmgr domain
> > >>>>            |
> > >>>>     [minios tpmback/front] ----- ((other domains' vTPMs))
> > >>>>            |
> > >>>>       vTPM domain (currently always emulates a TPM v1.2)
> > >>>>            |
> > >>>>     [minios tpmback]+----[Linux tpmfront]-- PV Linux domain (fully working)
> > >>>>            |         \
> > >>>>            |          +--[Linux tpmfront]-- HVM Linux with optional PV drivers
> > >>>>            |           \
> > >>>>     [QEMU XenDevOps]  [minios or Linux tpmfront]
> > >>>>            |                  |
> > >>>>     QEMU dom0 process   QEMU stub-domain
> > >>>>            |                  |
> > >>>>     [MMIO emulation]   [MMIO emulation]
> > >>>>            |                  |
> > >>>>       Any HVM guest      Any HVM guest
> > >>>>
> > >>>
> > >>> Great, good architecture. The following part was not taken into
> > >>> account in my previous design:
> > >>>
> > >>> [minios or Linux tpmfront]
> > >>>           |
> > >>>     QEMU stub-domain
> > >>>           |
> > >>>    [MMIO emulation]
> > >>>           |
> > >>>      Any HVM guest
> > >>>
> > >>> Thanks, Daniel, for sharing your design.
> > >>>>
> > >>>> The series you are sending will enable QEMU to talk to tpmback 
> > >>>> directly.
> > >>>> This is the best solution when QEMU is running inside domain 0,
> > >>>> because it is not currently a good idea to use Linux's tpmfront
> > >>>> driver to talk to each guest's vTPM domain.
> > >>>>
> > >>>> When QEMU is run inside a stub domain, there are a few more
> > >>>> things to
> > >>>> consider:
> > >>>>
> > >>>>     * This stub domain will not have domain ID 0; the vTPM must bind
> > >>>>       to another domain ID.
> > >>>>     * It is possible to use the native TPM driver for the stub domain
> > >>>>       (which may either run Linux or mini-os) because there is no
> > >>>>       conflict with a real TPM software stack running inside domain 0.
> > >>>>
> > >>>> Supporting this feature requires more granularity in the TPM
> > >>>> backend changes.
> > >>>> The vTPM domain's backend must be able to handle:
> > >>>>
> > >>>>     (1) guest domains which talk directly to the vTPM on their own behalf
> > >>>>     (2) QEMU processes in domain 0
> > >>>>     (3) QEMU domains which talk directly to the vTPM on behalf of a guest
> > >>>>
> > >>>> Cases (1) and (3) are already handled by the existing tpmback if
> > >>>> the proper domain ID is used.
> > >>>>
> > >>>> Your patch set currently breaks cases (1) and (3) for HVM guests
> > >>>> while enabling case (2).  An alternate solution that does not
> > >>>> break these cases while enabling case (2) is preferable.
> > >>>>
> > >>>> My thoughts on extending the xenstore interface via an example:
> > >>>>
> > >>>> Domain 0: runs QEMU for guest A
> > >>>> Domain 1: vtpmmgr
> > >>>> Domain 2: vTPM for guest A
> > >>>> Domain 3: HVM guest A
> > >>>>
> > >>>> Domain 4: vTPM for guest B
> > >>>> Domain 5: QEMU stubdom for guest B
> > >>>> Domain 6: HVM guest B
> > >>>>
> > >>>> /local/domain/2/backend/vtpm/3/0/*: backend A-PV
> > >>>> /local/domain/3/device/vtpm/0/*: frontend A-PV
> > >>>>
> > >>>> /local/domain/2/backend/vtpm/0/3/*: backend A-QEMU
> > >>>> /local/domain/0/qemu-device/vtpm/3/*: frontend A-QEMU (uses XenDevOps)
> > >>>
> > >>> I think '/local/domain/0/frontend/vtpm/3/0' is much better. Similarly,
> > >>> when a backend runs in QEMU in Domain-0, it is always stored as
> > >>> '/local/domain/0/backend/qdisk/1', etc. I will also modify the QEMU
> > >>> code to make '/local/domain/0/frontend/DEVICE' the general design for
> > >>> QEMU frontends running in Domain-0.
> > >>>
> > >>> For this example,
> > >>> Domain 0: runs QEMU for guest A
> > >>> Domain 1: vtpmmgr
> > >>> Domain 2: vTPM for guest A
> > >>> Domain 3: HVM guest A
> > >>>
> > >>> I will design the XenStore layout as follows:
> > >>>
> > >>> ## XenStore >> ##
> > >>> local = ""
> > >>>    domain = ""
> > >>>     0 = ""
> > >>>      frontend = ""
> > >>>       vtpm = ""
> > >>>        3 = ""
> > >>>         0 = ""
> > >>>          backend = "/local/domain/2/backend/vtpm/3/0"
> > >>>          backend-id = "2"
> > >>>          state = "*"
> > >>>          handle = "0"
> > >>>          ring-ref = "*"
> > >>>          event-channel = "*"
> > >>>          feature-protocol-v2 = "1"
> > >>>      backend = ""
> > >>>       qdisk = ""
> > >>>        [...]
> > >>>       console = ""
> > >>>       vif = ""
> > >>>        [...]
> > >>>     2 = ""
> > >>>      [...]
> > >>>      backend = ""
> > >>>       vtpm = ""
> > >>>        3 = ""
> > >>>         0 = ""
> > >>>          frontend = "/local/domain/0/frontend/vtpm/3/0"
> > >>>          frontend-id = "0" ('0', frontend is running in Domain-0)
> > >>>          [...]
> > >>>     3 = ""
> > >>>      [...]
> > >>>      device = "" (frontend device, the backend is running in QEMU/.etc)
> > >>>       vkbd = ""
> > >>>        [...]
> > >>>       vif = ""
> > >>>        [...]
> > >>> ## XenStore << ##
> > >>>
> > >>> Then, the source code can read XenStore to get the frontend-id or the
> > >>> frontend path directly. If you agree, I will modify the source code to
> > >>> align with the above XenStore design.
> > >>
> > >> I like the /local/domain/0/frontend/* path better than my initial
> > >> qemu suggestion, but I think the domain ID used should be the domain
> > >> ID of the vTPM domain, similar to how backends for the qemu stubdom
> > >> are done.  In this example, the paths would be
> > >> "/local/domain/0/frontend/vtpm/2/0" and "/local/domain/2/backend/vtpm/0/0".
> > >
> > > Thanks Graaf.

Stefano,
        Do you have any thoughts on this XenStore design? QEMU is running
in Dom0 as usual, and the frontend driver is enabled in QEMU. Thanks.

Quan
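To make the two layouts under discussion concrete, here is a minimal sketch (plain Python, not Xen or QEMU code; the helper names are hypothetical) of how the frontend and backend XenStore paths are built for the example domains in this thread (Dom0 runs QEMU, Domain 2 is the vTPM, Domain 3 is the HVM guest):

```python
# Hypothetical helpers (not part of Xen) that build the vTPM XenStore
# paths discussed in this thread for a QEMU-in-dom0 frontend.

def frontend_path(running_domid, key_domid, devid=0):
    # Node owned by the domain running the frontend (dom0 for QEMU),
    # keyed by a peer domain ID and a device index.
    return "/local/domain/%d/frontend/vtpm/%d/%d" % (running_domid, key_domid, devid)

def backend_path(running_domid, key_domid, devid=0):
    # Node owned by the vTPM domain running the backend.
    return "/local/domain/%d/backend/vtpm/%d/%d" % (running_domid, key_domid, devid)

# My original proposal keys both paths by the guest's domain ID (3):
print(frontend_path(0, 3))   # /local/domain/0/frontend/vtpm/3/0
print(backend_path(2, 3))    # /local/domain/2/backend/vtpm/3/0

# Daniel's suggestion keys the frontend by the vTPM domain ID (2) and the
# backend by the frontend's domain ID (0), avoiding any dependency on the
# guest's domain ID:
print(frontend_path(0, 2))   # /local/domain/0/frontend/vtpm/2/0
print(backend_path(2, 0))    # /local/domain/2/backend/vtpm/0/0
```

The second scheme also stays stable if a guest is ever given two vTPMs or a vTPM is shared, since the guest's domain ID never appears in either path.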

> > >   Domain 0: runs QEMU for guest A
> > >   Domain 1: vtpmmgr
> > >   Domain 2: vTPM for guest A
> > >   Domain 3: HVM guest A
> > >
> > > /local/domain/0/frontend/vtpm/2/0
> > > /local/domain/2/backend/vtpm/0/0
> > >
> > > I have one question. How does Domain 3 read/write XenStore paths such
> > > as /local/domain/0/frontend/vtpm/2/*?
> > >
> > > In the QEMU frontend, it can get the guest's Domain ID -- '3', but it
> > > does not know that the backend domain ID is '2'.
> >
> > I would have this as a parameter in describing the vTPM device,
> > similar to how the file name of the disk images are described.  The
> > actual command line or configuration for QEMU might use a domain name
> > instead of a domain ID.  I would check to see how disk or network
> > backend domains are handled (I assume they are supported by
> > qemu-in-dom0; I don't recall testing that setup
> > myself)
> 
> Thanks.
> I will also check to see how disk or network backend domains are handled.
> I will send out any results I find.
> 
> Quan
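For reference, disk and network frontends discover their backend by reading the "backend-id" and "backend" keys from their own frontend node; a vTPM frontend in QEMU could do the same. A minimal sketch under that assumption, using a plain dict to stand in for XenStore (real code would use libxenstore reads; the paths follow Daniel's example above):

```python
# Sketch only: a dict stands in for XenStore. Real code would read these
# keys from the frontend's own node via libxenstore.
fake_xenstore = {
    "/local/domain/0/frontend/vtpm/2/0/backend": "/local/domain/2/backend/vtpm/0/0",
    "/local/domain/0/frontend/vtpm/2/0/backend-id": "2",
}

def resolve_backend(store, frontend_node):
    # Read the backend domain ID and backend path from the frontend node,
    # the same way disk/network frontends locate their backend domain.
    backend_id = int(store[frontend_node + "/backend-id"])
    backend_path = store[frontend_node + "/backend"]
    return backend_id, backend_path

domid, path = resolve_backend(fake_xenstore, "/local/domain/0/frontend/vtpm/2/0")
print(domid, path)   # 2 /local/domain/2/backend/vtpm/0/0
```

On the configuration side, xl's vtpm device description already carries a backend domain parameter, so the toolstack (not the guest) is the natural place to supply the vTPM domain ID when writing these nodes.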
> 
> >
> > >> This avoids introducing a dependency on the domain ID of the guest
> > >> in a connection that does not directly involve that domain.  If a
> > >> guest ever needs two vTPMs or multiple guests share a vTPM, this
> > >> method of constructing the paths will avoid unneeded conflicts
> > >> (though I don't expect either of these situations to be normal).
> > >>
> > >>>>
> > >>>> /local/domain/4/backend/vtpm/5/0/*: backend B-QEMU
> > >>>> /local/domain/5/device/vtpm/0/*: frontend B-QEMU
> > >>>>
> > >>>> /local/domain/4/backend/vtpm/6/0/*: backend B-PV
> > >>>> /local/domain/6/device/vtpm/0/*: frontend B-PV
> > >>>>
> > >>>> Connections A-PV, B-PV, and B-QEMU would be created in the same
> > >>>> manner as the existing "xl vtpm-attach" command does now.  If the
> > >>>> HVM guest is not running Linux with the Xen tpmfront.ko loaded,
> > >>>> the A-PV and B-PV devices will remain unconnected; this is fine.
> > >>>>
> > >>>> Connection A-QEMU has a modified frontend state path to prevent
> > >>>> Linux from attaching its own TPM driver to the guest's TPM.
> > >>>
> > >>> Your design is working. For this case,
> > >>>
> > >>> Domain 4: vTPM for guest B
> > >>> Domain 5: QEMU stubdom for guest B
> > >>> Domain 6: HVM guest B
> > >>>
> > >>> As I understand it, the xl tools will create Domain 5 as a PV domain.
> > >>> This works with existing solutions, and I think it can be extended to
> > >>> libvirt too. You can connect Domain 6 to Domain 5 via QEMU command-line
> > >>> options, which is quite similar to TPM passthrough.
> > >>
> > >> Yes, this setup should be possible today once the proper device
> > >> configuration is added to the QEMU configuration.
> > >>
> > >>> So in this case, we don't need to distinguish '-PV' from '-QEMU';
> > >>> such suffixes would also be confusing in XenStore.
> > >>
> > >> Yes; this was one reason I did not want to introduce an "HVM" type in
> > >> Xenstore.
> > >>
> > >
> > > Hope someone can implement it...:)
> > >
> > > Intel
> > > Quan Xu
> > >
> > >> --
> > >> Daniel De Graaf
> > >> National Security Agency
> > >
> > >
> >
> >
> > --
> > Daniel De Graaf
> > National Security Agency
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
