This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH][ACM] kernel enforcement of vbd policies via blkback driver

To: Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][ACM] kernel enforcement of vbd policies via blkback driver
From: Reiner Sailer <sailer@xxxxxxxxxx>
Date: Thu, 27 Jul 2006 11:37:23 -0400
Cc: Andrew Warfield <andrew.warfield@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, xense-devel@xxxxxxxxxxxxxxxxxxx, Bryan D Payne <bdpayne@xxxxxxxxxx>, ncmike@xxxxxxxxxx
Delivery-date: Thu, 27 Jul 2006 08:37:53 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1153964442.10332.121.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

> > Getting back to Reiner's point about block AC checks in the backend
> > drivers:  I think that if you trust the backend code sufficiently to
> > _have_ the AC check in the first place, then you trust it implicitly
> > to make correct use of page sharing etc.  So why not implement the
> > tests for (a) permission to talk to the specified frontend, and (b)
> > permission for that frontend to talk to the specified disk at the
> > store level (which is where the two drivers are negotiating things
> anyway), and just use existing in-hypervisor AC mechanisms to control
> whether the backend is allowed to map the comms page and connect event
> channels?
> I might be missing the point in the above paragraph.
> I'm not sure that we have to trust the BE at all.  It's possible to
> insert a trusted intermediate encrypt/decrypt/versioning/digital
> signature layer so we don't have to trust the BE with resource isolation
> or returning the right data and the FE can use mirroring for redundancy
> against data lost by the BE.
> So I think it's better for both a and b to be done by a trusted third
> party which is smaller, easier to verify and subject to less frequent
> change than a whole kernel.
> Harry.

Regarding the suggestion not to trust BE/device domain (this seems to be a very interesting discussion point):

I encourage building BE/device domains so that they can be trusted. To start the discussion, I state some of MY PERSONAL thoughts regarding the attacker model/trust model for sHype/ACM that support trusted BE/device domains.

Simplified Commercial-grade Guarantees:
i.   Confine workloads and resources so that viruses or other integrity problems do not spread from one workload type into another
ii.  Confine workloads and resources so that data does not leak from one workload to another
iii. Confinement will be no better or worse than the core hypervisor isolation (it depends on the hypervisor/hardware sHype operates on, here Xen)

Simplified Attacker model / trust model for above guarantees:
i.  Do not rely on the cooperation of any user domain (ensure confinement even if user domains go rogue)
ii. Rely on/trust device domains and other domains that host multiple workload types to keep those workloads separate

Risk management:
i.  If a trusted domain becomes compromised, this affects only the workload types that it handles
ii. If a trusted domain becomes compromised, the workload types it handles can no longer be guaranteed to be confined against each other

So I am actually encouraging trusting minimal, carefully engineered device domains, even when they serve different workload types.

Why? Here are my reasons (for discussion):

a) guest domains shall not be trusted (this is the whole point of having hypervisor-level coarse-grained security; it does not assume security in the guest OS)
b) device domains can be generic and used by many guest domains; they run a very limited number of processes
   --> evaluation in the long term seems most feasible for small domains with limited functionality that does not change often
c) IF you don't trust the device domain, then it can see only encrypted/signed data, and you don't get availability (assuming you don't trust any device domain, replication does not help availability because all of them can deny access at will in coordinated attacks)
   --> you need, for each workload type, another (trusted) domain that encrypts/signs
   --> you inherit performance overhead and key/other management overhead
   --> you introduce multiple trusted domains instead of a single one

My feeling is:
a) trusting a small number of specialized domains (device domains, security domains) scales, because such domains should remain pretty stable and can run minimally configured kernels etc.
b.1) when people write backend drivers, they manage to handle much more complex things than a function that resolves an access control decision
b.2) getting people to handle security the same way they handle memory management in their code seems a good step towards consolidating security (this is probably a quite controversial statement, valid mainly in the context of commercial COTS systems)

Concluding/summarizing: the device domain (BE) IS a trusted third party hosting shared hardware. I encourage discussion about why, or under which circumstances, moving the trust into yet another third party helps.
