
Re: Security support status of xnf(4) and xbf(4)



On 3/29/22 04:16, Claudio Jeker wrote:
> On Mon, Mar 28, 2022 at 04:38:33PM -0400, Demi Marie Obenour wrote:
>> On 3/28/22 10:39, Mark Kettenis wrote:
>>>> Date: Mon, 28 Mar 2022 09:51:22 -0400
>>>> From: Demi Marie Obenour <demi@xxxxxxxxxxxxxxxxxxxxxx>
>>>>
>>>> On 3/27/22 21:45, Damien Miller wrote:
>>>>> On Fri, 25 Mar 2022, Demi Marie Obenour wrote:
>>>>>
>>>>>> Linux’s netfront and blkfront drivers recently had a security
>>>>>> vulnerability (XSA-396) that allowed a malicious backend to potentially
>>>>>> compromise them.  In follow-up audits, I found that OpenBSD’s xnf(4)
>>>>>> currently trusts the backend domain.  I reported this privately to Theo
>>>>>> de Raadt, who indicated that OpenBSD does not consider this to be a
>>>>>> security concern.
>>>>>>
>>>>>> This is obviously a valid position for the OpenBSD project to take, but
>>>>>> it is surprising to some (such as myself) from the broader Xen
>>>>>> ecosystem.  Standard practice in the Xen world is that bugs in frontends
>>>>>> that allow a malicious backend to cause mischief *are* considered
>>>>>> security bugs unless there is explicit documentation to the contrary.
>>>>>> As such, I believe this deserves to be noted in xnf(4) and xbf(4)’s man
>>>>>> pages.  If the OpenBSD project agrees, I am willing to write a patch,
>>>>>> but I have no experience with mandoc so it might take a few tries.
>>>>>
>>>>> Hang on, what is a "malicious backend" in this context? Is it something
>>>>> other than the Xen Hypervisor? If not, then it seems not to be a useful
>>>>> attack model, as the hypervisor typically has near-complete access to
>>>>> guests' memory and CPU state.
>>>>
>>>> The backend can run in any Xen VM.  It often runs in dom0, but it
>>>> is not required to, and in Qubes OS the network backend never runs
>>>> in dom0.  Unless it runs in dom0, it has no access to frontend memory,
>>>> except for memory the frontend has explicitly given it access to via
>>>> grant tables.
>>>
>>> So this is somewhat similar to the situation on sun4v (Sun's
>>> virtualization of the SPARC architecture).  When writing the vnet(4)
>>> and vdsk(4) drivers for OpenBSD, I did consider the implications of
>>> those drivers talking to a "malicious" domain.  The SPARC hypervisor
>>> implements a concept similar to grant tables.  It is fairly obvious
>>> that any memory you grant access to should be considered insecure.
>>> This means that you either have to make a copy of the data or revoke
>>> access to the shared memory through some sort of Hypervisor call that
>>> implements a synchronization point of some sort.  Otherwise you end
>>> up with TOCTOU issues all over the place.  But this obviously has
>>> significant performance consequences.  For vnet(4) I decided that an
>>> extra copy was worth doing and the only reasonable way of doing things
>>> given how OpenBSD's mbuf layer works.  But for vdsk(4) I decided to
>>> trust the other domain as there is no way to prevent it from feeding
>>> you compromised data.  Full disk encryption doesn't really solve the
>>> problem unless you have a way to securely verify the bootloader.
>>
>> In Qubes OS, xbf(4) devices are configurable.  While all of them are
>> provided by dom0 (which is trusted) by default, it is possible to
>> attach devices that are *not* provided by dom0, and these devices
>> should not be trusted.
>>
>>> Personally I think it might be beneficial for us to turn xnf(4) into
>>> what we colloquially call a "bcopy" network driver.  But folks who
>>> actually use xen may find the performance impact of doing this
>>> unacceptable and decide to trust the backend instead.
>>
>> You actually don’t have to do that.  The Xen network protocol
>> requires the backend to drop access to the buffer before handing it
>> back to the frontend, so the frontend only needs to verify that the
>> backend can no longer regain access.  That verification fails only
>> if the backend still holds access, which indicates a buggy or
>> malicious backend, in which case you should shut down the interface.
>> So there should not be any significant performance impact.
>>
>> If you are curious about how Linux does this, you can look at
>> drivers/xen/grant-table.c, drivers/net/xen-netfront.c, and
>> drivers/block/xen-blkfront.c from the Linux source.  They are
>> dual licensed GPL/MIT so there should not be licensing issues there.
>> Be sure to use a version at or after “xen/netfront: react properly to
>> failing gnttab_end_foreign_access_ref()” and the other XSA-396 patches.
> 
> So how does Xen manage to limit access to less than a page?
> The hardware on x86 does not give you byte-precise mappings for
> granting memory.
> An mbuf is 256 bytes, and only part of that is used for data.  Still,
> for DMA the full 4k page needs to be granted to the host.
> The only way this can be done is by copying all data into individual
> pages.  The same is true for the most common mbuf cluster size of 2k.

I was not aware that the OpenBSD mbuf layer could not handle the
approach I described.  Sorry for the misunderstanding.
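
For concreteness, here is a rough sketch of the check I was describing,
written against the Linux grant-table API (gnttab_end_foreign_access_ref()
is the real Linux call as of this writing; everything else, including the
structs, xnf_rx_complete() and xnf_fatal(), is made up for illustration
and is not actual OpenBSD or Linux code):

    /* Hypothetical receive-slot bookkeeping, for illustration only. */
    struct xnf_rx_slot {
            grant_ref_t      gref;  /* grant handed to the backend */
            struct mbuf     *m;     /* receive buffer that grant covers */
    };

    static struct mbuf *
    xnf_rx_complete(struct xnf_softc *sc, struct xnf_rx_slot *slot)
    {
            /*
             * Revoke the backend's access to the receive buffer.  The
             * protocol requires the backend to have dropped its
             * mapping already, so failure here means the backend is
             * buggy or hostile: never touch the buffer again, and
             * tear the interface down instead of continuing.
             *
             * (Second argument: the grant was read-write, per the
             * Linux signature at the time of writing.)
             */
            if (!gnttab_end_foreign_access_ref(slot->gref, 0)) {
                    xnf_fatal(sc, "backend still holds grant %u",
                        slot->gref);
                    return (NULL);
            }

            /*
             * Access is revoked, so the backend can no longer change
             * the data under us: no extra copy is needed on this path.
             */
            return (slot->m);
    }

The point is that the revocation acts as the synchronization point Mark
mentioned: once it succeeds, the data can no longer change, so no bounce
copy is required on this path.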

> So yes, this will be a bcopy ethernet driver and will thereby be on the
> same level of crappiness as bce(4) and old old old realtek.

Mark, is there any way this could be made tunable at runtime?
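
To illustrate what I mean by "tunable" (again just a sketch; every name
here is made up except m_copydata(), and the bounce-page bookkeeping is
elided):

    static int
    xnf_tx_enqueue(struct xnf_softc *sc, struct mbuf *m)
    {
            int slot = sc->sc_tx_prod;      /* next free bounce page */

            if (sc->sc_trust_backend) {
                    /*
                     * Fast path: grant the pages backing the mbuf
                     * chain to the backend directly.
                     */
                    return (xnf_tx_grant_mbuf(sc, m));
            }

            /*
             * Hardened path: copy the packet into a dedicated,
             * page-aligned bounce page that is the only memory ever
             * granted to the backend.  Grants are page-granular, and
             * an mbuf shares its page with unrelated kernel data, so
             * the copy keeps that data out of the backend's reach.
             */
            m_copydata(m, 0, m->m_pkthdr.len, sc->sc_tx_bounce[slot]);
            return (xnf_tx_grant_bounce(sc, slot, m->m_pkthdr.len));
    }

Whether the copy is acceptable could then be decided per interface, for
example trusting a dom0-provided backend but not a driver-domain one.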

> If you can't trust the host, don't run your VM on that host.

As Marek has stated, in Qubes OS the network backend is not considered
to be part of the host.  The host has no network access whatsoever.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
