
Re: [PATCH V5 00/22] IOREQ feature (+ virtio-mmio) on Arm



On Thu, 28 Jan 2021 at 20:10, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 28/01/2021 18:16, Julien Grall wrote:
> > Hi Andrew,
> >
> > On 28/01/2021 18:10, Andrew Cooper wrote:
> >> On 28/01/2021 17:44, Julien Grall wrote:
> >>>
> >>>
> >>> On 28/01/2021 17:24, Stefano Stabellini wrote:
> >>>> On Thu, 28 Jan 2021, Julien Grall wrote:
> >>>>> On 25/01/2021 19:08, Oleksandr Tyshchenko wrote:
> >>>>>> Patch series [8] was rebased on recent "staging branch" (5e31789
> >>>>>> tools/ocaml/libs/xb: Do not crash after xenbus is unmapped) and
> >>>>>> tested on Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with
> >>>>>> virtio-mmio disk backend [9] running in driver domain and unmodified
> >>>>>> Linux Guest running on existing virtio-blk driver (frontend). No
> >>>>>> issues were observed. Guest domain 'reboot/destroy' use-cases work
> >>>>>> properly. Patch series was only build-tested on x86.
> >>>>>
> >>>>> I have done basic testing with a Linux guest on x86 and I didn't
> >>>>> spot any regression.
> >>>>>
> >>>>> Unfortunately, I don't have a Windows VM to hand, so I can't confirm
> >>>>> that there is no regression there. Can anyone give it a try?
> >>>>>
> >>>>> On a separate topic, Andrew expressed some concern on IRC about
> >>>>> exposing the bufioreq interface to architectures other than x86. I
> >>>>> will let him explain his view here.
> >>>>>
> >>>>> Given that IOREQ will be a tech preview on Arm (which implies the
> >>>>> interface is not stable), I think we can defer the discussion until
> >>>>> after the freeze.
> >>>>
> >>>> Yes, I agree that we can defer the discussion.
> >>>>
> >>>>
> >>>>> For now, I would suggest checking whether a Device Model is trying
> >>>>> to create an IOREQ server with bufioreq enabled (see
> >>>>> ioreq_server_create()). This would not alleviate Andrew's concern,
> >>>>> but it would at least allow us to judge whether the buffering concept
> >>>>> is going to be used on Arm (I can see some use for the Virtio
> >>>>> doorbell).
> >>>>>
> >>>>> What do others think?
> >>>>
> >>>> Difficult to say. We don't have any uses today but who knows in the
> >>>> future.
> >>>
> >>> I have some ideas, but so far Andrew objects to using the existing
> >>> bufioreq interface, for good reasons: it uses guest_cmpxchg(), which
> >>> is really expensive.
> >>
> >> Worse.  Patch 5 needs to switch cmpxchg() to guest_cmpxchg() to avoid
> >> reintroducing XSA-295, but doesn't.
> >
> > The memory is only shared with the emulator (we don't allow the legacy
> > interface on Arm).
>
> Excellent.
>
> > So why do you think it is re-introducing XSA-295?
>
> Because the Xen security model considers "stubdomain can DoS Xen" a
> security issue.
>
> Yes - I know that right now, it will only be the hardware domain which
> can use acquire_resource on ARM, but eventually we'll fix the
> refcounting bug, and lifting the "translate && !hwdom" restriction would
> be the thing that actually reintroduces XSA-295.

By the refcounting bug, are you referring to mapping foreign pages in a
domain? If so, on Arm we always increment the reference counter on the
foreign page when handling the map request, and the reference is released
when the emulator unmaps it. Therefore, we don't need the
"translate && !hwdom" restriction (see patch #16 [1]).
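
To make the lifetime rule explicit, here is a toy model in plain C11 (a
sketch, not the Xen code; page_get()/page_put() below merely stand in for
get_page()/put_page() and the map/unmap hypercall paths):

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-in for struct page_info: a reference count plus a "freed" flag. */
struct page {
    atomic_uint refcount;
    bool freed;
};

/* Take an extra reference; fails if the page is already being freed. */
static bool page_get(struct page *pg)
{
    unsigned int old = atomic_load(&pg->refcount);

    do {
        if ( old == 0 )
            return false;
    } while ( !atomic_compare_exchange_weak(&pg->refcount, &old, old + 1) );

    return true;
}

/* Drop a reference; the last one frees the page. */
static void page_put(struct page *pg)
{
    if ( atomic_fetch_sub(&pg->refcount, 1) == 1 )
        pg->freed = true;       /* stand-in for actually freeing the page */
}

int main(void)
{
    struct page pg = { .refcount = 1 };   /* the owning domain's reference */

    assert(page_get(&pg)); /* map request from the emulator takes a reference */
    page_put(&pg);         /* the owning domain drops its own reference */
    assert(!pg.freed);     /* the foreign mapping keeps the page alive */
    page_put(&pg);         /* emulator unmaps: last reference, page is freed */
    assert(pg.freed);

    return 0;
}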

In addition to that, patch #13 [2] replaces cmpxchg() with guest_cmpxchg()
to prevent XSA-295 from being reintroduced.
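
To spell out why that replacement matters: the buffered ioreq read/write
pointers live in a page shared with the emulator, so a plain
compare-and-swap loop over them can be kept failing indefinitely by the
other side, which is the XSA-295 class of DoS. As I understand it,
guest_cmpxchg() bounds the number of attempts and falls back to pausing the
offending domain so the update can complete. A toy model of the difference
(plain C11 atomics, not the Xen implementation; the retry bound and the
pause step are simplified):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 64-bit word shared with the emulator, packing the buffered-ioreq read
 * and write pointers (loosely modelled after buffered_iopage). */
static _Atomic uint64_t shared_ptrs;

/* Toy update rule: advance both packed pointers by one slot. */
static uint64_t advance(uint64_t old)
{
    uint32_t rd = (uint32_t)old + 1;
    uint32_t wr = (uint32_t)(old >> 32) + 1;

    return ((uint64_t)wr << 32) | rd;
}

/* Plain cmpxchg() pattern: retry until the CAS wins. If the emulator keeps
 * rewriting the shared word, this loop never terminates in the hypervisor. */
static void update_unbounded(void)
{
    uint64_t old = atomic_load(&shared_ptrs);

    while ( !atomic_compare_exchange_weak(&shared_ptrs, &old, advance(old)) )
        ;   /* 'old' is refreshed with the current value on each failure */
}

/* guest_cmpxchg()-style pattern: give up after a bounded number of attempts.
 * In Xen the slow path pauses the offending domain so the operation can
 * then complete without interference. */
static bool update_bounded(unsigned int max_tries)
{
    uint64_t old = atomic_load(&shared_ptrs);

    while ( max_tries-- )
        if ( atomic_compare_exchange_weak(&shared_ptrs, &old, advance(old)) )
            return true;

    return false;   /* caller would pause the domain and retry safely */
}

int main(void)
{
    update_unbounded();     /* fine when nobody races with us */
    printf("bounded update %s\n", update_bounded(8) ? "succeeded" : "gave up");

    return 0;
}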

So unless I am missing something, the two security issues you
mentioned are already covered by this series.
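
Going back to the interim suggestion quoted above (refusing bufioreq on Arm
until a real user shows up), the check at IOREQ server creation time would
only be a few lines. A sketch with stand-in names (BUFIOREQ_* and
check_bufioreq_handling() below are hypothetical; the real values are the
HVM_IOREQSRV_BUFIOREQ_* constants from the public headers, and whether the
check lives in common or arch code is left open):

#include <errno.h>
#include <stdio.h>

/* Stand-ins for the bufioreq handling modes a device model can request
 * when creating an IOREQ server. */
enum bufioreq_handling {
    BUFIOREQ_OFF,
    BUFIOREQ_LEGACY,
    BUFIOREQ_ATOMIC,
};

/* Hypothetical helper: refuse any buffered-ioreq request on Arm, so we can
 * see whether device models actually ask for it before stabilising the
 * interface. */
static int check_bufioreq_handling(enum bufioreq_handling handling)
{
    if ( handling != BUFIOREQ_OFF )
        return -EOPNOTSUPP;

    return 0;
}

int main(void)
{
    printf("OFF    -> %d\n", check_bufioreq_handling(BUFIOREQ_OFF));
    printf("ATOMIC -> %d\n", check_bufioreq_handling(BUFIOREQ_ATOMIC));

    return 0;
}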

Cheers,

[1] https://lore.kernel.org/xen-devel/1611601709-28361-17-git-send-email-olekstysh@xxxxxxxxx/
[2] https://lore.kernel.org/xen-devel/1611601709-28361-14-git-send-email-olekstysh@xxxxxxxxx/
