
Re: [Xen-devel] [PATCH 3/3] pvh/dom0: whitelist PVH Dom0 ACPI tables



On Tue, Feb 13, 2018 at 08:22:33AM -0700, Jan Beulich wrote:
> >>> On 13.02.18 at 16:11, <roger.pau@xxxxxxxxxx> wrote:
> > On Tue, Feb 13, 2018 at 06:41:14AM -0700, Jan Beulich wrote:
> >> >>> On 13.02.18 at 12:27, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Tue, Feb 13, 2018 at 04:04:17AM -0700, Jan Beulich wrote:
> >> >> >>> On 13.02.18 at 10:59, <roger.pau@xxxxxxxxxx> wrote:
> >> >> > On Tue, Feb 13, 2018 at 02:29:08AM -0700, Jan Beulich wrote:
> >> >> >> >>> On 08.02.18 at 13:25, <roger.pau@xxxxxxxxxx> wrote:
> >> >> >> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >> >> >> 
> >> >> >> A change like this should not come without a description providing a
> >> >> >> reason for the change. Otherwise how will someone wanting to
> >> >> >> understand the change in a couple of years actually be able to
> >> >> >> make any sense of it? This is in particular because I continue to be
> >> >> >> not fully convinced that white listing is appropriate in the Dom0
> >> >> >> case (and for the record I'm similarly unconvinced that black listing
> >> >> >> is the best choice, yet obviously we need to pick one of the two).
> >> >> > 
> >> >> > I'm sorry, I thought we agreed at the summit to convert this to
> >> >> > whitelisting because it was likely better to simply not expose unknown
> >> >> > ACPI tables to guests.
> >> >> 
> >> >> "to guests" != "to Dom0".
> >> >> 
> >> >> > I guess the commit message could be something like:
> >> >> > 
> >> >> > "The following list of whitelisted APIC tables are either known to 
> >> >> > work
> >> >> > or don't require any resources to be mapped in either the IO or the
> >> >> > memory space.
> >> >> 
> >> >> Even if the white listing vs black listing question wasn't still
> >> >> undecided, I think we should revert the patch in favor of one
> >> >> with a description. The one above might be fine with "ACPI" in
> >> >> place of "APIC" as far as tables actively white listed are
> >> >> concerned, but then it still remains open why certain tables
> >> >> haven't been included. I'm in particular worried about various
> >> >> APEI related tables, but invisibility of e.g. an IBFT could also
> >> >> lead to boot problems.
> >> > 
> >> > Regarding APEI, I think ERST, EINJ and HEST could be passed through;
> >> > BERT however requires that the Boot Error Region is mapped into the
> >> > Dom0 p2m.
> >> > 
> >> > Since PVH Dom0 creation still ends up in a panic, I see no problem in
> >> > adding those in follow-up patches.
> >> > 
> >> > IBFT also looks safe to pass through.
> >> 
> >> But you realize I've named only the few that came to mind
> >> immediately?
> > 
> > Sure, what I have in this patch is just the minimal set (plus a few
> > others that seem completely fine) needed in order to boot on my two
> > test boxes.
> > 
> > I know we will certainly have to expand this, but I see no issue in
> > adding them as we go, especially since this is all still unused.
> 
> Unused - sure. But how will we learn which ones we need to add?

I already have a rough draft of such a list: which tables could be
added as-is, which ones need handlers in order to make sure the
relevant areas are mapped, and finally which tables should never be
exposed to Dom0.
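
Roughly along these lines (illustrative only; this just covers the
tables that came up in this thread):

  Pass through as-is:    ERST, EINJ, HEST, IBFT
  Needs a handler first: BERT (the Boot Error Region has to be mapped
                         into the Dom0 p2m)
  Never exposed:         TBD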

This is based on the tables currently known to Xen from the actblX.h
headers; there are probably more in the wild, even ones not documented
at http://www.uefi.org/acpi at all.
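
To give an idea of the shape of check I have in mind, something along
these lines (only a sketch; the function name and the set of
signatures below are illustrative, not the final list):

    /* Sketch: signature-based whitelist of ACPI tables exposed to a
     * PVH Dom0. The signatures listed are just the ones discussed in
     * this thread. */
    static bool __init acpi_table_allowed(const char *sig)
    {
        static const char __initconst allowed[][ACPI_NAME_SIZE] = {
            "ERST", "EINJ", "HEST",   /* APEI tables deemed safe */
            "IBFT",                   /* iSCSI Boot Firmware Table */
        };
        unsigned int i;

        for ( i = 0; i < ARRAY_SIZE(allowed); i++ )
            if ( strncmp(sig, allowed[i], ACPI_NAME_SIZE) == 0 )
                return true;

        /* Tables such as BERT are deliberately left out: exposing them
         * also requires mapping extra regions (the Boot Error Region
         * in BERT's case) into the Dom0 p2m, so they need dedicated
         * handling first. */
        return false;
    }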

I can clean up and send that list.

> Surely waiting for problem reports from the field is not an
> acceptable model.

That's a last resort of course, but given that PVH Dom0 doesn't exist
yet, I don't see it as a problem to receive reports of missing tables
(or of things not working as expected) while the feature is still
experimental.

Roger.
