
Re: Problems in PV dom0 on recent x86 hardware



On Tue, Jul 09, 2024 at 08:24:20AM +0200, Jan Beulich wrote:
> On 08.07.2024 23:30, Jason Andryuk wrote:
> > On 2024-07-08 05:12, Jan Beulich wrote:
> >> On 08.07.2024 11:08, Roger Pau Monné wrote:
> >>> On Mon, Jul 08, 2024 at 10:37:22AM +0200, Jan Beulich wrote:
> >>>> On 08.07.2024 10:15, Jürgen Groß wrote:
> >>>>> I've got an internal report about failures in dom0 when booting with
> >>>>> Xen on a Thinkpad P14s Gen 3 AMD (kernel 6.9).
> >>>>>
> >>>>> With some debugging I've found that the UCSI driver seems to fail to
> >>>>> map MFN feec2 as iomem, as the hypervisor is denying this mapping due
> >>>>> to being part of the MSI space. The mapping attempt seems to be the
> >>>>> result of an ACPI call of the UCSI driver:
> >>>>>
> >>>>> [   44.575345] RIP: e030:xen_mc_flush+0x1e8/0x2b0
> >>>>> [   44.575418]  xen_leave_lazy_mmu+0x15/0x60
> >>>>> [   44.575425]  vmap_range_noflush+0x408/0x6f0
> >>>>> [   44.575438]  __ioremap_caller+0x20d/0x350
> >>>>> [   44.575450]  acpi_os_map_iomem+0x1a3/0x1c0
> >>>>> [   44.575454]  acpi_ex_system_memory_space_handler+0x229/0x3f0
> >>>>> [   44.575464]  acpi_ev_address_space_dispatch+0x17e/0x4c0
> >>>>> [   44.575474]  acpi_ex_access_region+0x28a/0x510
> >>>>> [   44.575479]  acpi_ex_field_datum_io+0x95/0x5c0
> >>>>> [   44.575482]  acpi_ex_extract_from_field+0x36b/0x4e0
> >>>>> [   44.575490]  acpi_ex_read_data_from_field+0xcb/0x430
> >>>>> [   44.575493]  acpi_ex_resolve_node_to_value+0x2e0/0x530
> >>>>> [   44.575496]  acpi_ex_resolve_to_value+0x1e7/0x550
> >>>>> [   44.575499]  acpi_ds_evaluate_name_path+0x107/0x170
> >>>>> [   44.575505]  acpi_ds_exec_end_op+0x392/0x860
> >>>>> [   44.575508]  acpi_ps_parse_loop+0x268/0xa30
> >>>>> [   44.575515]  acpi_ps_parse_aml+0x221/0x5e0
> >>>>> [   44.575518]  acpi_ps_execute_method+0x171/0x3e0
> >>>>> [   44.575522]  acpi_ns_evaluate+0x174/0x5d0
> >>>>> [   44.575525]  acpi_evaluate_object+0x167/0x440
> >>>>> [   44.575529]  acpi_evaluate_dsm+0xb6/0x130
> >>>>> [   44.575541]  ucsi_acpi_dsm+0x53/0x80
> >>>>> [   44.575546]  ucsi_acpi_read+0x2e/0x60
> >>>>> [   44.575550]  ucsi_register+0x24/0xa0
> >>>>> [   44.575555]  ucsi_acpi_probe+0x162/0x1e3
> >>>>> [   44.575559]  platform_probe+0x48/0x90
> >>>>> [   44.575567]  really_probe+0xde/0x340
> >>>>> [   44.575579]  __driver_probe_device+0x78/0x110
> >>>>> [   44.575581]  driver_probe_device+0x1f/0x90
> >>>>> [   44.575584]  __driver_attach+0xd2/0x1c0
> >>>>> [   44.575587]  bus_for_each_dev+0x77/0xc0
> >>>>> [   44.575590]  bus_add_driver+0x112/0x1f0
> >>>>> [   44.575593]  driver_register+0x72/0xd0
> >>>>> [   44.575600]  do_one_initcall+0x48/0x300
> >>>>> [   44.575607]  do_init_module+0x60/0x220
> >>>>> [   44.575615]  __do_sys_init_module+0x17f/0x1b0
> >>>>> [   44.575623]  do_syscall_64+0x82/0x170
> >>>>> [   44.575685] 1 of 1 multicall(s) failed: cpu 4
> >>>>> [   44.575695]   call  1: op=1 result=-1 caller=xen_extend_mmu_update+0x4e/0xd0 pars=ffff888267e25ad0 1 0 7ff0 args=9ba37a678 80000000feec2073
> >>>>>
> >>>>> The pte value of the mmu_update call is 80000000feec2073, which is
> >>>>> rejected by the hypervisor with -EPERM.
> >>>>>
> >>>>> Before diving deep into the UCSI internals, is it possible that the
> >>>>> hypervisor needs some update (IOW: could it be that the mapping attempt
> >>>>> should rather be honored, as there might be an I/O resource at this
> >>>>> position which dom0 needs to access in order to use the related
> >>>>> hardware)?
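
For reference, the failing PTE value decodes as below. This is a minimal
standalone sketch assuming the standard x86-64 PTE layout; only the value
itself is taken from the report above.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pte = 0x80000000feec2073ULL;

        /* Bits 12-51 hold the frame number, the low 12 bits the flags. */
        uint64_t mfn   = (pte >> 12) & ((1ULL << 40) - 1);
        uint64_t flags = pte & 0xfff;
        int nx         = pte >> 63;

        /* Prints mfn=0xfeec2 flags=0x73 nx=1, i.e. a present, writable,
         * uncachable (PCD) mapping of a page inside 0xfee00000-0xfeefffff,
         * the MSI / local APIC address window which Xen refuses to map. */
        printf("mfn=%#llx flags=%#llx nx=%d\n",
               (unsigned long long)mfn, (unsigned long long)flags, nx);
        return 0;
    }
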
> >>>>
> >>>> Adding to Andrew's reply: Is there any BAR in the system covering that
> >>>> address? Or is it rather ACPI "making up" that address (which would
> >>>> remind me of IO-APIC space being accessed by certain incarnations of
> >>>> ACPI, resulting in similar issues)?
> >>>
> >>> So you think ACPI is using some kind of backdoor to access the local
> >>> APIC registers?
> >>
> >> No, I'm wondering if they're trying to access *something*. As it stands we
> >> don't even know what kind of access is intended; all we know is that they're
> >> trying to map that page (and maybe adjacent ones).
> > 
> > From the backtrace, it looks like the immediate case is just trying to
> > read a 4-byte version:
> > 
> >  >>>> [   44.575541]  ucsi_acpi_dsm+0x53/0x80
> >  >>>> [   44.575546]  ucsi_acpi_read+0x2e/0x60
> >  >>>> [   44.575550]  ucsi_register+0x24/0xa0
> >  >>>> [   44.575555]  ucsi_acpi_probe+0x162/0x1e3
> > 
> > int ucsi_register(struct ucsi *ucsi)
> > {
> >          int ret;
> > 
> >          ret = ucsi->ops->read(ucsi, UCSI_VERSION, &ucsi->version,
> >                                sizeof(ucsi->version));
> > 
> > ->read being ucsi_acpi_read()
> > 
> > However, the driver also appears to write to adjacent addresses.
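
For context, the read path is roughly the following - a sketch from memory
of drivers/usb/typec/ucsi/ucsi_acpi.c, so names and details may not match
the exact kernel version:

    static int ucsi_acpi_read(struct ucsi *ucsi, unsigned int offset,
                              void *val, size_t val_len)
    {
        struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
        int ret;

        /* Evaluate the UCSI _DSM "read" function; the AML behind it is
         * what shows up as acpi_ex_system_memory_space_handler() in the
         * backtrace above. */
        ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
        if (ret)
            return ret;

        /* Only afterwards is the result copied out of the shared mailbox. */
        memcpy(val, ua->base + offset, val_len);

        return 0;
    }
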
> 
> There are also corresponding write functions in the driver, yes, but
> ucsi_acpi_async_write() (used directly or indirectly) similarly calls
> ucsi_acpi_dsm(), which wires through to acpi_evaluate_dsm(). That's
> ACPI object evaluation, and without seeing the involved AML it isn't
> obvious whether it might write said memory region. The writing done in
> the write function(s) looks to be
> 
>       memcpy(ua->base + offset, val, val_len);
> 
> with their read counterpart being
> 
>       memcpy(val, ua->base + offset, val_len);
> 
> where ua->base may well be an entirely different address (looks like
> it's the first of the BARs as per ucsi_acpi_probe()).
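
Seen from the driver, that _DSM evaluation is plain object evaluation;
roughly (again a sketch from memory of ucsi_acpi_dsm(); field names and the
revision/function constants may differ from the actual source):

    static int ucsi_acpi_dsm(struct ucsi_acpi *ua, int func)
    {
        union acpi_object *obj;

        /* Any SystemMemory access happens inside the evaluated AML; the
         * driver itself only sees the returned object (if any). */
        obj = acpi_evaluate_dsm(ACPI_HANDLE(ua->dev), &ua->guid, 1, func,
                                NULL);
        if (!obj)
            return -EIO;

        ACPI_FREE(obj);
        return 0;
    }
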
> 
> If acpi_evaluate_dsm() only ever read the region, an option (if all
> else fails) might be to permit read accesses / mappings, similar to
> what we do for IO-APICs, by inserting the range into mmio_ro_ranges.
> Yet of course we first need to better understand what's actually going
> on here.
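
FWIW, on the Xen side that option would boil down to something along the
lines of the below. This is only a sketch assuming the existing
mmio_ro_ranges rangeset machinery; where such a call would live, and under
which conditions, is exactly the open question:

    /* Allow read-only dom0 access to MFN 0xfeec2, the way IO-APIC pages
     * are handled today. The MFN is hardcoded purely for illustration; a
     * real change would need to know which range (if any) the firmware
     * legitimately expects to read. */
    if ( rangeset_add_range(mmio_ro_ranges, 0xfeec2, 0xfeec2) )
        printk(XENLOG_WARNING "Cannot add MFN %#lx to mmio_ro_ranges\n",
               0xfeec2UL);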

When accessed from the CPU, what's in this range apart from the first
page (0xfee00), which is the APIC MMIO window in xAPIC mode?

Regards, Roger.



 

