
Re: Problems in PV dom0 on recent x86 hardware


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jason Andryuk <jason.andryuk@xxxxxxx>
  • Date: Mon, 8 Jul 2024 17:30:27 -0400
  • Cc: Jürgen Groß <jgross@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 08 Jul 2024 21:30:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2024-07-08 05:12, Jan Beulich wrote:
On 08.07.2024 11:08, Roger Pau Monné wrote:
On Mon, Jul 08, 2024 at 10:37:22AM +0200, Jan Beulich wrote:
On 08.07.2024 10:15, Jürgen Groß wrote:
I've got an internal report about failures in dom0 when booting with
Xen on a Thinkpad P14s Gen 3 AMD (kernel 6.9).

With some debugging I've found that the UCSI driver fails to map MFN
feec2 as iomem: the hypervisor denies the mapping because the page is
part of the MSI space. The mapping attempt seems to be the result of
an ACPI call made by the UCSI driver:

[   44.575345] RIP: e030:xen_mc_flush+0x1e8/0x2b0
[   44.575418]  xen_leave_lazy_mmu+0x15/0x60
[   44.575425]  vmap_range_noflush+0x408/0x6f0
[   44.575438]  __ioremap_caller+0x20d/0x350
[   44.575450]  acpi_os_map_iomem+0x1a3/0x1c0
[   44.575454]  acpi_ex_system_memory_space_handler+0x229/0x3f0
[   44.575464]  acpi_ev_address_space_dispatch+0x17e/0x4c0
[   44.575474]  acpi_ex_access_region+0x28a/0x510
[   44.575479]  acpi_ex_field_datum_io+0x95/0x5c0
[   44.575482]  acpi_ex_extract_from_field+0x36b/0x4e0
[   44.575490]  acpi_ex_read_data_from_field+0xcb/0x430
[   44.575493]  acpi_ex_resolve_node_to_value+0x2e0/0x530
[   44.575496]  acpi_ex_resolve_to_value+0x1e7/0x550
[   44.575499]  acpi_ds_evaluate_name_path+0x107/0x170
[   44.575505]  acpi_ds_exec_end_op+0x392/0x860
[   44.575508]  acpi_ps_parse_loop+0x268/0xa30
[   44.575515]  acpi_ps_parse_aml+0x221/0x5e0
[   44.575518]  acpi_ps_execute_method+0x171/0x3e0
[   44.575522]  acpi_ns_evaluate+0x174/0x5d0
[   44.575525]  acpi_evaluate_object+0x167/0x440
[   44.575529]  acpi_evaluate_dsm+0xb6/0x130
[   44.575541]  ucsi_acpi_dsm+0x53/0x80
[   44.575546]  ucsi_acpi_read+0x2e/0x60
[   44.575550]  ucsi_register+0x24/0xa0
[   44.575555]  ucsi_acpi_probe+0x162/0x1e3
[   44.575559]  platform_probe+0x48/0x90
[   44.575567]  really_probe+0xde/0x340
[   44.575579]  __driver_probe_device+0x78/0x110
[   44.575581]  driver_probe_device+0x1f/0x90
[   44.575584]  __driver_attach+0xd2/0x1c0
[   44.575587]  bus_for_each_dev+0x77/0xc0
[   44.575590]  bus_add_driver+0x112/0x1f0
[   44.575593]  driver_register+0x72/0xd0
[   44.575600]  do_one_initcall+0x48/0x300
[   44.575607]  do_init_module+0x60/0x220
[   44.575615]  __do_sys_init_module+0x17f/0x1b0
[   44.575623]  do_syscall_64+0x82/0x170
[   44.575685] 1 of 1 multicall(s) failed: cpu 4
[   44.575695]   call  1: op=1 result=-1 caller=xen_extend_mmu_update+0x4e/0xd0
pars=ffff888267e25ad0 1 0 7ff0 args=9ba37a678 80000000feec2073

The pte value of the mmu_update call is 80000000feec2073, which is rejected by
the hypervisor with -EPERM.
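
For reference, the rejected PTE value can be decoded mechanically. Below is a
small standalone C sketch (not part of the original report) that uses only the
value quoted above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* PTE value taken from the failed multicall message above. */
        uint64_t pte = 0x80000000feec2073ULL;

        uint64_t mfn   = (pte & 0x000ffffffffff000ULL) >> 12; /* machine frame number */
        uint64_t flags = pte & 0xfffULL;                       /* low attribute bits   */
        int      nx    = (int)(pte >> 63);                     /* no-execute bit       */

        /*
         * Prints mfn=0xfeec2 flags=0x73 nx=1: a present, writable,
         * cache-disabled (PCD), accessed and dirty mapping of physical page
         * 0xfeec2000, which falls inside the 0xfee00000-0xfeefffff
         * MSI/interrupt address window that Xen refuses to let a PV domain
         * map -- hence the -EPERM.
         */
        printf("mfn=%#llx flags=%#llx nx=%d\n",
               (unsigned long long)mfn, (unsigned long long)flags, nx);
        return 0;
}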

Before diving deep into the UCSI internals, is it possible that the hypervisor
needs some update (IOW: could it be that the mapping attempt should rather be
honored, as there might be an I/O resource at this position which dom0 needs
to access in order to use the related hardware)?

Adding to Andrew's reply: Is there any BAR in the system covering that address?
Or is it rather ACPI "making up" that address (which would remind me of IO-APIC
space being accessed by certain incarnations of ACPI, resulting in similar
issues)?

So you think ACPI is using some kind of backdoor to access the local
APIC registers?

No, I'm wondering if they're trying to access *something*. As it stands we
don't even know what kind of access is intended; all we know is that they're
trying to map that page (and maybe adjacent ones).

From the backtrace, it looks like the immediate case is just trying to read a 4-byte version:

>>>> [   44.575541]  ucsi_acpi_dsm+0x53/0x80
>>>> [   44.575546]  ucsi_acpi_read+0x2e/0x60
>>>> [   44.575550]  ucsi_register+0x24/0xa0
>>>> [   44.575555]  ucsi_acpi_probe+0x162/0x1e3

int ucsi_register(struct ucsi *ucsi)
{
        int ret;

        ret = ucsi->ops->read(ucsi, UCSI_VERSION, &ucsi->version,
                              sizeof(ucsi->version));

->read being ucsi_acpi_read()

However, the driver also appears to write to adjacent addresses.
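
For context, ucsi_acpi_read() has roughly the following shape (paraphrased from
memory of drivers/usb/typec/ucsi/ucsi_acpi.c and simplified, so treat it as
illustrative rather than authoritative). The _DSM evaluation is what ends up in
the AML interpreter and, via a SystemMemory operation region, in
acpi_os_map_iomem() in the trace above; the data itself is then copied out of
the driver's already-mapped mailbox:

static int ucsi_acpi_read(struct ucsi *ucsi, unsigned int offset,
                          void *val, size_t val_len)
{
        struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
        int ret;

        /* Ask the platform (via _DSM) to refresh the mailbox contents. */
        ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
        if (ret)
                return ret;

        /* Copy the requested register out of the memory-mapped mailbox. */
        memcpy(val, ua->base + offset, val_len);
        return 0;
}

The async write op is the mirror image: a memcpy into the mailbox followed by
a UCSI_DSM_FUNC_WRITE evaluation.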

Regards,
Jason



 

