
Re: [PATCH v1 0/2] x86/pci: MMCFG improvements and always use it if available


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 7 Jan 2026 21:02:24 +0100
  • Cc: Teddy Astie <teddy.astie@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Wed, 07 Jan 2026 20:02:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jan 07, 2026 at 06:07:56PM +0000, Andrew Cooper wrote:
> On 07/01/2026 5:58 pm, Teddy Astie wrote:
> > Le 07/01/2026 à 18:25, Roger Pau Monné a écrit :
> >> On Wed, Jan 07, 2026 at 04:54:55PM +0000, Teddy Astie wrote:
> >>> Currently, Xen uses the legacy method to access the configuration
> >>> space unless the access cannot be made with it, in which case Xen
> >>> falls back to MMCFG.  This is not ideal, as MMCFG is more flexible
> >>> and doesn't require a dedicated lock, so it would be preferable to
> >>> use it whenever possible.
> >>>
> >>> Teddy Astie (2):
> >>>    x86/pci: Improve pci_mmcfg_{read,write} error handling
> >>>    x86/pci: Prefer using mmcfg for accessing configuration space
> >> AFAICT Linux is using the same approach as Xen to perform PCI
> >> accesses.
> 
> I think you mean "Xen inherited its PCI code from Linux". :)
> 
> >>   Registers below 256 on segment 0 are accessed using the
> >> legacy method (I/O ports), while the extended space is accessed using
> >> MMCFG.  Do you know the reason for this?  I fear there might be
> >> legacy devices/bridges (or root complexes?) where MMCFG does not
> >> work as expected.
> >>
> > There is apparently an erratum on some K8 chipsets, according to the
> > FreeBSD code, which otherwise uses MMCFG whenever possible.
> >
> > https://github.com/freebsd/freebsd-src/blob/main/sys/amd64/pci/pci_cfgreg.c#L261-L277
> 
> Using MMCFG is *far* more efficient than IO ports, not least because we
> can avoid serialising accesses across the system.
> 
> If it really is only some K8s which were the problem then let's go the
> FreeBSD way.  Both Linux and Xen talk about AMD Fam10h issues,
> which is the fact that early AMD CPUs only allow MMCFG accesses for
> MOV-EAX instructions.

Sorry if my previous reply made it look otherwise: I'm fine with
switching to MMCFG by default, but we need the K8 workaround, and this
needs to be noted in the commit message.

I wonder if we could use alternative calls to patch in MMCFG-only
access on capable systems, thus removing the check for the legacy
fallback access on such systems.

Thanks, Roger.



 

