
Re: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 29 Oct 2021 09:33:53 +0200
  • Cc: Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 29 Oct 2021 07:34:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 28, 2021 at 05:55:25PM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 28.10.21 19:03, Roger Pau Monné wrote:
> > On Thu, Oct 28, 2021 at 02:23:34PM +0000, Oleksandr Andrushchenko wrote:
> >>
> >> On 28.10.21 16:36, Roger Pau Monné wrote:
> >>> On Thu, Oct 28, 2021 at 12:09:23PM +0000, Oleksandr Andrushchenko wrote:
> >>>> Hi, Julien!
> >>>>
> >>>> On 27.10.21 20:35, Julien Grall wrote:
> >>>>> Hi Oleksandr,
> >>>>>
> >>>>> On 27/10/2021 09:25, Oleksandr Andrushchenko wrote:
> >>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >>>>>>
> >>>>>> While in the vPCI MMIO trap handlers for the guest PCI host bridge,
> >>>>>> it is not enough for the SBDF translation to simply call
> >>>>>> VPCI_ECAM_BDF(info->gpa), as the base address may not be aligned in
> >>>>>> a way that makes the translation always work. If the address is not
> >>>>>> adjusted with respect to the base address, the handler may fail to
> >>>>>> convert the SBDF properly and crashes:
> >>>>>>
> >>>>>> (XEN) vpci_mmio_read 0000:65:1a.0 reg 8bc gpa e65d08bc
> >>>>> I can't find a printk() that may output this message. Where does
> >>>>> this come from?
> >>>> That was a debug print. I shouldn't have used it in the patch
> >>>> description; it should probably have gone after "---" to better
> >>>> explain what's happening.
> >>>>> Anyway, IIUC the guest physical address is 0xe65d08bc which, if I am 
> >>>>> not mistaken, doesn't belong to the range advertised for 
> >>>>> GUEST_VPCI_ECAM.
> >>>> This is from the dom0 I am working on now.
> >>>>> IMHO, the stack trace should come from upstream Xen, or needs some
> >>>>> information to explain how this was reproduced.
> >>>>>
> >>>>>> (XEN) Data Abort Trap. Syndrome=0x6
> >>>>>> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 
> >>>>>> 0x00000000481d5000
> >>>>> I can understand that if we don't subtract GUEST_VPCI_ECAM, we would
> >>>>> (in theory) not get the correct BDF. But... I don't understand how
> >>>>> this would result in a data abort in the hypervisor.
> >>>>>
> >>>>> In fact, I think the vPCI code should be resilient enough to not crash 
> >>>>> if we pass the wrong BDF.
> >>>> Well, there is no (?) easy way to validate an SBDF. And this could
> >>>> be a problem if we have a misbehaving guest which may force Xen to
> >>>> access memory beyond that of the PCI host bridge.
> >>> How could that be? The ECAM region exposed to the guest should be
> >>> the same as the physical one for dom0?
> >> Ok, I have a Designware PCI host bridge which has 2 ECAM regions (I am
> >> starting to implement the driver for it, so I can be wrong here):
> >> - Root Complex ECAM area ("dbi"), something like 0x3000 bytes long
> >> - "Client" ECAM area ("config")
> >> So from Dom0 POV we have 2 ECAM regions and for the guest
> >> we always emulate a single big region:
> > You need support for multiple ECAM regions. That's how we do it on x86
> > PVH dom0. See register_vpci_mmcfg_handler and related machinery.
> Is it common for a PCI host bridge to have multiple ECAM regions?
> Currently on Arm we were about to support "pci-host-ecam-generic" [1],
> e.g. generic ECAM host bridge which normally (?) has a single ECAM
> region [2]. But the host bridge I want to support has multiple, so
> strictly speaking it is not the one that we implement.

It's possible on x86 to have multiple ECAM regions, whether that means
multiple host bridges, or host bridges having multiple ECAM regions is
unknown to me. It's all reported in the MCFG ACPI table (see PCI
Firmware document for the detailed description of MCFG) using the
"Configuration Space Base Address Allocation Structure", and there can
be multiple of those structures.

> Arm folks, do we want this generalization at this moment, to align with
> x86 in this respect?
> 
> We can live with the current approach and when I have my driver implemented
> I can send patches to make that generalization.
> >
> >> /*
> >>    * 256 MB is reserved for VPCI configuration space based on calculation
> >>    * 256 buses x 32 devices x 8 functions x 4 KB = 256 MB
> >>    */
> >> #define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
> >> #define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
> >>
> >> So, we have the base address and size of the emulated ECAM space
> >> not connected to the real host bridge
> >>> And for domUs you really need to fix vpci_{read,write} to not pass
> >>> through accesses that are not explicitly handled.
> >> Do you mean that we need to validate SBDFs there?
> >> This can be tricky if we have a use-case where a PCI device being
> >> passed through is not placed at 0000:00:0.0, but is requested to be
> >> at, for example, 0000:0d:0.0. So, we need to go over the list of
> >> virtual devices and see if the SBDF the guest is trying to access is
> >> a valid SBDF.
> >> Is this what you mean?
> > No, you need to prevent accesses to registers not explicitly handled
> > by vpci. Ie: do not forward unhandled accesses to
> > vpci_{read,write}_hw.
> I see, so those which have no handlers are not passed to the hardware.
> I need to see how to do that

Indeed. Without fixing that, passthrough to domUs is completely unsafe,
as you allow domUs full access to registers not explicitly handled by
the current vPCI code.

Regards, Roger.



 

