
Re: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 28 Oct 2021 15:36:11 +0200
  • Cc: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, "iwj@xxxxxxxxxxxxxx" <iwj@xxxxxxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Thu, 28 Oct 2021 13:36:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 28, 2021 at 12:09:23PM +0000, Oleksandr Andrushchenko wrote:
> Hi, Julien!
> 
> On 27.10.21 20:35, Julien Grall wrote:
> > Hi Oleksandr,
> >
> > On 27/10/2021 09:25, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >>
> >> In the vPCI MMIO trap handlers for the guest PCI host bridge it is not
> >> enough for SBDF translation to simply call VPCI_ECAM_BDF(info->gpa):
> >> the base address may not be aligned in a way that makes the translation
> >> always work. If the address is not adjusted with respect to the base,
> >> the SBDF cannot be converted properly and Xen crashes:
> >>
> >> (XEN) vpci_mmio_read 0000:65:1a.0 reg 8bc gpa e65d08bc
> >
> > I can't find a printk() that may output this message. Where does this come 
> > from?
> That was a debug print; I shouldn't have used it in the patch description.
> It would probably fit better after the "---" marker, to better explain
> what's happening.
> >
> > Anyway, IIUC the guest physical address is 0xe65d08bc which, if I am not 
> > mistaken, doesn't belong to the range advertised for GUEST_VPCI_ECAM.
> This is from dom0 I am working on now.
> >
> > IMHO, the stack trace should come from upstream Xen, or you need to give 
> > some information to explain how this was reproduced.
> >
> >> (XEN) Data Abort Trap. Syndrome=0x6
> >> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 0x00000000481d5000
> > I can understand that if we don't subtract GUEST_VPCI_ECAM, we would (in 
> > theory) not get the correct BDF. But... I don't understand how this would 
> > result in a data abort in the hypervisor.
> >
> > In fact, I think the vPCI code should be resilient enough to not crash if 
> > we pass the wrong BDF.
> Well, there is no (?) easy way to validate an SBDF. And this could be a 
> problem if we have a misbehaving guest which may force Xen to access memory 
> beyond that of the PCI host bridge.

How could that be? The ECAM region exposed to the guest should be the
same as the physical one for dom0, shouldn't it?

And for domUs you really need to fix vpci_{read,write} to not pass
through accesses that are not explicitly handled.
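For reference, the base-adjusted decode the patch argues for can be sketched as follows. This is a minimal illustration assuming the standard PCIe ECAM address layout (reg in bits 11:0, function in 14:12, device in 19:15, bus in 27:20); `ecam_decode`, `ecam_bdf_t` and the base value are made-up names for illustration, not Xen's actual macros:

```c
#include <stdint.h>

/* Illustrative type, not Xen's pci_sbdf_t. */
typedef struct {
    uint8_t  bus, dev, fn;
    uint16_t reg;
} ecam_bdf_t;

/*
 * Decode a guest physical address within an ECAM window into BDF + register
 * offset. The key point of the patch: subtract the window base first, so the
 * decode works regardless of how the base is aligned.
 */
static ecam_bdf_t ecam_decode(uint64_t gpa, uint64_t ecam_base)
{
    uint64_t off = gpa - ecam_base;   /* adjust for the bridge base */
    ecam_bdf_t r = {
        .bus = (off >> 20) & 0xff,
        .dev = (off >> 15) & 0x1f,
        .fn  = (off >> 12) & 0x7,
        .reg = off & 0xfff,
    };
    return r;
}
```

With a hypothetical window base of 0xe0000000, the gpa 0xe65d08bc from the log above decodes to 65:1a.0, register 0x8bc, matching the debug print.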

> > When there is a data abort in Xen, you should get a stack trace from where 
> > this comes from. Can you paste it here?
> (XEN) Data Abort Trap. Syndrome=0x6
> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 0x00000000481d5000
> (XEN) 0TH[0x0] = 0x00000000481d4f7f
> (XEN) 1ST[0x1] = 0x00000000481d2f7f
> (XEN) 2ND[0x33] = 0x0000000000000000
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ----[ Xen-4.16-unstable  arm64  debug=y  Not tainted ]----
> (XEN) CPU:    0
> (XEN) PC:     000000000026d3d4 pci_generic_config_read+0x88/0x9c
> (XEN) LR:     000000000026d36c
> (XEN) SP:     000080007ff97c00
> (XEN) CPSR:   0000000060400249 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 00000000467a28bc  X1: 00000000065d08bc  X2: 00000000000008bc
> (XEN)      X3: 000000000000000c  X4: 000080007ffc6fd0  X5: 0000000000000000
> (XEN)      X6: 0000000000000014  X7: ffff800011a58000  X8: ffff0000225a0380
> (XEN)      X9: 0000000000000000 X10: 0101010101010101 X11: 0000000000000028
> (XEN)     X12: 0101010101010101 X13: 0000000000000020 X14: ffffffffffffffff
> (XEN)     X15: 0000000000000001 X16: ffff800010da6708 X17: 0000000000000020
> (XEN)     X18: 0000000000000002 X19: 0000000000000004 X20: 000080007ff97c5c
> (XEN)     X21: 00000000000008bc X22: 00000000000008bc X23: 0000000000000004
> (XEN)     X24: 0000000000000000 X25: 00000000000008bc X26: 00000000000065d0
> (XEN)     X27: 000080007ffb9010 X28: 0000000000000000  FP: 000080007ff97c00
> (XEN)
> (XEN)   VTCR_EL2: 00000000800a3558
> (XEN)  VTTBR_EL2: 00010000bffba000
> (XEN)
> (XEN)  SCTLR_EL2: 0000000030cd183d
> (XEN)    HCR_EL2: 00000000807c663f
> (XEN)  TTBR0_EL2: 00000000481d5000
> (XEN)
> (XEN)    ESR_EL2: 0000000096000006
> (XEN)  HPFAR_EL2: 0000000000e65d00
> (XEN)    FAR_EL2: 00000000467a28bc
> (XEN)
> [snip]
> (XEN) Xen call trace:
> (XEN)    [<000000000026d3d4>] pci_generic_config_read+0x88/0x9c (PC)
> (XEN)    [<000000000026d36c>] pci_generic_config_read+0x20/0x9c (LR)
> (XEN)    [<000000000026d2c8>] pci-access.c#pci_config_read+0x60/0x84
> (XEN)    [<000000000026d4a8>] pci_conf_read32+0x10/0x18
> (XEN)    [<000000000024dcf8>] vpci.c#vpci_read_hw+0x48/0xb8
> (XEN)    [<000000000024e3e4>] vpci_read+0xac/0x24c
> (XEN)    [<000000000024e934>] vpci_ecam_read+0x78/0xa8
> (XEN)    [<000000000026e368>] vpci.c#vpci_mmio_read+0x44/0x7c
> (XEN)    [<0000000000275054>] try_handle_mmio+0x1ec/0x264
> (XEN)    [<000000000027ea50>] traps.c#do_trap_stage2_abort_guest+0x18c/0x2d8
> (XEN)    [<000000000027f088>] do_trap_guest_sync+0xf0/0x618
> (XEN)    [<0000000000269c58>] entry.o#guest_sync_slowpath+0xa4/0xd4
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) CPU0: Unexpected Trap: Data Abort
> (XEN) ****************************************

Are you exposing an ECAM region to the guest bigger than the underlying
one, and is that why you get crashes (because you run past the end of
the hardware range)?

I would assume physical accesses to the ECAM area reported by the
hardware don't trigger traps?

Roger.
