[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH 0/4] x86/PVH: Dom0 building adjustments


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 1 Sep 2021 15:56:40 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 01 Sep 2021 13:57:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Aug 31, 2021 at 10:53:59AM +0200, Jan Beulich wrote:
> On 30.08.2021 15:01, Jan Beulich wrote:
> > The code building PVH Dom0 made use of sequences of P2M changes
> > which are disallowed as of XSA-378. First of all population of the
> > first Mb of memory needs to be redone. Then, largely as a
> > workaround, checking introduced by XSA-378 needs to be slightly
> > relaxed.
> > 
> > Note that with these adjustments I get Dom0 to start booting on my
> > development system, but the Dom0 kernel then gets stuck. Since it
> > was the first time for me to try PVH Dom0 in this context (see
> > below for why I was hesitant), I cannot tell yet whether this is
> > due further fallout from the XSA, or some further unrelated
> > problem.

If you have some time, could you check without the XSA applied? I have
to admit I haven't been testing staging, so it's possible some
breakage has slipped in (though osstest seemed fine with it).

> > Dom0's BSP is in VPF_blocked state while all APs are
> > still in VPF_down. The 'd' debug key, unhelpfully, doesn't produce
> > any output, so it's non-trivial to check whether (like PV likes to
> > do) Dom0 has panic()ed without leaving any (visible) output.

Not sure it would help much, but could you post the Xen+Linux boot
output?

Do you have iommu debug/verbose enabled to catch iommu faults?
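For reference, a boot entry fragment with IOMMU fault logging enabled might look like the below (the `iommu=` sub-options are per Xen's command-line documentation; the paths and remaining options are placeholders, not taken from Jan's setup):

```shell
# Illustrative GRUB2 entry fragment only. "iommu=debug" implies "verbose"
# and makes the hypervisor log IOMMU faults, which would then show up in
# the serial log / `xl dmesg` output.
multiboot2 /boot/xen.gz dom0=pvh iommu=debug console=com1 com1=115200,8n1
module2 /boot/vmlinuz root=/dev/sda1 console=hvc0
```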

> Correction: I did mean '0' here, producing merely
> 
> (XEN) '0' pressed -> dumping Dom0's registers
> (XEN) *** Dumping Dom0 vcpu#0 state: ***
> (XEN) *** Dumping Dom0 vcpu#1 state: ***
> (XEN) *** Dumping Dom0 vcpu#2 state: ***
> (XEN) *** Dumping Dom0 vcpu#3 state: ***
> 
> 'd' output supports the "system is idle" that was also visible from
> 'q' output.

Can you dump the state of the VMCS and see where the IP points to in
Linux?
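Since Dom0 appears stuck, the toolstack route (`xl debug-keys v` from within Dom0) presumably isn't usable, but the same debug key can be injected from the serial console; roughly (assuming the default `conswitch` setting):

```shell
# Procedure sketch, not a runnable script. On the serial console attached
# to Xen:
#   Ctrl-a Ctrl-a Ctrl-a    # switch console input to the hypervisor
#   v                       # dump the VT-x VMCSs, including each vCPU's RIP
# The guest RIP from the dump can then be looked up in Linux's System.map
# (or resolved with addr2line against vmlinux) to see where the kernel is
# stuck.
```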

Thanks, Roger.
