
Re: [PATCH] x86/pvh: fix population of the low 1MB for dom0


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 24 Jan 2022 17:07:47 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 24 Jan 2022 16:08:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 24, 2022 at 05:01:04PM +0100, Jan Beulich wrote:
> On 24.01.2022 16:23, Roger Pau Monne wrote:
> > RMRRs are set up ahead of populating the p2m and hence the ASSERT when
> > populating the low 1MB needs to be relaxed when it finds an existing
> > entry: it's either RAM or an RMRR resulting from the IOMMU setup.
> > 
> > Rework the logic a bit and introduce a local mfn variable in order to
> > assert that if the gfn is populated and not RAM it is an identity map.
> > 
> > Fixes: 6b4f6a31ac ('x86/PVH: de-duplicate mappings for first Mb of Dom0 memory')
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> albeit ...
> 
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -466,11 +466,17 @@ static int __init pvh_populate_p2m(struct domain *d)
> >      for ( i = rc = 0; i < MB1_PAGES; ++i )
> >      {
> >          p2m_type_t p2mt;
> > +        mfn_t mfn;
> >  
> > -        if ( mfn_eq(get_gfn_query(d, i, &p2mt), INVALID_MFN) )
> > +        mfn = get_gfn_query(d, i, &p2mt);
> 
> ... preferably with this becoming the initializer of the variable then
> and ...
> 
> > +        if ( mfn_eq(mfn, INVALID_MFN) )
> >              rc = set_mmio_p2m_entry(d, _gfn(i), _mfn(i), PAGE_ORDER_4K);
> > -        else
> > -            ASSERT(p2mt == p2m_ram_rw);
> > +        else if ( p2mt != p2m_ram_rw && !mfn_eq(mfn, _mfn(i)) )
> > +                /*
> > +                 * If the p2m entry is already set it must belong to a RMRR and
> > +                 * already be identity mapped, or be a RAM region.
> > +                 */
> > +                ASSERT_UNREACHABLE();
> 
> ... (not just preferably) indentation reduced by a level here. (I wonder
> why you didn't simply extend the existing ASSERT() - ASSERT_UNREACHABLE()
> is nice in certain cases, but the expression it logs is entirely unhelpful
> for disambiguating the reason without looking at the source.)

Indeed, that's better. Sorry about the indentation, not sure what my
editor has done here.

Thanks, Roger.