
Re: PV - different behavior of pgd_offset in xen 4.6 and 4.13 for GUEST ACCESSIBLE memory area


  • To: Charles Gonçalves <charles.fg@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 8 Oct 2021 11:30:24 +0200
  • Cc: xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Fri, 08 Oct 2021 09:30:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07.10.2021 17:10, Charles Gonçalves wrote:
> During some experiments in my PhD I tried to reuse code from
> Jann Horn (https://bugs.chromium.org/p/project-zero/issues/detail?id=1184)
> that used the mapping in
> 
> ```
> 0xffff804000000000 - 0xffff807fffffffff [256GB, 2^38 bytes, PML4:256]
> Reserved for future shared info with the guest OS (GUEST ACCESSIBLE)
> ```
> to map some temporary page table data to be able to attack the system.
> 
> This used to work on Xen 4.6:
> 
> ```
> #define MY_SECOND_AREA 0xffff804000000000ULL
> printk("PML4 entry: 0x%lx\n", (unsigned
> long)pgd_val(*pgd_offset(current->mm, MY_SECOND_AREA)));
> ```
> 
> In xen 4.6 :
> 
> `[ 3632.620105] PML4 entry: 0x183d125067`
> This returns a valid PGD (pgd_present(*pgd) == true),
> 
> but the behavior differs in Xen 4.13 (despite no change in
> asm-x86/config.h).
> 
> In xen 4.13:
> 
> `[70386.796119] PML4 entry: 0x800000021a445025`
> This returns a bad PGD (pgd_bad(*pgd) == true).

There's nothing really bad in this entry afaics. The entry is r/o
and nx, yes, but that ought to be fine (i.e. I think pgd_bad() is
too rigid here, though it may not be valid to use it on hypervisor-
controlled entries in the first place).

> I could not find anything on the RELEASE-4.13.0 branch explaining
> why this happens.
> Any hint of what is happening here?
> Has Xen changed the way it handles memory from regions in range
> 0xffff804000000000 - 0xffff807fffffffff  across those versions?

Yes - see a5a5d1ee949d ("x86/mm: Further restrict permissions on some
virtual mappings"). The page table arrangement underlying this VA
range isn't part of the ABI, i.e. we're free to change it at any time.

Jan
