
Re: Questions about PVH memory layout


  • To: <joshua_peter@xxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 29 Jun 2020 10:57:58 +0200
  • Authentication-results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 29 Jun 2020 08:58:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Sun, Jun 28, 2020 at 08:58:14AM +0200, joshua_peter@xxxxxx wrote:
> Hello everyone,
> 
> I hope this is the right forum for these kinds of questions (the website
> states no "technical support queries"; I'm not sure if this qualifies).
> If not, sorry for the disturbance; just direct me elsewhere.
> 
> Anyway, I'm currently trying to get into how Xen works in detail, so
> for one I've been reading a lot of code, but also I dumped the P2M table
> of my PVH guest to get a feel for how things are laid out in memory. I
> mean there is the usual stuff, such as lots of RAM, and the APIC is
> mapped at 0xFEE00000 and the ACPI tables at 0xFC000000 onwards. But two
> things stuck out to me, which for the life of me I couldn't figure out
> from just reading the code. The first one is, there are a few pages at
> the end of the 32bit address space (from 0xFEFF8000 to 0xFEFFF000),
> which according to the E820 is declared simply as "reserved".

Those are the HVM special pages, which are used for various
Xen-specific things, like the shared memory ring for the PV console.
They are set up in tools/libxc/xc_dom_x86.c (see SPECIALPAGE_*).
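
For illustration, here's a rough sketch of how those pages end up just
below guest pfn 0xff000, i.e. exactly the 0xFEFF8000-0xFEFFF000 range
you dumped. The constants and names below are approximated from memory
rather than copied verbatim from xc_dom_x86.c, so treat this as a
sketch and check the SPECIALPAGE_* definitions in the source for the
authoritative layout:

/* Rough sketch (not verbatim from xc_dom_x86.c): the special pages sit
 * just below the end of the "special region" at guest pfn 0xff000, so
 * with 8 of them you get pfns 0xfeff8-0xfefff, matching the "reserved"
 * E820 range above. */
#include <stdio.h>

#define PAGE_SHIFT          12
#define END_SPECIAL_REGION  0xff000u  /* first pfn after the region */
#define NR_SPECIAL_PAGES    8

#define special_pfn(x) (END_SPECIAL_REGION - NR_SPECIAL_PAGES + (x))

int main(void)
{
    /* Illustrative names only; see the real SPECIALPAGE_* indexes
     * (xenstore, console, ioreq, ident_pt, ...) in xc_dom_x86.c. */
    const char *name[NR_SPECIAL_PAGES] = {
        "paging", "access", "sharing", "bufioreq",
        "xenstore", "ioreq", "ident_pt", "console",
    };

    for (unsigned int i = 0; i < NR_SPECIAL_PAGES; i++)
        printf("%-8s pfn 0x%05x -> gpa 0x%08lx\n",
               name[i], special_pfn(i),
               (unsigned long)special_pfn(i) << PAGE_SHIFT);

    return 0;
}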

> The other
> thing is, the first 512 pages at the beginning of the address space are
> mapped linearly, which usually leads to them being mapped as a single
> 2MB page. But there is this one page at 0x00001000 that sticks out
> completely. By that I mean (to make things more concrete), in my PVH
> guest the page at 0x00000000 maps to 0x13C200000, 0x00002000 maps to
> 0x13C202000, 0x00003000 maps to 0x13C203000, etc. But 0x00001000 maps
> to 0xB8DBD000, which seems very odd to me (at least from simply looking
> at the addresses).

Are you booting some OS on the guest before dumping the memory map?
Keep in mind guests have the ability to modify the physmap, either by
mapping Xen shared areas (like the shared info page) or just by using
ballooning in order to poke holes into it (which can be populated
later). It's either that or some kind of bug/misoptimization in
meminit_hvm (also part of tools/libxc/xc_dom_x86.c).
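
As an example of the first case, a guest kernel can relocate the shared
info page to any gpfn it likes with XENMEM_add_to_physmap. Something
like the following (a minimal sketch, assuming the Xen public headers
and a HYPERVISOR_memory_op hypercall wrapper provided by the guest
kernel, so it only makes sense inside a guest):

/* Sketch: map the shared info page at an arbitrary guest pfn.  If the
 * OS happened to pick pfn 1 for this (or ballooned that page out and
 * later repopulated it), you would see exactly this kind of odd-looking
 * entry at 0x1000 in the dumped p2m. */
#include <xen/xen.h>
#include <xen/memory.h>

static int map_shared_info_at(xen_pfn_t gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,  /* idx 0 is the shared info page */
        .idx   = 0,
        .gpfn  = gpfn,                     /* e.g. 1 -> guest physical 0x1000 */
    };

    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}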

Can you check if this 'weird' mapping at 0x1000 is also present if you
boot the guest with 'xl create -p foo.cfg'? (that will prevent the
guest from running, so that you can get the memory map before the
guest has modified it in any way).

> My initial guess was that this is some bootloader
> related stuff, but Google didn't turn up any info related to that
> memory area, and most of the x86/PC boot stuff seems to happen below
> the 0x1000 mark.

If you are booting with pygrub there's no bootloader at all, so
whatever is happening is either done by the toolstack or the OS you are
loading.

Roger.
