
Re: [Xen-devel] Mapping HVM guest memory from Dom0



Thanks very much to both of you for the info! After Andrew's post today I was able to write some code myself to walk the page tables and get it to work correctly, and xc_translate_foreign_address is exactly what I want as well. I had been digging through the Xen codebase for a while looking for just such a function but hadn't found it … the documentation for these APIs could definitely be better! I will update the wiki with all this information once I'm sure I understand it properly.
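
In case it helps anyone else finding this thread later, the call sequence ends up looking roughly like the sketch below (the domain ID, vCPU number and virtual address are placeholders, and error handling is trimmed), so treat it as an untested sketch rather than a reference:

    /* Minimal sketch (not production code): translate a guest-virtual
     * address in an HVM domain to a guest frame number and map that
     * frame read-only from dom0. Domain ID and address are placeholders. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    int main(void)
    {
        uint32_t domid = 1;                              /* placeholder */
        unsigned long long virt = 0xffffffff81000000ULL; /* placeholder */

        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        if ( !xch )
            return 1;

        /* Returns the guest frame number backing 'virt' on vCPU 0,
         * or 0 on failure. */
        unsigned long gfn = xc_translate_foreign_address(xch, domid, 0, virt);
        if ( gfn )
        {
            void *page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                              PROT_READ, gfn);
            if ( page )
            {
                printf("virt %#llx -> gfn %#lx, first byte %#x\n",
                       virt, gfn, *(unsigned char *)page);
                munmap(page, XC_PAGE_SIZE);
            }
        }

        xc_interface_close(xch);
        return 0;
    }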

On Thu, Oct 11, 2018 at 3:05 PM Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx> wrote:
On Wed, Oct 10, 2018 at 5:10 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 10/10/18 23:08, Spencer Michaels wrote:
> > Interesting … sorry, I had read the docs a while ago and my
> > interpretation at the time was that it didn't. I can try to get libvmi
> > working, but nonetheless I do want to figure out how to do this with the
> > Xen API itself if at all possible, so I'd appreciate any help in doing
> > so.
> >
> > I've looked through libvmi's implementation some more; it looks like
> > it does this via vmi_pagetable_lookup_cache, which in my case ends up
> > calling v2p_pae (intel.c:327). As I understand it, in that function,
> > the `dtb` parameter holds the pagetable base address, and you use that
> > to walk the page table and get the physical address corresponding to
> > the virtual addr. Based on the end of vmi_translate_kv2p
> > (accessors.c:703), it looks like the value of dtb is `vmi->kpgd`,
> > which at some point earlier is set to the value of the `cr3` register.
> > Is my understanding, roughly speaking, correct?

Correct, this works because the kernel is mapped into every process's
address space on both Windows and Linux. So you can just take the
value of CR3 at any given time and you will find the kernel memory
range mapped in. The recent KPTI mitigations might affect this though;
I haven't investigated that yet.
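
Roughly, grabbing the guest's control registers from dom0 looks like the sketch below (untested, along the lines of what xc_pagetab.c does; it assumes the Xen public headers are available):

    /* Sketch: fetch vCPU 0's saved CPU state for an HVM guest so you can
     * check CR0.PG (bit 31) and use CR3 as the pagetable root. */
    #include <xenctrl.h>
    #include <xen/hvm/save.h>

    static int get_vcpu0_cr3(xc_interface *xch, uint32_t domid, uint64_t *cr3)
    {
        struct hvm_hw_cpu ctx;

        if ( xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                              0 /* vcpu */, &ctx,
                                              sizeof(ctx)) != 0 )
            return -1;

        if ( !(ctx.cr0 & (1ULL << 31)) )   /* CR0.PG clear: no paging */
            return -1;

        *cr3 = ctx.cr3;                    /* pagetable root the kernel uses */
        return 0;
    }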

> > I tried to replicate a simpler version of libvmi's page table walking
> > code just now, and in doing so noticed that the value of CR0, as
> > reported by Xen, has bit 31 (the PG bit) clear … i.e. paging is
> > disabled entirely? (For reference, I'm using a 64-bit Ubuntu image set
> > up initially by virt-manager but now just run via `xl create`. I don't
> > know whether I should expect it to have paging enabled or not.) On the
> > other hand, if I understand correctly, what I mentioned earlier about
> > some HVM guests assuming (addr >> XC_PAGE_SHIFT) = MFN *is* assuming
> > that guest page frame number = machine frame number, which sounds like
> > that is what would be applicable in this case.
> >
> > Honestly, I'm not so sure what's happening here, and I'm not even sure
> > my description is sensible at this point — I need to look into
> > libvmi's implementation more and also figure out exactly what paging
> > mode (or lack thereof) my HVM guest is running in. If, on the other
> > hand, any of the above sounds familiar, suggestions/hints would be
> > very helpful.

If you want to look at another very compact implementation then libxc
has one too: https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/libxc/xc_pagetab.c;h=db25c20247573a3c638d7725c976433221a40141;hb=HEAD#l29
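
The core of that code is just a loop over the pagetable levels; for a 64-bit guest in long mode it boils down to roughly the following sketch (superpages and error paths omitted here, which the real implementation does handle):

    /* Simplified 4-level (long mode) walk: each table is mapped through
     * the foreign-map API by its guest frame number, and the present bit
     * is checked at every level. Superpage (PSE) entries are ignored. */
    #include <stdint.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    static uint64_t walk_long_mode(xc_interface *xch, uint32_t domid,
                                   uint64_t cr3, uint64_t virt)
    {
        uint64_t entry = cr3 & ~0xfffULL;          /* PML4 frame */
        int level;

        for ( level = 4; level > 0; level-- )
        {
            /* 9 index bits per level: 39-47, 30-38, 21-29, 12-20. */
            unsigned int index = (virt >> (12 + 9 * (level - 1))) & 0x1ff;
            uint64_t *table = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                                   PROT_READ,
                                                   entry >> XC_PAGE_SHIFT);
            if ( !table )
                return 0;
            entry = table[index];
            munmap(table, XC_PAGE_SIZE);
            if ( !(entry & 1) )                    /* not present */
                return 0;
            entry &= 0x000ffffffffff000ULL;        /* next table / data frame */
        }

        return entry | (virt & 0xfff);             /* guest-physical address */
    }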

> Here are some observations/points which may help.
>
> x86 is a bit more complicated than other architectures, and as a result,
> there is a lot of subtly incorrect terminology (including in Libvmi -
> sorry Tamas).

Well aware of it =)

>
> A virtual address (also called an effective address) is a segment:offset
> pair.  The segment is almost always implicit (%cs for instruction
> fetches, %ss for stack accesses, %ds for normal data accesses).  The
> segmentation part of address translation adds the segment base to the
> offset to produce a linear address.
>
> A "flat memory model" (used by almost all 32bit OSes and is
> unconditional in AMD64) is one where the segment base is 0, at which
> point offset == linear address.  This is where most confusion over the
> term "virtual address" arises.
>
> The paging part of address translation takes a linear address, follows
> the pagetable structure (rooted in %cr3), to produce a physical address
> as an output.  This may be guest physical or host physical depending on
> the VM configuration.
>
>
> Next, please read
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/xen/mm.h;h=054d02e6c0e68b411afba42424dc5fe7e7d69855;hb=refs/heads/staging#l8
> which describes the terminology Xen uses for various address spaces.  In
> particular, the difference between MFN and GFN.
>
> The relevant difference between PV and HVM guests, as far as you are concerned, is
> that for PV guests, GFN == MFN because the pagetables written by the
> guest are walked directly by hardware.
>
> An HVM guest has GFN != MFN, because the guest physical to host physical
> translation is provided by HAP/EPT/NPT/SLAT (whichever term you choose to
> use for hardware acceleration), or by the shadow pagetables (maintained
> and operated by Xen, for hardware lacking HAP support).
>
> The foreign map API uses a GFN, even if the underlying API describes the
> parameter name as MFN.  This is a consequence of PV guests having been
> developed long before hardware virt extensions came along, and no one
> having gone through and retroactively updated the terminology.
>
> Therefore, for both PV and HVM guests, you can take the guest cr3,
> extract the frame part of it, and ask the foreign map API to map that
> guest frame.  The underlying implementation in Xen is trivial for a PV
> guest (GFN == MFN), but slightly more complicated for HVM guests (GFN
> has to be translated into an MFN by Xen walking the HAP/shadow
> pagetables before dom0's mapping request can be completed).
>
> I hope this clears up some of the confusion.
>
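
In concrete terms, that last step might look roughly like this sketch (reusing an xch handle, domid and cr3 value obtained as in the snippets above; untested):

    /* Sketch: hand the frame part of the guest CR3 straight to the
     * foreign-map API. For an HVM guest, Xen translates this GFN to an
     * MFN internally (HAP or shadow) before completing dom0's mapping. */
    static void peek_toplevel_pagetable(xc_interface *xch, uint32_t domid,
                                        uint64_t cr3)
    {
        uint64_t *top = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                             PROT_READ,
                                             (cr3 & ~0xfffULL) >> XC_PAGE_SHIFT);
        if ( top )
        {
            /* top[0..511] are the guest's top-level pagetable entries. */
            munmap(top, XC_PAGE_SIZE);
        }
    }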

Thanks for the in-depth write-up! We might even want to record this on
a wiki page ;)

Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

