xen-devel

Re: [Xen-devel] Question: VMM pro

> I believe i just understood what x86_32/seg_fixup.c
> is good for, and why this works ...wu-hu.

It's a bit esoteric ;-)  It's down to an interaction between a fairly weird 
implementation detail of glibc and the mechanism Xen uses to protect itself 
from guests.

Amusingly, 32-bit PV guests running on a 64-bit hypervisor do *not* need the 
seg fixup to ensure protection: Xen lives in the top of the 64-bit address 
space, outside the region a 32-bit guest is capable of accessing anyhow.  So 
actually TLS storage can, AFAIK, work directly using negative segment 
offsets.
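
To make the glibc detail concrete, here is an illustration (my own sketch, 
not actual glibc or Xen code) of the kind of access involved:

/* Illustration only: a 32-bit, glibc-style TLS access loads a word at a
 * (typically negative) offset from the thread pointer held in %gs.  On a
 * 32-bit hypervisor Xen's truncated segment limit makes this fault and
 * x86_32/seg_fixup.c has to emulate it; on a 64-bit hypervisor it can
 * simply be allowed, since a 32-bit guest cannot reach Xen's mappings at
 * the top of the address space anyway. */
static inline unsigned long read_tls_word(long offset)
{
    unsigned long val;

    asm volatile("mov %%gs:(%1), %0" : "=r" (val) : "r" (offset));
    return val;
}

e.g. read_tls_word(-4) reads the word just below the TLS base.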

Caution: wrap wet towel around head before thinking about segmentation.

> What I do not really understand yet, how is ring compression solved on
> x86_64 pv? The limitations of page protection did not change with long
> mode, right?

Right.  Segmentation as a protection scheme does not (in general) work for 
64-bit x86 PV because segment limits are not enforced in long mode.  I.e. a 
segment can still provide a base offset for addresses (FS/GS), but cannot 
limit what memory can be addressed.

Instead, Xen has to get by just with the single bit of privilege level 
information the pagetables can hold.  Xen is permanently mapped at the top of 
the address space, and the rest of the address space *either* has just guest 
userspace mapped in (if running in guest usermode), *or* it has guest kernel 
+ guest userspace (when running in guest kernel mode).
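
To put a bit of pseudo-code on that (illustrative only; the bit positions 
are architectural, the rest is my own sketch rather than Xen's code):

/* The U/S bit (bit 2 of a pagetable entry) is the single bit of privilege
 * information mentioned above.  Xen's own mappings leave it clear
 * (supervisor-only); everything the guest may touch -- including the guest
 * kernel, which runs in ring 3 on 64-bit PV -- is mapped with it set. */
#define _PAGE_PRESENT  (1UL << 0)
#define _PAGE_USER     (1UL << 2)   /* clear = Xen-only, set = guest-visible */

static int guest_may_access(unsigned long pte)
{
    return (pte & _PAGE_PRESENT) && (pte & _PAGE_USER);
}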

To transfer from guest usermode to guest kernel mode, you have to bounce 
through Xen so that it can add the guest kernel mappings.  To transfer back, 
the guest kernel makes a hypercall which flushes the kernel mappings out 
again for security reasons.  I think Xen might set the "global bit" on the 
userspace mappings so that they don't get flushed too during this process 
(they can still be flushed with a "global flush").
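
Roughly, in pseudo-C (the names and structure here are my own sketch, not 
Xen's actual code, which does rather more bookkeeping per vCPU):

#include <stdint.h>

/* Hypothetical per-vCPU state: two pagetable bases, one containing only
 * the guest-user mappings, one containing guest-user + guest-kernel. */
struct pv_vcpu {
    uint64_t user_cr3;     /* guest userspace mappings only       */
    uint64_t kernel_cr3;   /* guest userspace + guest kernel      */
    int      in_kernel;
};

static inline void write_cr3(uint64_t cr3)
{
    asm volatile("mov %0, %%cr3" : : "r" (cr3) : "memory");
}

/* Guest user -> guest kernel: the event bounces through Xen, which
 * switches to the pagetables that include the kernel mappings. */
static void enter_guest_kernel(struct pv_vcpu *v)
{
    v->in_kernel = 1;
    write_cr3(v->kernel_cr3);
}

/* Guest kernel -> guest user: the guest kernel makes a hypercall and Xen
 * switches back, so the kernel mappings vanish again.  Reloading CR3
 * flushes non-global TLB entries; userspace PTEs marked global survive
 * unless a full "global flush" is asked for. */
static void return_to_guest_user(struct pv_vcpu *v)
{
    v->in_kernel = 0;
    write_cr3(v->user_cr3);
}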

Throughout all this, Xen is always mapped and is protected by the supervisor 
bit in the pagetables.

This adds up to a bit of extra overhead, which is not needed for 32-bit guests 
(either on 32-bit or 64-bit hosts, as far as I know).

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicyle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
