
Re: [Xen-devel] RFC on deprivileged x86 hypervisor device models



>>> On 17.07.15 at 12:09, <Ben.Catterall@xxxxxxxxxx> wrote:
> Moving between privilege levels
> --------------------------------
> The general process is to determine if we need to run a device model (or 
> similar) and then, if so, switch into deprivileged mode. The operation 
> is performed by deprivileged code which calls into the hypervisor as and 
> when needed. After the operation completes, we return to the hypervisor.
> 
> If deprivileged mode needs to make any hypervisor requests, it can issue 
> them through a syscall interface, possibly placing an operation code into 
> a register to indicate the requested operation. This would allow it to 
> pass data to and from the hypervisor.

What I didn't understand, from this as well as from the individual
models' descriptions, is whose address space the device model is to
run in. Since you're hijacking the vCPU, it sounds like you intend
Xen's own address space to be re-used, just with the code running at
CPL 3. That would potentially even allow for read-only data sharing
(so that calls back into the hypervisor would be needed only when
data needs to be updated). But perhaps I guessed wrong?

If not, then method 2 would seem quite a bit less troublesome than
method 1, yet method 3 (even if more involved in terms of the changes
needed) would perhaps yield the most elegant result.

Again if not, whose runtime environment would the device model
use? It can hardly be qemu that you intend to run that way, but
custom code would likely still require some runtime library code to
assist it. Do you mean to re-use hypervisor code for that (perhaps
again utilizing read-only [and executable] data sharing)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel