
Re: [Xen-devel] Implementing mem-access for PV guests



> > At 01:02 +0000 on 25 Apr (1366851740), Aravindh Puthiyaparambil (aravindp)
> > wrote:
> > > > > I'm finally to a point where I can start looking at this more closely.
> > > > > I'm trying to wrap my head around the shadow code to figure out
> > > > > the right course of action.
> > > > >
> > > > > I'd want HVMOP_set_mem_access to work with both shadow and EPT, so
> > >
> > > Getting this to work with shadow would allow non-NPT HVM guests to
> > > have mem_access support or will this also extend to PV guests?
> >
> > It could extend to PV guests as long as you're willing to run them on
> > shadow pagetables (i.e. with a significant performance hit).
> 
> Given that 64-bit PV already takes a performance hit from syscalls, I would
> rather not take the added hit with shadow PTs.
> 
> > > I am interested in getting this to work for PV guests so I was
> > > wondering how much extra work that would be. I can definitely help
> > > out with this effort.
> >
> > I think you'd have to take a good look at the hypercall interface --
> > PV guests have more ways of causing the hypervisor to read and write
> > memory for them (e.g. the MMU ops) which wouldn't be intercepted by
> > shadow PTs.
> > It certainly ought to be possible, though.
> 
> OK, I will take a look at the hypercall interface. Does any shadowing come
> into the picture with a PV guest when using a modern pv_ops kernel?

Tim,

As I am looking at the code, I see that most of the mem_event / mem_access
naming is HVM-specific. Given that I am enabling it for PV, shouldn't the names
be changed to something more generic?

On the tools side, I was thinking of renaming:
HVMMEM_access* to XENMEM_*
HVMOP_*_access to XENMEM_*_access
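
Roughly, I am picturing the public interface ending up something like this
(just a strawman -- the names, values and header location are all up for
discussion, and the old HVMMEM_access_* definitions could stay as aliases so
existing callers keep compiling):

/* Strawman for xen/include/public/memory.h -- not an existing interface. */
typedef enum {
    XENMEM_access_n,        /* No access */
    XENMEM_access_r,
    XENMEM_access_w,
    XENMEM_access_rw,
    XENMEM_access_x,
    XENMEM_access_rx,
    XENMEM_access_wx,
    XENMEM_access_rwx,
    XENMEM_access_rx2rw,    /* rx, promoted to rw on first write */
    XENMEM_access_n2rwx,    /* no access, promoted to rwx on first fault */
    XENMEM_access_default   /* Use the domain's default access type */
} xenmem_access_t;

/* Sub-ops, mirroring HVMOP_set_mem_access / HVMOP_get_mem_access. */
#define XENMEM_access_op_set_access  0
#define XENMEM_access_op_get_access  1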

Create xc_domain_*_access() or xc_*_access() and make them wrappers that call
xc_hvm_*_access() or vice-versa. Then move the functions to xc_domain.c or
xc_mem_access.c. This way I am hoping the existing APIs will still work.
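
On the libxc side I have in mind something like the sketch below (the generic
calls just forward to the existing HVM ones for now; the xc_hvm_*_access()
signatures are from memory, so treat the details as illustrative):

/* Sketch for tools/libxc/xc_mem_access.c -- not final. */
#include "xc_private.h"

int xc_domain_set_mem_access(xc_interface *xch, domid_t domain_id,
                             xenmem_access_t access,
                             uint64_t first_pfn, uint64_t nr)
{
    /* Until a XENMEM_* op exists in the hypervisor, reuse the HVM path.
     * Assumes xenmem_access_t shares its values with hvmmem_access_t. */
    return xc_hvm_set_mem_access(xch, domain_id,
                                 (hvmmem_access_t)access, first_pfn, nr);
}

int xc_domain_get_mem_access(xc_interface *xch, domid_t domain_id,
                             uint64_t pfn, xenmem_access_t *access)
{
    hvmmem_access_t hvm_access;
    int rc = xc_hvm_get_mem_access(xch, domain_id, pfn, &hvm_access);

    if ( rc == 0 )
        *access = (xenmem_access_t)hvm_access;
    return rc;
}

That keeps xc_hvm_*_access() working unchanged, and the generic entry points
can be repointed at the new hypercall once the hypervisor side is sorted out.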

Something similar would have to be done on the hypervisor side, as most of the
mem_access hypercalls fall under HVM ops and do_hvm_op(). What should I do
there? Fold everything into memory ops? Please advise.

Thanks,
Aravindh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
