
Re: [Xen-devel] [PATCH v12 07/18] xen/pvh: Setup up shared_info.



On Thu, Jan 02, 2014 at 11:27:56AM +0000, David Vrabel wrote:
> On 01/01/14 04:35, Konrad Rzeszutek Wilk wrote:
> > From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> > 
> > For PVH the shared_info structure is provided in the same way
> > as for normal PV guests (see include/xen/interface/xen.h).
> > 
> > That is, during bootup we get 'xen_start_info' via the %esi register
> > in startup_xen. Later we extract the 'shared_info' from that
> > structure (in xen_setup_shared_info) and start using it.
> > 
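For reference, the extraction described above amounts to roughly the
following; this is a sketch based on a reading of xen_setup_shared_info
in arch/x86/xen/enlighten.c, not the verbatim patch. For an auto-xlat
(PVH) guest the address in xen_start_info->shared_info is a guest
pseudo-physical address, so the direct map suffices and no fixmap is
needed:

    if (xen_feature(XENFEAT_auto_translated_physmap)) {
            /* auto-xlat: a pseudo-physical address already covered
             * by the direct map, so __va() is enough. */
            HYPERVISOR_shared_info = (struct shared_info *)
                    __va(xen_start_info->shared_info);
    } else {
            /* classic PV: a machine address that first has to be
             * mapped through a fixmap slot. */
            set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
            HYPERVISOR_shared_info = (struct shared_info *)
                    fix_to_virt(FIX_PARAVIRT_BOOTMAP);
    }
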
> > The 'xen_setup_shared_info' function is already set up to work with
> > auto-xlat guests, but two functions it calls are not:
> > xen_setup_mfn_list_list and xen_setup_vcpu_info_placement.
> > This patch modifies them to work in auto-xlat mode.
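The modifications to those two functions are essentially auto-xlat
guards. A minimal sketch of the shape, assuming the usual feature test
(the actual hunks may differ):

    /* Sketch: an auto-xlat guest has no P2M list to publish to the
     * hypervisor, so the MFN-list setup can bail out early. */
    void xen_setup_mfn_list_list(void)
    {
            if (xen_feature(XENFEAT_auto_translated_physmap))
                    return;
            /* ... existing PV-only P2M list wiring ... */
    }
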
> [...]
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1147,8 +1147,9 @@ void xen_setup_vcpu_info_placement(void)
> >             xen_vcpu_setup(cpu);
> >  
> >     /* xen_vcpu_setup managed to place the vcpu_info within the
> > -      percpu area for all cpus, so make use of it */
> > -   if (have_vcpu_info_placement) {
> > +    * percpu area for all cpus, so make use of it. Note that for
> > +    * PVH we want to use native IRQ mechanism. */
> > +   if (have_vcpu_info_placement && !xen_pvh_domain()) {
> >             pv_irq_ops.save_fl = __PV_IS_CALLEE_SAVE(xen_save_fl_direct);
> >             pv_irq_ops.restore_fl = __PV_IS_CALLEE_SAVE(xen_restore_fl_direct);
> >             pv_irq_ops.irq_disable = __PV_IS_CALLEE_SAVE(xen_irq_disable_direct);
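To make the "native IRQ mechanism" remark concrete: the _direct ops
installed here operate on the per-vcpu event-channel mask rather than
the real EFLAGS.IF. The real implementations are assembly stubs; the
following C rendering is only an illustration of what the save_fl
variant effectively computes:

    /* Sketch: under PV, the "interrupt flag" is really the event
     * mask in vcpu_info.  A PVH guest takes hardware-virtualized
     * interrupts, so it keeps native cli/sti/pushf and must skip
     * these overrides. */
    static unsigned long sketch_save_fl(const struct vcpu_info *v)
    {
            /* events unmasked <=> "interrupts enabled" */
            return v->evtchn_upcall_mask ? 0 : X86_EFLAGS_IF;
    }
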
> 
> Should this be in a separate patch: "xen/pvh: use native irq ops"?

Good idea. Initially it was part of the event channel patch, but I
split it out.
> 
> David
