
Re: [Xen-devel] [PATCH 5/9] x86/pvh: Set PVH guest's mode in XEN_DOMCTL_set_address_size



>>> On 20.06.15 at 05:09, <boris.ostrovsky@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/domain_build.c
> +++ b/xen/arch/x86/domain_build.c
> @@ -141,6 +141,13 @@ static struct vcpu *__init setup_dom0_vcpu(struct domain 
> *d,
>          if ( !d->is_pinned && !dom0_affinity_relaxed )
>              cpumask_copy(v->cpu_hard_affinity, &dom0_cpus);
>          cpumask_copy(v->cpu_soft_affinity, &dom0_cpus);
> +
> +        if ( is_pvh_vcpu(v) )
> +            if ( hvm_set_mode(v, is_pv_32bit_domain(d) ? 4 : 8) )

This should be just one if().
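For illustration only, the two checks could be folded into a single if() along these lines (a sketch; the error-handling body, which the quoted hunk cuts off, is left as a placeholder):

        if ( is_pvh_vcpu(v) &&
             hvm_set_mode(v, is_pv_32bit_domain(d) ? 4 : 8) )
        {
            /* error handling as in the original patch (not quoted above) */
        }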

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2320,12 +2320,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
>      v->arch.hvm_vcpu.inject_trap.vector = -1;
>  
>      if ( is_pvh_domain(d) )
> -    {
> -        v->arch.hvm_vcpu.hcall_64bit = 1;    /* PVH 32bitfixme. */
> -        /* This is for hvm_long_mode_enabled(v). */
> -        v->arch.hvm_vcpu.guest_efer = EFER_LMA | EFER_LME;
>          return 0;
> -    }

With this removed, is there any guarantee that hvm_set_mode()
will be called for each vCPU?

Anyway, while I'll apply the previous patch as a cleanup one, I'll
defer this and the later ones until a decision between pursuing PVH
and going the "HVMlite" route has been made.

Jan

