
Re: [Xen-devel] [PATCH] x86/hvm: Drop more remains of the PVHv1 implementation

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 19 July 2017 14:28
> To: Xen-devel <xen-devel@xxxxxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap
> <George.Dunlap@xxxxxxxxxx>; Jan Beulich <JBeulich@xxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Roger Pau
> Monne <roger.pau@xxxxxxxxxx>
> Subject: [PATCH] x86/hvm: Drop more remains of the PVHv1 implementation
> 
> These functions don't need is_hvm_{vcpu,domain}() predicates.
> 
> hvmop_set_evtchn_upcall_vector() does need the predicate to prevent a PV
> caller accessing the hvm union, but swap the copy_from_guest() and
> is_hvm_domain() checks to avoid reading the hypercall parameter if we are
> not going to use it.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> ---
> CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/hvm.c | 15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 8145385..4fef616 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -506,8 +506,7 @@ void hvm_do_resume(struct vcpu *v)
>  {
>      check_wakeup_from_wait();
> 
> -    if ( is_hvm_domain(v->domain) )
> -        pt_restore_timer(v);
> +    pt_restore_timer(v);
> 
>      if ( !handle_hvm_io_completion(v) )
>          return;
> @@ -1544,8 +1543,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
>      tasklet_kill(&v->arch.hvm_vcpu.assert_evtchn_irq_tasklet);
>      hvm_funcs.vcpu_destroy(v);
> 
> -    if ( is_hvm_vcpu(v) )
> -        vlapic_destroy(v);
> +    vlapic_destroy(v);
> 
>      hvm_vcpu_cacheattr_destroy(v);
>  }
> @@ -1711,7 +1709,6 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>       * - newer Windows (like Server 2012) for HPET accesses.
>       */
>      if ( !nestedhvm_vcpu_in_guestmode(curr)
> -         && is_hvm_domain(currd)
>           && hvm_mmio_internal(gpa) )
>      {
>          if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
> @@ -3139,7 +3136,7 @@ static enum hvm_copy_result __hvm_copy(
>           * - 32-bit WinXP (& older Windows) on AMD CPUs for LAPIC accesses,
>           * - newer Windows (like Server 2012) for HPET accesses.
>           */
> -        if ( v == current && is_hvm_vcpu(v)
> +        if ( v == current
>               && !nestedhvm_vcpu_in_guestmode(v)
>               && hvm_mmio_internal(gpa) )
>              return HVMCOPY_bad_gfn_to_mfn;
> @@ -3971,12 +3968,12 @@ static int hvmop_set_evtchn_upcall_vector(
>      struct domain *d = current->domain;
>      struct vcpu *v;
> 
> -    if ( copy_from_guest(&op, uop, 1) )
> -        return -EFAULT;
> -
>      if ( !is_hvm_domain(d) )
>          return -EINVAL;
> 
> +    if ( copy_from_guest(&op, uop, 1) )
> +        return -EFAULT;
> +
>      if ( op.vector < 0x10 )
>          return -EINVAL;
> 
> --
> 2.1.4
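
The effect of the final hunk is that the cheap is_hvm_domain() check now runs
before the guest copy, so a PV caller is rejected without reading guest memory
at all. The standalone C sketch below models that ordering only; struct
upcall_op, is_hvm_domain_stub() and copy_from_guest_stub() are simplified
stand-ins for illustration, not the real Xen definitions.

/* Minimal model of the check ordering in hvmop_set_evtchn_upcall_vector()
 * after the swap.  The helpers are stand-ins, not the real Xen ones. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct upcall_op {
    unsigned int vcpu;
    unsigned char vector;
};

/* Stand-in: pretend the calling domain is PV, not HVM. */
static bool is_hvm_domain_stub(void)
{
    return false;
}

/* Stand-in: copying the hypercall argument from guest memory. */
static int copy_from_guest_stub(struct upcall_op *dst, const struct upcall_op *src)
{
    printf("copy_from_guest() reached\n");
    memcpy(dst, src, sizeof(*dst));
    return 0;
}

static int set_upcall_vector(const struct upcall_op *uop)
{
    struct upcall_op op;

    /* Reject non-HVM callers before touching the parameter at all. */
    if ( !is_hvm_domain_stub() )
        return -EINVAL;

    if ( copy_from_guest_stub(&op, uop) )
        return -EFAULT;

    if ( op.vector < 0x10 )
        return -EINVAL;

    return 0;
}

int main(void)
{
    struct upcall_op op = { .vcpu = 0, .vector = 0x20 };

    /* With the swapped order, the PV caller is rejected and the
     * "copy_from_guest() reached" message is never printed. */
    printf("rc = %d\n", set_upcall_vector(&op));
    return 0;
}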
