
Re: [Xen-devel] [PATCH v3 3/3] x86/hyperv: L0 assisted TLB flush



On Mon, Feb 17, 2020 at 07:51:42PM +0000, Michael Kelley wrote:
> From: Wei Liu <wei.liu.xen@xxxxxxxxx> On Behalf Of Wei Liu
> 
> [snip]
> 
> > diff --git a/xen/arch/x86/guest/hyperv/util.c b/xen/arch/x86/guest/hyperv/util.c
> > new file mode 100644
> > index 0000000000..0abb37b05f
> > --- /dev/null
> > +++ b/xen/arch/x86/guest/hyperv/util.c
> > @@ -0,0 +1,74 @@
> > +/******************************************************************************
> > + * arch/x86/guest/hyperv/util.c
> > + *
> > + * Hyper-V utility functions
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License as published by
> > + * the Free Software Foundation; either version 2 of the License, or
> > + * (at your option) any later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > + * GNU General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; If not, see https://www.gnu.org/licenses/.
> > + *
> > + * Copyright (c) 2020 Microsoft.
> > + */
> > +
> > +#include <xen/cpu.h>
> > +#include <xen/cpumask.h>
> > +#include <xen/errno.h>
> > +
> > +#include <asm/guest/hyperv.h>
> > +#include <asm/guest/hyperv-tlfs.h>
> > +
> > +#include "private.h"
> > +
> > +int cpumask_to_vpset(struct hv_vpset *vpset,
> > +                     const cpumask_t *mask)
> > +{
> > +    int nr = 1;
> > +    unsigned int cpu, vcpu_bank, vcpu_offset;
> > +    unsigned int max_banks = ms_hyperv.max_vp_index / 64;
> > +
> > +    /* Up to 64 banks can be represented by valid_bank_mask */
> > +    if ( max_banks > 64 )
> > +        return -E2BIG;
> > +
> > +    /* Clear all banks to avoid flushing unwanted CPUs */
> > +    for ( vcpu_bank = 0; vcpu_bank < max_banks; vcpu_bank++ )
> > +        vpset->bank_contents[vcpu_bank] = 0;
> > +
> > +    vpset->valid_bank_mask = 0;
> > +    vpset->format = HV_GENERIC_SET_SPARSE_4K;
> > +
> > +    for_each_cpu ( cpu, mask )
> > +    {
> > +        unsigned int vcpu = hv_vp_index(cpu);
> > +
> > +        vcpu_bank = vcpu / 64;
> > +        vcpu_offset = vcpu % 64;
> > +
> > +        __set_bit(vcpu_offset, &vpset->bank_contents[vcpu_bank]);
> > +        __set_bit(vcpu_bank, &vpset->valid_bank_mask);
> 
> This approach to setting the bits in the valid_bank_mask causes a bug.
> If an entire 64-bit word in the bank_contents array is zero because there
> are no CPUs in that range, the corresponding bit in valid_bank_mask still
> must be set to tell Hyper-V that the 64-bit word is present in the array
> and should be processed, even though the content is zero.  A zero bit
> in valid_bank_mask indicates that the corresponding 64-bit word in the
> array is not present, and every 64-bit word above it has been shifted down.
> That's why the similar Linux function sets valid_bank_mask the way that
> it does.

Oh, interesting. The TLFS isn't explicit about this "shift-down"
property. I had assumed it presented the example as a compact array
simply to avoid writing out dozens of zeros.

I will fix this in the next version.
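
Roughly, I expect to drop the per-iteration __set_bit() on
valid_bank_mask and instead mark every bank up to and including the
highest populated one as valid after the loop. E.g. if only banks 0 and
2 contain CPUs, bank 1's bit must still be set, or Hyper-V would treat
the word written for bank 2 as belonging to bank 1. An untested sketch
of what the tail of the function could look like:

    for_each_cpu ( cpu, mask )
    {
        unsigned int vcpu = hv_vp_index(cpu);

        vcpu_bank = vcpu / 64;
        vcpu_offset = vcpu % 64;

        /* Only the per-vCPU bit is set inside the loop now */
        __set_bit(vcpu_offset, &vpset->bank_contents[vcpu_bank]);

        if ( vcpu_bank >= nr )
            nr = vcpu_bank + 1;
    }

    /*
     * Mark all banks up to and including the highest one containing a
     * CPU as valid, even if some of them are all zeroes.  A clear bit
     * here would tell Hyper-V that the corresponding 64-bit word is
     * absent from bank_contents entirely.
     */
    vpset->valid_bank_mask = ~0ULL >> (64 - nr);

    return nr;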

Wei.

> 
> Michael
> 
> > +
> > +        if ( vcpu_bank >= nr )
> > +            nr = vcpu_bank + 1;
> > +    }
> > +
> > +    return nr;
> > +}
> > +
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * tab-width: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> > --
> > 2.20.1
> 
