Re: [Xen-devel] [PATCH 4/4] x86/hyperv: L0 assisted TLB flush
On Thu, Feb 13, 2020 at 01:41:27PM +0100, Roger Pau Monné wrote:
> On Thu, Feb 13, 2020 at 12:20:33PM +0000, Wei Liu wrote:
> > On Wed, Feb 12, 2020 at 06:43:47PM +0100, Roger Pau Monné wrote:
> > > On Wed, Feb 12, 2020 at 04:09:18PM +0000, Wei Liu wrote:
> > > > +static uint64_t flush_tlb_ex(const cpumask_t *mask, const void *va,
> > > > +                             unsigned int flags)
> > > > +{
> > > > +    struct hv_tlb_flush_ex *flush = this_cpu(hv_input_page);
> > > > +    int nr_banks;
> > > > +    unsigned int max_gvas;
> > > > +    unsigned int order = flags & FLUSH_ORDER_MASK;
> > > > +    uint64_t ret;
> > > > +
> > > > +    ASSERT(flush);
> > > > +    ASSERT(!local_irq_is_enabled());
> > >
> > > Can you turn this into an if condition with ASSERT_UNREACHABLE and
> > > return ~0ULL? (as I think that signals an error).
> > >
> >
> > There is no need for that. This function will always be internal to
> > Hyper-V in the foreseeable future. If it is ever called with IRQ enabled
> > something is wrong with the code.
>
> But if it ever manages to be called violating one of those conditions,
> things will go badly, I assume?
>
> It would be better to stay on the safe side and simply return an error
> when the conditions are not met, and assert in the debug build.
OK.
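
To sketch what that would look like (assuming the usual Xen
ASSERT_UNREACHABLE() idiom, and keeping ~0ULL as the error value the
other early returns in this function already use):

    if ( !flush || local_irq_is_enabled() )
    {
        ASSERT_UNREACHABLE();
        return ~0ULL;
    }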
>
> >
> > > > +
> > > > +    if ( !(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED) )
> > > > +        return ~0ULL;
> > > > +
> > > > +    flush->address_space = 0;
> > > > +    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
> > > > +    if ( !(flags & FLUSH_TLB_GLOBAL) )
> > > > +        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
> > > > +
> > > > +    flush->hv_vp_set.valid_bank_mask = 0;
> > > > +    flush->hv_vp_set.format = HV_GENERIC_SET_SPARSE_4K;
> > > > +
> > > > +    nr_banks = cpumask_to_vpset(&flush->hv_vp_set, mask);
> > > > +    if ( nr_banks < 0 )
> > > > +        return ~0ULL;
> > > > +
> > > > +    max_gvas =
> > > > +        (PAGE_SIZE - sizeof(*flush) - nr_banks *
> > > > +         sizeof(flush->hv_vp_set.bank_contents[0])) /
> > > > +        sizeof(uint64_t); /* gva is represented as uint64_t */
> > > > +
> > > > +    /*
> > > > +     * Flush the entire address space if va is NULL or if there is not
> > > > +     * enough space for gva_list.
> > > > +     */
> > > > +    if ( !va || (ORDER_TO_BYTES(order) / HV_TLB_FLUSH_UNIT) > max_gvas )
> > > > +        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, 0,
> > > > +                                  nr_banks, virt_to_maddr(flush), 0);
> > > > +    else
> > > > +    {
> > > > +        uint64_t *gva_list = (uint64_t *)flush + sizeof(*flush) + nr_banks;
> > >
> > > Don't you need nr_banks * sizeof(flush->hv_vp_set.bank_contents) in
> > > order to calculate the position of the gva_list?
> > >
> >
> > The pointer arithmetic is done on uint64_t pointers so it already takes
> > into account sizeof(bank_contents[0]).
>
> Oh, then the sizeof(*flush) should be divided by sizeof(uint64_t)?
>
Yes. I think so. Thanks for catching this.
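
For clarity, the corrected calculation would presumably read something
like the sketch below: pointer arithmetic on a uint64_t * already scales
by sizeof(uint64_t), so only the byte-sized fixed header needs converting,
while nr_banks is already a count of uint64_t bank entries.

    uint64_t *gva_list =
        (uint64_t *)flush + sizeof(*flush) / sizeof(uint64_t) + nr_banks;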
[...]
> > > > + do_ex_hypercall:
> > > > +    ret = flush_tlb_ex(mask, va, flags);
> > > > +
> > > > + done:
> > > > +    local_irq_restore(irq_flags);
> > > > +
> > > > +    return ret & HV_HYPERCALL_RESULT_MASK;
> > >
> > > Will this return an error code that uses the same space as Xen's errno
> > > values?
> > >
> >
> > No, it won't. It returns Hyper-V's status code (0 still means success).
> >
> > I didn't think that was a big deal because non-zero values meant errors.
> > And the upper layer didn't care about the exact error values (yet).
>
> Hm, I would rather have this return an error value in the errno.h
> range. ie:
>
> return ret & HV_HYPERCALL_RESULT_MASK ? -EINVAL : 0;
>
Sure this can be done. I would use ENXIO rather than EINVAL though.
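
So the tail of the function would end up as something like this sketch,
assuming we don't need a finer-grained Hyper-V status to errno mapping
yet:

    local_irq_restore(irq_flags);

    /* Collapse Hyper-V's hypercall status into a Xen errno value. */
    return (ret & HV_HYPERCALL_RESULT_MASK) ? -ENXIO : 0;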
> Or something along these lines, but long term you will need some kind
> of mapping between HyperV and Xen error codes IMO.
>
Yes. When we need more sophisticated handling of error codes.
Wei.
> Thanks, Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel