
[Xen-devel] Ping: Ping#3: [PATCH v3] x86/HVM: don't #GP/#SS on wrapping virt->linear translations



>>> On 06.12.17 at 08:44,  wrote:
>>>> On 04.12.17 at 17:39, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 04/12/17 10:16, Jan Beulich wrote:
> >>>>> On 25.08.17 at 16:59,  wrote:
> >>>>>> On 10.08.17 at 09:19, <JBeulich@xxxxxxxx> wrote:
> >>>>>>> On 10.07.17 at 12:39, <JBeulich@xxxxxxxx> wrote:
> >>>>> Real hardware wraps silently in most cases, so we should behave the
> >>>>> same. Also split real and VM86 mode handling, as the latter really
> >>>>> ought to have limit checks applied.
> >>>>>
> >>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>>>> ---
> >>>>> v3: Restore 32-bit wrap check for AMD.
> >>>>> v2: Extend to non-64-bit modes. Reduce 64-bit check to a single
> >>>>>     is_canonical_address() invocation.
> >> Same here - I think I've been carrying this for long enough.
> > 
> > I'm not sure what to say.  I'm not comfortable taking this change
> > without a regression test in place, which also serves to demonstrate the
> > correctness of the change.
> > 
> > It's simply a matter of time, not any other objection to the change.
> 
> Well, I had sent you a tentative XTF test long ago (non-publicly
> at the time, I believe). Here it is again. I'll send a second change
> in a minute, which iirc is necessary as prereq to the one here.
> 
> Jan

Even though hopefully no-one will actually exercise the case we
currently get wrong, I'd still like to re-raise the fact that the
original bug fix in this thread has been pending for a really long
time, and that this XTF test was likewise sent almost 3 months ago.
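
Just for context, the behaviour at issue can be sketched as follows
(purely illustrative; not the actual hvm_virtual_to_linear_addr()
logic): outside of 64-bit mode the linear address is the segment base
plus the offset, truncated to 32 bits, so an access crossing the 4Gb
boundary should wrap silently - as hardware does - rather than raise
#GP/#SS, provided the limit checks pass.

    #include <stdint.h>

    /* Illustrative sketch only - protected mode, limit checks omitted. */
    static inline uint32_t virt_to_linear(uint32_t seg_base, uint32_t offset)
    {
        /* Unsigned 32-bit addition wraps modulo 2^^32, matching hardware. */
        return seg_base + offset;
    }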

Jan

> add split memory access tests
> 
> Add tests to verify that accesses crossing the upper address boundary
> are being handled similarly with and without the emulator involved.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Use FS overrides. Add 64-bit and PV tests. Remove stray '-s from
>     log messages. Add "X" (ex_record_fault_eax) constraints.
> 
> --- /dev/null
> +++ b/tests/split-access/Makefile
> @@ -0,0 +1,9 @@
> +include $(ROOT)/build/common.mk
> +
> +NAME      := split-access
> +CATEGORY  := functional
> +TEST-ENVS := $(ALL_ENVIRONMENTS)
> +
> +obj-perenv += main.o
> +
> +include $(ROOT)/build/gen.mk
> --- /dev/null
> +++ b/tests/split-access/main.c
> @@ -0,0 +1,251 @@
> +/**
> + * @file tests/split-access/main.c
> + * @ref test-split-access
> + *
> + * @page test-split-access split-access
> + *
> + * @todo Docs for test-split-access
> + *
> + * @see tests/split-access/main.c
> + */
> +#include <xtf.h>
> +
> +#include <arch/decode.h>
> +#include <arch/pagetable.h>
> +
> +const char test_title[] = "Split memory access insns";
> +
> +const void *volatile boundary = NULL;
> +
> +/* Keep the compiler from leveraging undefined behavior. */
> +#define touch(x) ({ asm ( "" : "+g" (x) ); })
> +
> +void do_mov(bool force)
> +{
> +    const unsigned long *ptr = boundary;
> +
> +    touch(ptr);
> +    for ( --ptr; ; )
> +    {
> +        unsigned long val;
> +        exinfo_t fault = 0;
> +
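> +        /*
> +         * With 'force' set, prefix the access with the Xen Force Emulation
> +         * Prefix so the hypervisor's instruction emulator handles it;
> +         * otherwise the MOV executes natively.  Any fault is recorded into
> +         * 'fault' by the ex_record_fault_eax fixup registered for it.
> +         */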
> +        asm volatile ( "test %[fep], %[fep];"
> +                       "jz 1f;"
> +                       _ASM_XEN_FEP
> +                       "1: mov %%fs:%[src],%[dst]; 2:"
> +                       _ASM_EXTABLE_HANDLER(1b, 2b, ex_record_fault_eax)
> +                       : [dst] "=r" (val), "+a" (fault)
> +                       : [src] "m" (*ptr), [fep] "q" (force),
> +                         "X" (ex_record_fault_eax) );
> +        if ( fault )
> +            xtf_warning("Got %pe for %p\n", _p(fault), ptr);
> +        else if ( val != *ptr )
> +            xtf_failure("%lx != %lx for %p\n", val, *ptr, ptr);
> +
> +        touch(ptr);
> +        if ( ptr == boundary )
> +            break;
> +
> +        ptr = (void *)(long)ptr + 1;
> +    }
> +}
> +
> +void do_lfs(bool force)
> +{
> +    const struct __packed { unsigned long off; uint16_t sel; } *ptr = boundary;
> +
> +    touch(ptr);
> +    for ( --ptr; ; )
> +    {
> +        unsigned long off;
> +        exinfo_t fault = 0;
> +
> +        asm volatile ( "test %[fep], %[fep];"
> +                       "jz 1f;"
> +                       _ASM_XEN_FEP
> +                       "1: lfs %%fs:%[src],%[dst]; 2:"
> +                       _ASM_EXTABLE_HANDLER(1b, 2b, ex_record_fault_eax)
> +                       : [dst] "=r" (off), "+a" (fault)
> +                       : [src] "m" (*ptr), [fep] "q" (force),
> +                         "X" (ex_record_fault_eax) );
> +        if ( fault )
> +            xtf_warning("Got %pe for %p\n", _p(fault), ptr);
> +        else if ( off != ptr->off )
> +            xtf_failure("%lx != %lx for %p\n", off, ptr->off, ptr);
> +
> +        touch(ptr);
> +        if ( ptr == boundary )
> +            break;
> +
> +        ptr = (void *)(long)ptr + 1;
> +    }
> +}
> +
> +#ifdef CONFIG_HVM
> +void do_lidt(bool force)
> +{
> +    const desc_ptr *ptr = boundary;
> +
> +    touch(ptr);
> +    for ( --ptr; ; )
> +    {
> +        exinfo_t fault = 0;
> +
> +        asm volatile ( "test %[fep], %[fep];"
> +                       "jz 1f;"
> +                       _ASM_XEN_FEP
> +                       "1: lidt %%fs:%[src]; 2:"
> +                       _ASM_EXTABLE_HANDLER(1b, 2b, ex_record_fault_eax)
> +                       : "+a" (fault)
> +                       : [src] "m" (*ptr), [fep] "q" (force),
> +                         "X" (ex_record_fault_eax) );
> +        if ( fault )
> +            xtf_warning("Got %pe for %p\n", _p(fault), ptr);
> +        else
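> +            /* The load of junk from 'ptr' succeeded; restore the real IDT. */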
> +            asm volatile ( "lidt %0" :: "m" (idt_ptr) );
> +
> +        touch(ptr);
> +        if ( ptr == boundary )
> +            break;
> +
> +        ptr = (void *)(long)ptr + 1;
> +    }
> +}
> +#endif
> +
> +#ifndef __x86_64__
> +void do_bound(bool force)
> +{
> +    const struct { unsigned long lo, hi; } *ptr = boundary;
> +
> +    touch(ptr);
> +    for ( --ptr; ; )
> +    {
> +        exinfo_t fault = 0;
> +
> +        asm volatile ( "test %[fep], %[fep];"
> +                       "jz 1f;"
> +                       _ASM_XEN_FEP
> +                       "1: bound %[off], %%fs:%[bnd]; 2:"
> +                       _ASM_EXTABLE_HANDLER(1b, 2b, ex_record_fault_eax)
> +                       : "+a" (fault)
> +                       : [bnd] "m" (*ptr), [off] "r" (0), [fep] "q" (force),
> +                         "X" (ex_record_fault_eax) );
> +        if ( fault )
> +            xtf_warning("Got %pe for %p\n", _p(fault), ptr);
> +
> +        touch(ptr);
> +        if ( ptr == boundary )
> +            break;
> +
> +        ptr = (void *)(long)ptr + 1;
> +    }
> +}
> +#endif
> +
> +void run_tests(bool force)
> +{
> +    printk("Testing%s MOV\n", force ? " emulated" : "");
> +    do_mov(force);
> +
> +    printk("Testing%s LFS\n", force ? " emulated" : "");
> +    do_lfs(force);
> +
> +#ifdef CONFIG_HVM
> +    printk("Testing%s LIDT\n", force ? " emulated" : "");
> +    do_lidt(force);
> +#endif
> +
> +#ifndef __x86_64__
> +    printk("Testing%s BOUND\n", force ? " emulated" : "");
> +    do_bound(force);
> +#endif
> +}
> +
> +void test_main(void)
> +{
> +#if defined(__x86_64__)
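> +    /*
> +     * A NULL 'boundary' stands for the wrap at 2^^64: the workers start at
> +     * boundary minus the operand size, i.e. just below the top of the
> +     * address space.  Test at the lower canonical boundary first, then
> +     * recurse to also cover the 2^^64 wrap.
> +     */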
> +    if ( !boundary )
> +    {
> +        asm volatile ( "push $0; pop %%fs" ::: "memory" );
> +
> +# if CONFIG_PAGING_LEVELS == 4
> +        boundary = (void *)(1L << 47);
> +# elif CONFIG_PAGING_LEVELS == 5
> +        boundary = (void *)(1L << 56);
> +# else
> +#  error Unknown 64-bit paging mode!
> +# endif
> +        printk("Testing at lower canonical boundary\n");
> +        test_main();
> +
> +        boundary = NULL;
> +        printk("Testing at upper address boundary\n");
> +    }
> +#elif defined(CONFIG_PV)
> +    /* Shrink %fs limit to below the compat limit. */
> +    static struct seg_desc32 __page_aligned_data desc[] = {
> +        [1] = {
> +            .limit0 = 0x4fff, .limit1 = 0xf, .g = 1,
> +            .p = 1, .s = 1, .type = 3, .dpl = 1,
> +        },
> +    };
> +    unsigned long frame = virt_to_mfn(desc);
> +    int rc;
> +
> +    rc = hypercall_update_va_mapping((unsigned long)desc,
> +                                     pte_from_gfn(frame,
> +                                                  _PAGE_PRESENT|_PAGE_AD),
> +                                     0);
> +    if ( !rc )
> +        rc = HYPERCALL2(int, __HYPERVISOR_set_gdt, &frame, ARRAY_SIZE(desc));
> +    if ( rc )
> +    {
> +        xtf_error("Cannot set GDT entry: %d\n", rc);
> +        return;
> +    }
> +
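> +    /*
> +     * Load %fs with the new selector and read back its byte-granular limit
> +     * via LSL; that limit is the boundary the accesses below will straddle.
> +     */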
> +    asm volatile ( "mov %1, %%fs; lsl %1, %0"
> +                   : "=r" (boundary)
> +                   : "r" (sizeof(*desc) | 1)
> +                   : "memory" );
> +#else
> +    /*
> +     * To better tell actual hardware behavior, zap the mapping for the last
> +     * (large) page below 4Gb. That'll make us see page faults on hardware
> +     * when all segmentation checks pass, rather than observing #GP/#SS due to
> +     * the emulator being invoked anyway due to accesses touching an unmapped
> +     * MMIO range. This matches x86-64 behavior at the 2^^64 boundary.
> +     */
> +# if CONFIG_PAGING_LEVELS == 2
> +    pse_l2_identmap[pse_l2_table_offset(~0UL)] = 0;
> +# elif CONFIG_PAGING_LEVELS == 3
> +    pae_l2_identmap[pae_l2_table_offset(~0UL)] = 0;
> +# elif CONFIG_PAGING_LEVELS
> +#  error Unknown 32-bit paging mode!
> +# endif
> +
> +    invlpg((void *)~0UL);
> +    asm volatile ( "push %%ds; pop %%fs" ::: "memory" );
> +#endif
> +
> +    run_tests(false);
> +
> +    if ( !xtf_has_fep )
> +        xtf_skip("FEP support not detected - some tests will be skipped\n");
> +    else
> +        run_tests(true);
> +
> +    xtf_success(NULL);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

