Re: [Xen-devel] [PATCH RFC v1 58/74] xen/pvshim: add migration support



>>> On 04.01.18 at 14:06, <wei.liu2@xxxxxxxxxx> wrote:
> +void hypervisor_resume(void)
> +{
> +    /* Reset shared info page. */
> +    map_shared_info();
> +
> +    /*
> +     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
> +     * area again. On failure to map (when it was previously mapped) panic
> +     * since it's impossible to safely shut down running guest vCPUs in order
> +     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
> +     */
> +    memset(vcpu_info_mapped, 0, sizeof(vcpu_info_mapped));

bitmap_zero() would seem the more natural function to use here.
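
E.g. (just a sketch, assuming vcpu_info_mapped is declared via
DECLARE_BITMAP() over NR_CPUS bits):

    /* Clear the whole bitmap; the intent is clearer than a raw memset(). */
    bitmap_zero(vcpu_info_mapped, NR_CPUS);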

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -466,8 +466,7 @@ void share_xen_page_with_guest(
>      spin_unlock(&d->page_alloc_lock);
>  }
>  
> -int __init unshare_xen_page_with_guest(struct page_info *page,
> -                                       struct domain *d)
> +int unshare_xen_page_with_guest(struct page_info *page, struct domain *d)
>  {

The function is - afaict - not generally safe to use in its current
shape; I've recently made an attempt at making it generic, but
iirc I wasn't able to make it fully race free. Therefore, to prevent
use in the wrong context, please retain the __init here when
not building a shim.
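
One way to express that (a sketch only - SHIM_NONINIT is a made-up name
here, and I'm assuming CONFIG_PV_SHIM is the gating Kconfig option):

    #ifdef CONFIG_PV_SHIM
    # define SHIM_NONINIT          /* kept around for runtime use by the shim */
    #else
    # define SHIM_NONINIT __init   /* init-only everywhere else */
    #endif

    int SHIM_NONINIT unshare_xen_page_with_guest(struct page_info *page,
                                                 struct domain *d)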

> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -151,10 +151,167 @@ void __init pv_shim_setup_dom(struct domain *d, l4_pgentry_t *l4start,
>      }
>  }
>  
> -void pv_shim_shutdown(uint8_t reason)
> +static void write_start_info(struct domain *d)
>  {
> -    /* XXX: handle suspend */
> -    xen_hypercall_shutdown(reason);
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    start_info_t *si = map_domain_page(_mfn(is_pv_32bit_domain(d) ? regs->edx
> +                                                                  : regs->rdx));
> +    uint64_t param;
> +
> +    BUG_ON(!si);

map_domain_page() can't fail, so this is pointless.
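
I.e. the mapping can be used directly (sketch):

    start_info_t *si =
        map_domain_page(_mfn(is_pv_32bit_domain(d) ? regs->edx : regs->rdx));
    /* No BUG_ON(!si) - map_domain_page() always returns a valid mapping. */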

> +    snprintf(si->magic, sizeof(si->magic), "xen-3.0-x86_%s",
> +             is_pv_32bit_domain(d) ? "32p" : "64");
> +    si->nr_pages = d->tot_pages;
> +    si->shared_info = virt_to_maddr(d->shared_info);
> +    si->flags = (xen_processor_pmbits << 8) & SIF_PM_MASK;

This appears to be pointless (and irritating) in the context of the
shim.

> +    BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN, &si->store_mfn));
> +    BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_EVTCHN, &param));
> +    si->store_evtchn = param;
> +    BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_CONSOLE_EVTCHN, &param));
> +    si->console.domU.evtchn = param;
> +    if ( !pv_console )
> +        BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_CONSOLE_PFN,
> +                                           &si->console.domU.mfn));
> +    else
> +        si->console.domU.mfn = virt_to_mfn(consoled_get_ring_addr());

Generally we prefer to avoid side effects in BUG_ON()s, so
perhaps better

    if ( pv_console )
        si->console.domU.mfn = virt_to_mfn(consoled_get_ring_addr());
    else if ( xen_hypercall_hvm_get_param(HVM_PARAM_CONSOLE_PFN,
                                           &si->console.domU.mfn) )
        BUG();

Considering it, I think the same (mostly cosmetic) issue exists
earlier in the series.

> +int pv_shim_shutdown(uint8_t reason)
> +{
> +    long rc;
> +
> +    if ( reason == SHUTDOWN_suspend )
> +    {

Reduce indentation of almost the entire function body by one
level by doing

    if ( reason != SHUTDOWN_suspend )
        /* Forward to L0. */
        return xen_hypercall_shutdown(reason);

?

> +        struct domain *d = current->domain;
> +        struct vcpu *v;
> +        unsigned int i;
> +        uint64_t old_store_pfn, old_console_pfn = 0, store_pfn, console_pfn;
> +        uint64_t store_evtchn, console_evtchn;
> +
> +        BUG_ON(current->vcpu_id != 0);
> +
> +        BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN,
> +                                           &old_store_pfn));
> +        if ( !pv_console )
> +            BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_CONSOLE_PFN,
> +                                               &old_console_pfn));
> +
> +        /* Pause the other vcpus before starting the migration. */
> +        for_each_vcpu(d, v)
> +            if ( v != current )
> +                vcpu_pause_by_systemcontroller(v);
> +
> +        rc = xen_hypercall_shutdown(SHUTDOWN_suspend);
> +        if ( rc )
> +        {
> +            for_each_vcpu(d, v)
> +                if ( v != current )
> +                    vcpu_unpause_by_systemcontroller(v);
> +
> +            return rc;
> +        }
> +
> +        /* Resume the shim itself first. */
> +        hypervisor_resume();
> +
> +        /*
> +         * ATM there's nothing Xen can do if the console/store pfn changes,
> +         * because Xen won't have a page_info struct for it.
> +         */
> +        BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_STORE_PFN,
> +                                           &store_pfn));
> +        BUG_ON(old_store_pfn != store_pfn);
> +        if ( !pv_console )
> +        {
> +            BUG_ON(xen_hypercall_hvm_get_param(HVM_PARAM_CONSOLE_PFN,
> +                                               &console_pfn));
> +            BUG_ON(old_console_pfn != console_pfn);
> +        }
> +
> +        /* Update domain id. */
> +        d->domain_id = get_dom0_domid();
> +
> +        /* Clean the iomem range. */
> +        BUG_ON(iomem_deny_access(d, 0, ~0UL));

Does this rangeset change across migration?

Jan
