
Re: [Xen-devel] [PATCH RFC v1 62/74] xen/pvshim: memory hotplug



>>> On 04.01.18 at 14:06, <wei.liu2@xxxxxxxxxx> wrote:
> @@ -814,6 +817,113 @@ long pv_shim_cpu_down(void *data)
>      return 0;
>  }
>  
> +static unsigned long batch_memory_op(int cmd, struct page_list_head *list)

unsigned int cmd, const struct ...

> +{
> +    struct xen_memory_reservation xmr = {
> +        .domid = DOMID_SELF,
> +    };
> +    unsigned long pfns[64];
> +    struct page_info *pg;

As long as you don't modify the list, this too can be const.
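I.e. roughly (sketch of the two adjustments, assuming the function only
ever reads the list and the page structures):

    static unsigned long batch_memory_op(unsigned int cmd,
                                         const struct page_list_head *list)
    {
        ...
        const struct page_info *pg;
        ...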

> +void pv_shim_online_memory(unsigned int nr, unsigned int order)
> +{
> +    struct page_info *page, *tmp;
> +    PAGE_LIST_HEAD(list);
> +
> +    spin_lock(&balloon_lock);
> +    page_list_for_each_safe ( page, tmp, &balloon )
> +    {
> +            if ( page->v.free.order != order )
> +                continue;

Since guests (afaik) only ever balloon order-0 pages, this is fine
for now. But it's insufficient in general - there's no point failing
a request when there's no exact match available, but a higher
order one is (which could be split).
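Something along these lines could deal with that eventually (rough,
untested sketch only, reusing the patch's v.free.order field; the
splitting would need more care in a real implementation):

        if ( page->v.free.order < order )
            continue;

        page_list_del(page, &balloon);
        /* Rough sketch: split a higher-order chunk, returning the
         * unused buddy halves to the balloon list (untested). */
        while ( page->v.free.order > order )
        {
            unsigned int o = --page->v.free.order;
            struct page_info *half = page + (1UL << o);

            half->v.free.order = o;
            page_list_add_tail(half, &balloon);
        }
        page_list_add_tail(page, &list);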

> +            page_list_del(page, &balloon);
> +            page_list_add_tail(page, &list);
> +            if ( !--nr )
> +                break;
> +    }
> +    spin_unlock(&balloon_lock);
> +
> +    if ( nr )
> +        gprintk(XENLOG_WARNING,
> +                "failed to allocate %u extents of order %u for onlining\n",
> +                nr, order);
> +
> +    nr = batch_memory_op(XENMEM_populate_physmap, &list);

You need to pass order into the function (and use it there).
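I.e. something like (sketch only):

    static unsigned long batch_memory_op(unsigned int cmd, unsigned int order,
                                         const struct page_list_head *list)
    {
        struct xen_memory_reservation xmr = {
            .domid = DOMID_SELF,
            .extent_order = order,
        };
        ...

and then

    nr = batch_memory_op(XENMEM_populate_physmap, order, &list);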

> +    while ( nr-- )
> +    {
> +        BUG_ON((page = page_list_remove_head(&list)) == NULL);
> +        free_domheap_pages(page, order);
> +    }
> +
> +    if ( !page_list_empty(&list) )
> +    {
> +        gprintk(XENLOG_WARNING,
> +                "failed to online some of the memory regions\n");
> +        spin_lock(&balloon_lock);
> +        while ( (page = page_list_remove_head(&list)) != NULL )
> +            page_list_add_tail(page, &balloon);

page_list_splice()?
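I.e. (assuming the resulting ordering of the balloon list doesn't matter
here):

        spin_lock(&balloon_lock);
        page_list_splice(&list, &balloon);
        spin_unlock(&balloon_lock);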

> @@ -993,6 +997,11 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>              return start_extent;
>          }
>  
> +#ifdef CONFIG_X86
> +        if ( pv_shim && op != XENMEM_decrease_reservation && !args.nr_done )
> +            pv_shim_online_memory(args.nr_extents, args.extent_order);
> +#endif
> +
>          switch ( op )
>          {
>          case XENMEM_increase_reservation:
> @@ -1015,6 +1024,11 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>                  __HYPERVISOR_memory_op, "lh",
>                  op | (rc << MEMOP_EXTENT_SHIFT), arg);
>  
> +#ifdef CONFIG_X86
> +        if ( pv_shim && op == XENMEM_decrease_reservation )
> +            pv_shim_offline_memory(args.nr_extents, args.extent_order);
> +#endif

Looking at both of these changes - is it being ensured anywhere that
shim containers won't boot in PoD mode?

For the latter change - is this correct when the operation has been
preempted? I think you want to offline only the delta between
start and args.nr_done.
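I.e. presumably something like (untested sketch):

        if ( pv_shim && op == XENMEM_decrease_reservation )
            pv_shim_offline_memory(args.nr_done - start_extent,
                                   args.extent_order);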

Jan

