
Re: [Xen-devel] [PATCH v3 4/9] mm: Scrub memory from idle loop



On Fri, 2017-04-14 at 11:37 -0400, Boris Ostrovsky wrote:
> Instead of scrubbing pages during guest destruction (from
> free_heap_pages()) do this opportunistically, from the idle loop.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
> Changes in v3:
> * If memory-only nodes exist, select the closest one for scrubbing
> * Don't scrub from idle loop until we reach SYS_STATE_active.
> 
>  xen/arch/arm/domain.c   |   13 ++++--
>  xen/arch/x86/domain.c   |    3 +-
>  xen/common/page_alloc.c |   98
> +++++++++++++++++++++++++++++++++++++++++-----
>  xen/include/xen/mm.h    |    1 +
>  4 files changed, 98 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 76310ed..38d6331 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -46,13 +46,16 @@ void idle_loop(void)
>          if ( cpu_is_offline(smp_processor_id()) )
>              stop_cpu();
>  
> -        local_irq_disable();
> -        if ( cpu_is_haltable(smp_processor_id()) )
> +        if ( !scrub_free_pages() )
>          {
> -            dsb(sy);
> -            wfi();
> +            local_irq_disable();
> +            if ( cpu_is_haltable(smp_processor_id()) )
> +            {
> +                dsb(sy);
> +                wfi();
> +            }
> +            local_irq_enable();
>          }
> -        local_irq_enable();
>  
>          do_tasklet();
>          do_softirq();
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 90e2b1f..a5f62b5 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -118,7 +118,8 @@ static void idle_loop(void)
>      {
>          if ( cpu_is_offline(smp_processor_id()) )
>              play_dead();
> -        (*pm_idle)();
> +        if ( !scrub_free_pages() )
> +            (*pm_idle)();
>          do_tasklet();
>
This means that, if we got here to run a tasklet (as in, if the idle
vCPU has been forced into execution because there was a vCPU-context
tasklet wanting to run), we will (potentially) do some scrubbing first.

Is this on purpose and, in any case, ideal? vCPU-context tasklets are
not terribly common, but I still don't think this is ideal.

Not sure how to address this, though. What (the variants of) pm_idle()
uses for deciding whether or not to actually go to sleep is
cpu_is_haltable(), which checks per_cpu(tasklet_work_to_do, cpu):

/*
 * Used by idle loop to decide whether there is work to do:
 *  (1) Run softirqs; or (2) Play dead; or (3) Run tasklets.
 */
#define cpu_is_haltable(cpu)                    \
    (!softirq_pending(cpu) &&                   \
     cpu_online(cpu) &&                         \
     !per_cpu(tasklet_work_to_do, cpu))

Pulling that out/adding a call to cpu_is_haltable() here is ugly, and
probably not what we want (e.g., it's always called with IRQs
disabled, while they are enabled here).

Maybe we can test tasklet_work_to_do before calling scrub_free_pages()
(also ugly, IMO).
Or, if scrub_free_pages() is, and always will be, called only from
here, within the idle loop, test tasklet_work_to_do inside it,
similarly to what it already does for pending softirqs...

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
