[Xen-devel] Re: [PATCH 4/7] bio-cgroup: Split the cgroup memory subsystem

Hi, Andrea,

> you can remove some ifdefs doing:

I don't think you need to worry about this much, since one of the following
patches removes most of these ifdefs (a rough sketch of the usual approach is
below the quoted code).

> #ifdef CONFIG_CGROUP_MEM_RES_CTLR
>       if (likely(!memcg)) {
>               rcu_read_lock();
>               mem = mem_cgroup_from_task(rcu_dereference(mm->owner));
>               /*
>                * For every charge from the cgroup, increment reference count
>                */
>               css_get(&mem->css);
>               rcu_read_unlock();
>       } else {
>               mem = memcg;
>               css_get(&memcg->css);
>       }
>       while (res_counter_charge(&mem->res, PAGE_SIZE)) {
>               if (!(gfp_mask & __GFP_WAIT))
>                       goto out;
> 
>               if (try_to_free_mem_cgroup_pages(mem, gfp_mask))
>                       continue;
> 
>               /*
>                * try_to_free_mem_cgroup_pages() might not give us a full
>                * picture of reclaim. Some pages are reclaimed and might be
>                * moved to swap cache or just unmapped from the cgroup.
>                * Check the limit again to see if the reclaim reduced the
>                * current usage of the cgroup before giving up
>                */
>               if (res_counter_check_under_limit(&mem->res))
>                       continue;
> 
>               if (!nr_retries--) {
>                       mem_cgroup_out_of_memory(mem, gfp_mask);
>                       goto out;
>               }
>       }
>       pc->mem_cgroup = mem;
> #endif /* CONFIG_CGROUP_MEM_RES_CTLR */
> _______________________________________________
> Containers mailing list
> Containers@xxxxxxxxxxxxxxxxxxxxxxxxxx
> https://lists.linux-foundation.org/mailman/listinfo/containers
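
For reference, the usual way such ifdefs move out of a .c file is to hide the
memcg-specific work behind a helper that the header stubs out when the
controller is compiled out, so callers no longer need
CONFIG_CGROUP_MEM_RES_CTLR guards. The sketch below is only illustrative and
is not taken from this patch series; the helper name memcg_charge_helper and
its signature are made up for the example.

/*
 * Illustrative only -- not the actual patch. Stubbing in the header keeps
 * the #ifdef in one place instead of in every caller.
 */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
/* real implementation would live in mm/memcontrol.c */
int memcg_charge_helper(struct page_cgroup *pc, struct mm_struct *mm,
			struct mem_cgroup *memcg, gfp_t gfp_mask);
#else
static inline int memcg_charge_helper(struct page_cgroup *pc,
				      struct mm_struct *mm,
				      struct mem_cgroup *memcg,
				      gfp_t gfp_mask)
{
	return 0;	/* controller compiled out: nothing to charge */
}
#endif /* CONFIG_CGROUP_MEM_RES_CTLR */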

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
