
[Xen-devel] [BUG] Xen-ballooned memory never returned to domain after partial-free


The issue I'm seeing is that pages of previously-xenballooned memory get trapped
in the balloon on free, specifically when they are freed in batches (i.e. not
all at once). The first batch is restored to the domain properly, but subsequent
frees are not.

Truthfully, I'm not sure whether this is a bug, but the behavior I'm seeing
doesn't seem to make sense. Note that this "bug" is in the balloon driver, but
it surfaces when using the gnttab API, which uses the balloon driver under the
hood.


The issue is best illustrated with an example, shown below. The file in
question is drivers/xen/balloon.c.

Kernel version: 4.19.*; the code looks the same on master as well.
Relevant variables:

* current_pages = # of pages assigned to domain
* target_pages = # of pages we want assigned to domain
* credit = target - current

Start with current_pages/target_pages = 20 pages

1. alloc 5 pages with gnttab_alloc_pages(). current_pages = 15, credit = 5.
2. alloc 3 pages with gnttab_alloc_pages(). current_pages = 12, credit = 8.
3. some time later, free the last 3 pages with gnttab_free_pages().
4. 3 pages go back to balloon and balloon worker is scheduled since credit > 0.
    * Relevant part of balloon worker shown below:

    do {
        credit = current_credit();

        if (credit > 0) {
            if (balloon_is_inflated())
                state = increase_reservation(credit);
            else
                state = reserve_additional_memory();
        }

    } while (credit && state == BP_DONE);

5. credit > 0 and the balloon contains 3 pages, so run increase_reservation. 3
   pages are restored to domain, correctly. current_pages = 15, credit = 5.
6. at this point credit is still > 0, so we loop again.
7. this time, the balloon has 0 pages, so we call reserve_additional_memory,
   seen below. note that CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is disabled, so this
   function is very sparse.

    static enum bp_state reserve_additional_memory(void)
    {
        balloon_stats.target_pages = balloon_stats.current_pages;
        return BP_ECANCELED;
    }

8. now target = current = 15, which drops our credit down to 0.
9. at some point later we attempt to free the remaining 5 pages with
   gnttab_free_pages().
10. 5 pages go back into the balloon, but this time credit = 0, so we never
    trigger our balloon worker (it wouldn't do anything anyway).
11. since we've essentially irreversibly decreased target_pages, we'll never
    attempt to re-add those pages to our domain, and those pages are reserved
    in the balloon forever.
12. this can be verified by running "free", "cat /proc/meminfo", etc., which
    show that the total memory has indeed decreased and stays decreased until
    host reboot.

Is this desired behavior? Why would we decrease our target pages if there's no
way to restore them? I understand there is a helper function to manually reset
the target, but the caller would need to manually keep track of the starting
pages; that seems like unnecessary maintenance that the balloon should handle.

Additionally, why should any of the above code be possible if we have memory
hotplugging disabled? I'm surprised we are able to balloon any memory out from
the domain in the first place. I would have expected gnttab_alloc_pages to fail.

Please CC niko.tsirakis@xxxxxxxxx on any replies. Thank you,

