

[Xen-devel] A race condition in xenlinux exit_mmap

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <hanzhu@xxxxxxxxxxx>
Subject: [Xen-devel] A race condition in xenlinux exit_mmap
From: "Li, Xin B" <xin.b.li@xxxxxxxxx>
Date: Tue, 1 Aug 2006 17:21:35 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Our QA team reported an issue, "Destroying VMX with 4G memory may
make xend hang on IA-32e", where xenlinux complains:

Eeek! page_mapcount(page) went negative! (-1)
  page->flags = 14
  page->count = 0
  page->mapping = 0000000000000000

This bug is caused by a race condition in xenlinux exit_mmap:

void exit_mmap(struct mm_struct *mm)
{
        struct mmu_gather *tlb;
        struct vm_area_struct *vma = mm->mmap;
        unsigned long nr_accounted = 0;
        unsigned long end;

#ifdef arch_exit_mmap
        arch_exit_mmap(mm);
#endif

        tlb = tlb_gather_mmu(mm, 1);
        /* Don't update_hiwater_rss(mm) here, do_exit already did */
        /* Use -1 here to ensure all VMAs in the mm are unmapped */
        end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
        tlb_finish_mmu(tlb, 0, end);

Here, arch_exit_mmap will unpin the page table of qemu-dm and put the
pages residing in that page table. As a result, the pages mapped by
xc_map_foreign_range are returned to the Xen heap. If these pages are
then allocated by DOM0 before unmap_vmas is executed, the bug jumps out
and bites us, since they fail the sanity check in zap_pte_range and
trigger the "page_mapcount went negative" complaint shown above.
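
For reference, this is roughly how qemu-dm ends up with such foreign
mappings in its address space. A minimal user-space sketch, assuming the
libxc interface of this time; the domid and mfn values are placeholders:

#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Illustration only: map one page of guest memory into this process.
 * domid and mfn are placeholder values; a real caller such as qemu-dm
 * maps the frames of the domain it serves. */
static void *map_one_guest_page(uint32_t domid, unsigned long mfn)
{
        int xc_handle = xc_interface_open();
        void *va;

        if (xc_handle < 0)
                return NULL;

        /* The returned VA is a foreign mapping in this process's mm;
         * it is exactly what exit_mmap() later has to tear down. */
        va = xc_map_foreign_range(xc_handle, domid, 4096,
                                  PROT_READ | PROT_WRITE, mfn);

        xc_interface_close(xc_handle);
        return va;
}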

Two possible solutions are:
1) Call arch_exit_mmap after unmap_vmas (see the sketch below).
2) Unmap the foreign-mapped pages before calling arch_exit_mmap, and then
do the normal cleanup.
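
A minimal sketch of option 1), against the exit_mmap quoted above; the
rest of the function is elided, and whether delaying the unpin this way
is otherwise safe is exactly what would need review:

void exit_mmap(struct mm_struct *mm)
{
        struct mmu_gather *tlb;
        struct vm_area_struct *vma = mm->mmap;
        unsigned long nr_accounted = 0;
        unsigned long end;

        tlb = tlb_gather_mmu(mm, 1);
        /* Don't update_hiwater_rss(mm) here, do_exit already did */
        /* Use -1 here to ensure all VMAs in the mm are unmapped */
        end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);

#ifdef arch_exit_mmap
        /* Moved after unmap_vmas: the foreign mappings have already
         * been torn down, so unpinning the page table here no longer
         * races with DOM0 reusing those pages. */
        arch_exit_mmap(mm);
#endif

        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
        tlb_finish_mmu(tlb, 0, end);

        /* ... rest of exit_mmap unchanged ... */
}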

Any comments?
