This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] 3.0.2-testing: pci_set_dma_mask, pci_set_consistent_dma_

On 12 Apr 2006, at 19:53, Stephen C. Tweedie wrote:

On raw metal, when we start to get low on a specific memory zone, whether DMA24/DMA32 or a specific NUMA node, we can specifically reclaim pages from that zone, swapping them out or simply evicting cache.

If the Xen HV runs out of MEMZONE_DMADOM pages, aren't we basically out
of luck right now?  Xen guests can't see that shortage, nor does
vmscan.c have any logic to target pages for stealing based on MFN
rather than PFN.

Yes, if that happens then we could be in trouble, although mostly low memory is allocated for devices only at start of day. The main fly in the ointment is PAE pgds; we get around that right now by reserving a lowmem pool in Xen that normal allocations cannot fall back to, which sorts out most kinds of bad behaviour.

We would need the guest kernels to help out with reclamation (at least for guests not on shadow page tables). If Linux had support for hotplug memory (add and remove), I suspect we could make use of that to help out, if we were careful.

 -- Keir

Xen-devel mailing list