
Re: [Xen-devel] [PATCH 1 of 8] x86/mm: Fix paging_load



On Thu, Jan 26, Andres Lagar-Cavilla wrote:

> Now, afaict, the p2m_ram_paging_in state is not needed anymore. Can you
> provide feedback as to whether
> 1. remove p2m_ram_paging_in
> 2. rename p2m_ram_paging_in_start to p2m_ram_paging_in
> 
> sounds like a good plan?

In my opinion the common case is that evicted pages get populated: a
request is sent, and later a response is expected to make room in the
ring.

If p2m_mem_paging_populate allocates a page for the guest, it can let
the pager know that it did so (or that the allocation failed).
If a page is already in place, the pager can copy the gfn contents
into a buffer, put a pointer to that buffer in the response, and let
p2m_mem_paging_resume() handle both the ring accounting (as it does
now) and the copy_from_user of the page contents.
If the page allocation failed, the pager has to allocate one via
p2m_mem_paging_prep(), as an intermediate step, just as it is done
now.
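
To make that concrete, here is a rough sketch of what the extended
response and the resume-side copy could look like. The struct layout
and the function names are hypothetical, not the current mem_event
ABI, and this assumes the usual Xen-internal headers; error handling
is elided:

/* Hypothetical response layout carrying a pointer to a page-sized
 * buffer in the pager's address space.  Illustrative only. */
typedef struct paging_resp {
    uint64_t gfn;      /* gfn being paged back in */
    uint64_t buffer;   /* pager virtual address of PAGE_SIZE bytes,
                        * 0 if no contents need to be copied */
    uint32_t vcpu_id;
    uint32_t flags;
} paging_resp_t;

/* Sketch of the extra step in p2m_mem_paging_resume(): copy the page
 * contents from the pager's buffer into the page that populate
 * allocated.  This runs in the context of the pager's hypercall, so
 * the user address in rsp->buffer is valid for copy_from_user. */
static void resume_copy_contents(paging_resp_t *rsp, unsigned long mfn)
{
    void *va = map_domain_page(mfn);

    if ( rsp->buffer )
        copy_from_user(va, (void *)(long)rsp->buffer, PAGE_SIZE);
    unmap_domain_page(va);
}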

The buffer page handling in the pager is probably simple: it needs to
maintain RING_SIZE() buffers. There can't be more than that many pages
in flight, because that is also the limit on outstanding requests. In
other words, the pager does not need to wait for
p2m_mem_paging_resume() to run and pull the buffer contents.
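
A minimal sketch of such a buffer pool, assuming a fixed number of
slots (RING_SLOTS here is a made-up stand-in for what real code would
derive from RING_SIZE() on the mapped ring):

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE  4096
#define RING_SLOTS 64            /* stand-in for RING_SIZE(&ring) */

static uint8_t buffers[RING_SLOTS][PAGE_SIZE];
static int     buffer_busy[RING_SLOTS];

/* Claim a free buffer; with at most RING_SLOTS requests in flight
 * this cannot fail. */
static void *buffer_get(int *slot)
{
    for ( int i = 0; i < RING_SLOTS; i++ )
    {
        if ( !buffer_busy[i] )
        {
            buffer_busy[i] = 1;
            *slot = i;
            return buffers[i];
        }
    }
    return NULL; /* unreachable while the ring bounds the requests */
}

/* Release a buffer once its response has been consumed. */
static void buffer_put(int slot)
{
    buffer_busy[slot] = 0;
}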


If the "populate - allocate - put_request - get_request - fill_buffer -
put_response - resume  get_response - copy_from_buffer - resume_vcpu"
cycle works, it would reduce the overall amount of work to be done
during paging, even if the hypercalls itself are not the bottleneck.
It all depends on the possibility to allocate a page in the various
contexts where p2m_mem_paging_populate is called.
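
As an illustration, the pager side of that cycle could look roughly
like the loop below. Every helper name (wait_for_paging_event,
request_pending, read_page_from_pagefile, notify_resume and so on) is
a placeholder I made up; the point is the ordering of the steps, not
the exact API:

/* Sketch of the pager main loop for the proposed cycle.  All helpers
 * are placeholders; paging_resp_t and buffer_get() come from the
 * sketches above. */
for ( ;; )
{
    wait_for_paging_event();                    /* event channel fires */

    while ( request_pending(&ring) )
    {
        mem_event_request_t req = get_request(&ring);
        int slot;
        void *buf = buffer_get(&slot);

        read_page_from_pagefile(req.gfn, buf);  /* fill_buffer */

        paging_resp_t rsp = {
            .gfn    = req.gfn,
            .buffer = (uint64_t)(unsigned long)buf,
        };
        put_response(&ring, &rsp);              /* put_response */
    }

    notify_resume();   /* hypervisor drains the ring, copies from the
                        * buffers and unpauses the vcpus in
                        * p2m_mem_paging_resume() */
}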

The resume part could be done via event channel, and
XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME could be removed.
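
In that case the notify_resume() placeholder above would shrink to a
plain event-channel kick from libxc instead of a domctl; something
along these lines, with the exact call written from memory and to be
treated as an assumption:

/* Kick the hypervisor; p2m_mem_paging_resume() would then run off
 * the event channel notification rather than from a domctl. */
xc_evtchn_notify(xce_handle, local_port);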

Also the question is whether freeing one p2mt is more important than
reducing the number of hypercalls executed at runtime.

Olaf
