
Re: [Xen-devel] [PATCH v2 11/18] argo: implement the register op



Hi Christopher,

On 21/12/2018 01:17, Christopher Clark wrote:
On Thu, Dec 20, 2018 at 3:20 AM Julien Grall <julien.grall@xxxxxxx> wrote:

Hi Christopher,

On 12/20/18 6:39 AM, Christopher Clark wrote:
Used by a domain to register a region of memory for receiving messages from
either a specified other domain, or, if specifying a wildcard, any domain.

This operation creates a mapping within Xen's private address space that
will remain resident for the lifetime of the ring. In subsequent commits,
the hypervisor will use this mapping to copy data from a sending domain into
this registered ring, making it accessible to the domain that registered the
ring to receive data.

In this code, the p2m type of the memory supplied by the guest for the ring
must be p2m_ram_rw, which is a conservative choice made to avoid having to
reason about the other p2m types in this commit.

The xen_argo_page_descr_t type is introduced as a page descriptor, conveying
both the physical address of the start of the page and its granularity. The
smallest page granularity is assumed to be 4096 bytes, and the lower twelve
bits of the type are used to indicate an enumerated page size.
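
(For illustration only: a rough sketch of the encoding described above. The
constant and helper names below are mine, not taken from the patch.)

    #include <stdint.h>

    typedef uint64_t xen_argo_page_descr_t;

    /* Low 12 bits hold an enumerated page size; the rest is the
     * (at least 4KB-aligned) physical address of the page. */
    #define ARGO_PAGE_DESCR_SIZE_MASK  0xfffULL
    #define ARGO_PAGE_DESCR_SIZE_4K    0x0ULL     /* 4096-byte page */

    static inline xen_argo_page_descr_t
    make_page_descr(uint64_t paddr, uint64_t size_code)
    {
        /* paddr is at least 4KB-aligned, so its low 12 bits are free. */
        return (paddr & ~ARGO_PAGE_DESCR_SIZE_MASK) | size_code;
    }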

I haven't seen any reply from you on my concern with this approach (see
[1]).

For convenience, I will duplicate the message here.

Hi Julien,

Thanks for the reminder.

If you let the user choose the granularity then, I believe, you will
prevent the hypervisor from doing some optimizations.

OK, let's work through this then.

For instance, if the guest supplies only 4KB pages but the hypervisor uses
64KB pages, there is no easy way to map them contiguously in the hypervisor
(e.g. using vmap).

Right. So with the matrix:

4K guest, 4K xen : fine.
4K guest, 64K xen : contiguous guest physical chunks or region required.
64K guest, 4K xen : weird? seems doable.

It is not weird: 64KB splits nicely into 16 4KB chunks. Actually, upstream Linux already has all the support needed to run with 64KB pages on current Xen.

64K guest, 64K xen : fine (with some work).

as you note, the 4K guest, 64K hypervisor case is the one that
raises the question.

That's correct. To generalize: the problem will happen whenever the guest page size is smaller than the Xen page size.
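
To make that concrete: with 64KB Xen pages, each group of sixteen
guest-supplied 4KB frames has to be physically contiguous and 64KB-aligned
before it can back a single Xen mapping. A minimal sketch of that check
(names are illustrative, not taken from the series):

    #include <stdbool.h>
    #include <stdint.h>

    #define GUEST_PAGE_SIZE      0x1000UL    /* 4KB guest granularity */
    #define XEN_PAGE_SIZE        0x10000UL   /* 64KB Xen granularity */
    #define FRAMES_PER_XEN_PAGE  (XEN_PAGE_SIZE / GUEST_PAGE_SIZE)  /* 16 */

    /* True if these guest frame addresses can back one 64KB Xen mapping. */
    static bool frames_back_one_xen_page(const uint64_t *paddr, unsigned int n)
    {
        unsigned int i;

        if ( n != FRAMES_PER_XEN_PAGE || (paddr[0] & (XEN_PAGE_SIZE - 1)) )
            return false;

        for ( i = 1; i < n; i++ )
            if ( paddr[i] != paddr[0] + i * GUEST_PAGE_SIZE )
                return false;

        return true;
    }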


Is there a particular reason to allow the ring buffer to be
non-contiguous in guest physical address space?

Contiguity hasn't been a necessary restriction up to this point, and isn't
on the platforms we're deploying on, so my preference is not to
introduce it as an additional requirement if that can be avoided.
Allowing non-contiguous rings lets us use vmalloc (rather than kmalloc)
on Linux, which is helpful.

vmalloc might be an issue on Arm if we request 64KB chunks of physical memory, although I don't know the vmalloc implementation well enough to say whether this can be addressed.


There can be high turnover in ring registration for a server with
many short-lived connections. While the rings are not necessarily
large -- the default is 128K in the current Linux driver, though
clients can change what they use -- contiguous memory regions are a
more limited resource for the kernel to manage, and avoiding
pressure on that contiguous region allocator when it isn't necessary
is preferable.

We also do not want to disincentivize a server that is seeking to
improve performance from registering larger rings -- so allowing
non-contiguous regions fits with that.

I'd have to study the Linux driver further to say whether there
are stronger additional requirements that I'm not currently aware
of, but I don't know of any at the moment.
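
(As an aside, and purely as an illustration with made-up function names
rather than the actual driver code: a Linux guest could gather the per-page
physical addresses of a vmalloc'd, physically non-contiguous ring roughly
like this.)

    #include <linux/errno.h>
    #include <linux/mm.h>
    #include <linux/vmalloc.h>
    #include <asm/io.h>

    /* Fill one 64-bit descriptor per 4KB page of a vmalloc'd ring; the low
     * 12 bits are left zero for the enumerated 4KB page-size code. */
    static int argo_fill_ring_descrs(void *ring, size_t len, u64 *descrs)
    {
        size_t i, npages = PAGE_ALIGN(len) >> PAGE_SHIFT;

        for (i = 0; i < npages; i++) {
            struct page *pg = vmalloc_to_page(ring + i * PAGE_SIZE);

            if (!pg)
                return -EFAULT;
            descrs[i] = page_to_phys(pg);
        }
        return 0;
    }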

Thank you for the detailed explanation. So I think my option 1) below would suit you best here.


Depending on the answer, there are different ways to handle that:
1) Request the guest to allocate memory in 64KB (on Arm) chunks and
pass the base address of each chunk
2) Request the guest to allocate the buffer contiguously and pass the
base address and size

I understand that #2 would avoid the need to describe a contiguous
allocation of memory as a series of chunks; but I think #1 is the
option I would select. Do you think that would be acceptable?

Option 1) works for me. I forgot to mention that the base address would need to be aligned to 64KB.
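
(Sketching that with illustrative names, the hypervisor-side check for
option 1 would amount to rejecting any chunk base that is not 64KB-aligned:)

    #include <xen/types.h>
    #include <xen/errno.h>

    #define ARGO_CHUNK_SIZE  0x10000UL   /* 64KB chunk granularity on Arm */

    /* Reject a guest-supplied chunk base that is not 64KB-aligned. */
    static int check_chunk_base(uint64_t chunk_base)
    {
        if ( chunk_base & (ARGO_CHUNK_SIZE - 1) )
            return -EINVAL;

        return 0;
    }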

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
