
Re: [Xen-devel] [PATCH 13/25] argo: implement the register op



Hi Christopher,

On 04/12/2018 09:08, Christopher Clark wrote:
On Sun, Dec 2, 2018 at 12:11 PM Julien Grall <Julien.Grall@xxxxxxx> wrote:



On 01/12/2018 01:32, Christopher Clark wrote:
diff --git a/xen/include/public/argo.h b/xen/include/public/argo.h
index 20dabc0..5ad8e2b 100644
--- a/xen/include/public/argo.h
+++ b/xen/include/public/argo.h
@@ -21,6 +21,20 @@

   #include "xen.h"

+#define ARGO_RING_MAGIC      0xbd67e163e7777f2fULL
+
+#define ARGO_DOMID_ANY           DOMID_INVALID
+
+/*
+ * The maximum size of an Argo ring is defined to be: 16MB
+ *  -- which is 0x1000000 or 16777216 bytes.
+ * A byte index into the ring is at most 24 bits.
+ */
+#define ARGO_MAX_RING_SIZE  (16777216ULL)
+
+/* pfn type: 64-bit on all architectures to aid avoiding a compat ABI */
+typedef uint64_t argo_pfn_t;

As you always use 64-bit, can we just use an address? This would make
the ABI agnostic to the hypervisor page granularity.

Thanks for reviewing this series.

I'm not sure yet that switching to using addresses instead would be
for the best, so I have been working through some reasoning about your
suggestion. This interface is for the guest to identify to the
hypervisor the list of frames of memory to use as the ring, and the
purpose of a frame number is to uniquely identify a frame. Frame
numbers, as opposed to addresses, are going to remain the same across
all processors, independent of the page tables that happen to
currently be in use.

Sorry I wasn't clear enough about the address. By address I meant guest physical address (and not guest virtual address).

A guest virtual address would indeed be a pretty bad idea, as you can't
promise the address will stay mapped forever. As a matter of fact, we already see some issues because of (K)PTI.


Where possible, translation should be performed by the guest rather
than the hypervisor, minimizing the hypervisor logic (good for several
reasons) - so it would be better to avoid adding the
address-to-page-number walk and granularity handling in the hypervisor
here. In this case, the guest has the incentive to do that work, given
that it wants to register the ring.
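(For illustration, the guest-side translation I have in mind amounts to
something like the sketch below -- it assumes 4KB guest pages, and
virt_to_gphys() and the page-size constants are hypothetical stand-ins,
not code from this series:

    #include <stddef.h>
    #include <stdint.h>

    typedef uint64_t argo_pfn_t;

    /* Hypothetical helper: guest physical address backing a kernel VA. */
    extern uint64_t virt_to_gphys(const void *va);

    #define GUEST_PAGE_SHIFT 12
    #define GUEST_PAGE_SIZE  (1UL << GUEST_PAGE_SHIFT)

    /* Fill the pfn array for a virtually contiguous ring of 'len' bytes. */
    static void fill_ring_pfns(const void *ring, size_t len, argo_pfn_t *pfns)
    {
        size_t i;

        for ( i = 0; i < len / GUEST_PAGE_SIZE; i++ )
            pfns[i] = virt_to_gphys((const char *)ring +
                                    i * GUEST_PAGE_SIZE) >> GUEST_PAGE_SHIFT;
    }

The hypervisor then only has to validate and map the frames, with no
address-walking logic of its own.)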

(Slightly out of scope, but hopefully not for long: We have a
near-term interest in using argo to communicate between VMs at
different levels of nesting in L0/L1 nested hypervisors, and I suspect
that frame number translation will end up being easier to handle
across L0/L1 than translation of guest addresses in a VM running at
the other level.)

Could you give a specific scenario you have in mind that is prompting a concern?

Arm processors may support multiple page granularities (4KB, 16KB, 64KB), and software is allowed to use a different granularity at each level. This means that the hypervisor could use 4KB pages while the guest kernel uses 64KB pages (and vice versa). Some distros have chosen to support only one page granularity (e.g. 64KB for RHEL, 4KB for Debian...).

At the moment the hypercall interface is based on the hypervisor's page granularity. Because Xen has always supported 4KB page granularity, this assumption was also hardcoded in the kernel.

What prevents us from getting 64KB page support in Xen (and therefore support for 52-bit addresses) is the hypercall ABI. If you upgraded Xen to 64KB pages, the hypercall interface would de facto use 64KB frames, which would break any current guest. It is also not possible to keep 4KB frames everywhere, because Xen could then only map in 64KB chunks, so it may map a bit too much of another guest's memory.
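To make the over-mapping concrete, with made-up numbers (not taken from
this thread):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A guest using 4KB pages registers a single 4KB frame, but a
         * hypothetical 64KB-granularity Xen can only map the whole
         * 64KB frame containing it. */
        uint64_t guest_frame = 0x40123ULL;   /* 4KB frame sent by the guest */
        uint64_t gpa = guest_frame << 12;    /* 0x40123000                  */
        uint64_t xen_frame = gpa >> 16;      /* 0x4012 at 64KB granularity  */
        uint64_t map_base = xen_frame << 16; /* 0x40120000                  */

        /* Xen maps map_base..map_base+0xffff: 60KB beyond the 4KB granted. */
        printf("maps %#llx-%#llx for a 4KB grant at %#llx\n",
               (unsigned long long)map_base,
               (unsigned long long)(map_base + 0xffff),
               (unsigned long long)gpa);
        return 0;
    }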

This makes me think that frame numbers are probably not the best fit in this situation. Instead, a pair of guest physical address and size would be more suitable.
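(Purely to illustrate the shape of such an interface -- the names below
are hypothetical, not a proposed ABI:

    #include <stdint.h>

    /* A ring region described by address and length rather than frames,
     * so the ABI carries no assumption about page granularity. */
    typedef struct argo_region {
        uint64_t base;   /* guest physical address of the region */
        uint64_t size;   /* length in bytes */
    } argo_region_t;

The hypervisor would then check base/size against its own granularity
and alignment rules, instead of guest and hypervisor having to agree on
a frame size up front.)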

The problem is much larger than this series, but I thought I would attempt to convince the community to use guest physical addresses over guest frame numbers whenever possible.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

