
Re: [RFC PATCH V2] xen/gnttab: Store frame GFN in struct page_info on Arm




On 16.09.21 09:38, Jan Beulich wrote:

Hi Jan

On 16.09.2021 00:13, Oleksandr wrote:
On 15.09.21 13:06, Jan Beulich wrote:
On 14.09.2021 22:44, Oleksandr Tyshchenko wrote:
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -98,9 +98,18 @@ struct page_info
   #define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
   #define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
- /* Count of uses of this frame as its current type. */
-#define PGT_count_width   PG_shift(2)
-#define PGT_count_mask    ((1UL<<PGT_count_width)-1)
+ /* 3-bit count of uses of this frame as its current type. */
+#define PGT_count_base    PG_shift(4)
+#define PGT_count_mask    PG_mask(7, 4)
+
+/*
+ * The GFN is stored in bits [27:0] or [59:0] if the page is used as a grant table frame.
I don't know enough Arm details to tell whether this is properly
one bit more than the maximum number of physical address bits.
Without the extra bit you wouldn't be able to tell apart a
guest specified GFN matching the value of PGT_INVALID_FRAME_GFN
from an entry which was set from INVALID_GFN.
Really good point.

1. On Arm64, p2m_ipa_bits could (theoretically) be 64 bits, which, I
assume, corresponds to the maximum guest physical address (1 << 64) - 1
= 0xFFFFFFFFFFFFFFFF.
To store that GFN we need 52 bits. But we provide a 60-bit field, which
is more than enough, I think. Practically, the maximum supported
p2m_ipa_bits is 48 bits, so the maximum supported GFN will occupy only
36 bits. Everything is ok here.
2. On Arm32, p2m_ipa_bits is 40 bits, which, I assume, corresponds to
the maximum guest physical address (1 << 40) - 1 = 0xFFFFFFFFFF. To
store that GFN we need 28 bits. If I did the calculation correctly, what
we have on Arm32 is that PGT_INVALID_FRAME_GFN == the GFN of the maximum
guest physical address, and it looks like we need an extra bit on Arm32.
Do you think we need to borrow one more bit from the count portion to
stay on the safe side?
I think so, unless there are further restrictions on the GFN range
that I'm unaware of.

ok, thank you.
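
To double-check the arithmetic above, here is a small self-contained sketch (plain user-space C, not Xen code; PAGE_SHIFT of 12 and the IPA widths of 40/48 bits come from the discussion, and the max_gfn() helper is made up for illustration):

/*
 * Standalone sanity check of the GFN-width arithmetic discussed above.
 * PAGE_SHIFT, the IPA widths and the field widths are taken from the
 * thread; nothing here is Xen code.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Highest guest frame number addressable with a p2m_ipa_bits-wide IPA. */
static uint64_t max_gfn(unsigned int p2m_ipa_bits)
{
    return (UINT64_C(1) << (p2m_ipa_bits - PAGE_SHIFT)) - 1;
}

int main(void)
{
    /* Arm32: GFN field is bits [27:0] of type_info, i.e. 28 bits. */
    const uint64_t arm32_field_max = (UINT64_C(1) << 28) - 1;
    /* Arm64: GFN field is bits [59:0], i.e. 60 bits. */
    const uint64_t arm64_field_max = (UINT64_C(1) << 60) - 1;

    printf("Arm32: max GFN %#" PRIx64 ", field max %#" PRIx64 "\n",
           max_gfn(40), arm32_field_max);
    printf("Arm64: max GFN %#" PRIx64 ", field max %#" PRIx64 "\n",
           max_gfn(48), arm64_field_max);

    /* On Arm32 the all-ones "invalid" encoding collides with a valid GFN. */
    assert(max_gfn(40) == arm32_field_max);
    /* On Arm64 (48-bit IPA in practice) there is plenty of headroom. */
    assert(max_gfn(48) < arm64_field_max);

    return 0;
}

Both assertions hold, which matches the conclusion above: Arm32 needs one extra bit to distinguish the invalid encoding, while Arm64 has ample headroom.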



For 64-bit, if you only need 52 bits, why do you make the field 60
bits wide? I'd recommend against "wasting" bits. Better keep the
count field as wide as possible.
Well, the reason is almost the same as the one I already gave for not using PG_mask() for PGT_gfn_mask. Although we waste some bits on Arm64, having the same number of count bits on Arm32 and Arm64 lets us avoid introducing an extra #ifdef into that header (if we went with the maximum possible number of count bits on each configuration, we would need a separate set of PGT_count_*/PGT_gfn_* definitions per architecture).

I was thinking that if there were indeed a lack of bits for the count, then an extra #ifdef wouldn't be an argument at all. If I am not mistaken, the code which deals with the count (where only 1 bit is used) is one and the same on both Arm32 and Arm64, so what is the point of diverging here (I mean providing more bits for the count on Arm64 simply because there is a reserve)? But if the reason I provided is weak, I will update the header to keep the count as wide as possible.
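
For context, a rough sketch of the per-architecture alternative being weighed here, i.e. making the count as wide as possible on each configuration at the price of an #ifdef (purely illustrative; the exact widths are just one plausible split and are not taken from the posted patch):

/*
 * Illustrative only: what a per-architecture split of type_info might
 * roughly look like, with the count made as wide as possible on each
 * configuration. Widths are one possible choice, not from the patch.
 */
#ifdef CONFIG_ARM_64
/* Bits [62:53] for the count, bits [52:0] for the GFN (52 + 1 spare). */
#define PGT_count_base    PG_shift(11)
#define PGT_count_mask    PG_mask(0x3FF, 11)
#define PGT_gfn_width     PG_shift(11)
#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
#else
/* Bits [30:29] for the count, bits [28:0] for the GFN (28 + 1 spare). */
#define PGT_count_base    PG_shift(3)
#define PGT_count_mask    PG_mask(3, 3)
#define PGT_gfn_width     PG_shift(3)
#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
#endif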



+ * This is only valid for xenheap pages.
+ */
+#define PGT_gfn_width     PG_shift(4)
+#define PGT_gfn_mask      ((1UL<<PGT_gfn_width)-1)
Any reason you don't use PG_mask() here? Any open-coding is prone
to people later making mistakes.
I failed to come up with an idea of how to do that without an #ifdef. As
the GFN starts at bit 0, a different first parameter would be needed for
PG_mask() on 32-bit and 64-bit systems.
I wonder whether PGC_count_mask/PGT_count_mask are open-coded on Arm/x86
for the same reason.
Hmm, that pre-existing open-coding isn't nice, but is perhaps a
good enough reason to keep what you have. (Personally I wouldn't
be afraid of adding an #ifdef here, but I realize views there may
differ.)

Jan
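
For illustration, the PG_mask()-based spelling under an #ifdef that this exchange refers to could look roughly as follows (a sketch matching the bits [27:0]/[59:0] layout of the patch; it assumes the existing PG_mask(x, idx) helper, which shifts x left by BITS_PER_LONG - idx):

/*
 * Sketch only: PGT_gfn_mask expressed via PG_mask() rather than
 * open-coded. Since the GFN field starts at bit 0, the width argument
 * differs per architecture, hence the #ifdef.
 */
#ifdef CONFIG_ARM_64
#define PGT_gfn_mask      PG_mask(0xFFFFFFFFFFFFFFF, 64)   /* bits [59:0] */
#else
#define PGT_gfn_mask      PG_mask(0xFFFFFFF, 32)           /* bits [27:0] */
#endif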

--
Regards,

Oleksandr Tyshchenko




 

