
Re: [Xen-devel] [PATCH v4 8/9] arm/mem_access: Add short-descriptor based gpt





On 06/23/2017 08:09 PM, Sergej Proskurin wrote:
Hi Julien,

Hi Sergej,


[...]


Looking at the code, I see very limited point of having the offsets
array as you don't use a loop and also use each offset in a single place.

+        ((paddr_t)(gva >> 20) & ((1ULL << (12 - n)) - 1)),

Don't you think it is more readable to have the GVA offsets at one
place. Also, the functionality is in this way similar to the one shown
in guest_walk_ld. In my opinion, it is more intuitive to have both
functions work in a similar way. I suggest keeping the array, however
using GENMASK to generate it (as you mentioned in your comments below).

I disagree here. The way to look up short-descriptor and long-descriptor tables is very different; for instance, you don't have a loop, as each level handles the offset differently. See the way you play with the first-level table offsets.
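
To illustrate the point, here is a rough stand-alone sketch (mirroring the shift-then-mask style of the patch, not code to copy verbatim): the two levels derive their index from different GVA bit ranges and have different table sizes, so there is no natural per-level loop as in the long-descriptor walk.

#include <stdint.h>

typedef uint32_t vaddr_t;   /* stand-in for the Xen type */

/* First-level offset: GVA<31-n:20>, i.e. 12 - n bits (n = TTBCR.N). */
static inline unsigned int first_level_offset(vaddr_t gva, unsigned int n)
{
    return (gva >> 20) & ((1u << (12 - n)) - 1);
}

/* Second-level offset: GVA<19:12>, always 8 bits (256 4-byte entries). */
static inline unsigned int second_level_offset(vaddr_t gva)
{
    return (gva >> 12) & 0xff;
}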

+    paddr = (ttbr & ~mask) | (offsets[level] << 2);

[...]

+    mask = ((1ULL << 10) - 1);
+    pte = table[offsets[level] & mask];
+

If you find a way to avoid playing with the offsets another time, then maybe it is worth it.


const vaddr_t offsets[2] = {
     (gva & GENMASK((31 - n), 20)) >> 20,
     (gva & GENMASK(19, 12)) >> 12,
};
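
For reference, assuming the usual Linux/Xen-style definition, GENMASK(h, l) simply builds a contiguous mask covering bits h..l inclusive, e.g.:

#define BITS_PER_LONG 64   /* 64 on arm64; adjust for arm32 */
#define GENMASK(h, l) \
    (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

/* GENMASK(19, 12) == 0x000ff000, GENMASK(31, 20) == 0xfff00000 (n == 0 case). */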


Furthermore, it would be clearer if you first mask and then shift, as it
helps to understand the code.

If you use GENMASK (or GENMASK_ULL if you don't extend GENMASK), this
will make everything more obvious:


[...]


+
+    /*
+     * As we have considered up to 2 MSBs of the GVA for mapping the first
+     * level translation table, we need to make sure that we limit the table
+     * offset that is indexed by GVA<31-n:20> to max 10 bits to avoid
+     * exceeding the page size limit.
+     */
+    mask = ((1ULL << 10) - 1);
+    pte = table[offsets[level] & mask];

This looks quite complex for just reading a pte. I think it would be
worth leveraging vgic_access_guest_memory for that (same in LPAE).
It would also add safety if the offset into the table is mistakenly computed
(from the current code, I can't convince myself the offset will always
be right).

As far as I understand, we would still need to use the same offsets even
if we used vgic_access_guest_memory. Also, the only significant
difference between my implementation and vgic_access_guest_memory is
that vgic_access_guest_memory checks whether the access crosses a page
boundary, which is quite hard to trigger here, as the offsets are limited in
the number of bits so that they don't cross page boundaries.
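
As an aside, the size arithmetic itself is easy to sanity-check at build time. A minimal stand-alone sketch, assuming 4-byte short-descriptor entries and a 4KB page size:

#include <assert.h>
#include <stdint.h>

typedef uint32_t short_desc_t;          /* stand-in: 4-byte short-descriptor entry */

#define PAGE_SIZE_4K  4096u
#define OFFSET_MASK   ((1u << 10) - 1)  /* the 10-bit mask used in the patch */

/* 1024 entries of 4 bytes each cover exactly one 4KB page. */
static_assert((OFFSET_MASK + 1) * sizeof(short_desc_t) == PAGE_SIZE_4K,
              "a 10-bit masked offset cannot step past one page of 4-byte entries");

That alone only rules out crossing a page boundary if the mapped table pointer is itself page-aligned, which is the subtle part once TTBR bits are ORed into it (see the hunk below).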

+        /* Make sure that we consider the bits ttbr<12:14-n> if n > 2. */
+        mask = ((1ULL << 12) - 1) & ~((1ULL << (14 - n)) - 1);
+        table = (short_desc_t *)((unsigned long)table | (unsigned long)(ttbr & mask));

[...]

+    mask = ((1ULL << 10) - 1);
+    pte = table[offsets[level] & mask];

Are you able to prove to me that this will never cross a page boundary? Personally, when I read that, I cannot convince myself that this will never happen. This is guest memory we map, so if there is any coding error, you may access unmapped memory and then DoS Xen. Not very nice :).

We have a function that reads/writes guest memory with all the safety checks associated with it. We should take advantage of any helper that mitigates potential miscalculations, as you would then just access the wrong IPA. That is better than a DoS, and it also avoids open-coding, so it is much easier to adapt to any change in the way we access guest memory.
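
Something along these lines is what I have in mind (untested sketch; the helper's exact name and signature may differ from what is in the tree, so treat them as assumptions):

/* Assumes Xen-internal types (struct domain, paddr_t) and a guest-memory
 * accessor along the lines of the vgic helper discussed above. */
typedef uint32_t short_desc_t;   /* stand-in for the patch's 4-byte entry type */

static int read_short_desc(struct domain *d, paddr_t table_ipa,
                           unsigned int index, short_desc_t *desc)
{
    /* Each short-descriptor entry is 4 bytes wide. */
    paddr_t desc_ipa = table_ipa + (paddr_t)index * sizeof(*desc);

    /*
     * Assumed signature: vgic_access_guest_memory(d, ipa, buf, size, is_write).
     * It is expected to fail cleanly on an unmapped IPA instead of letting
     * Xen fault, which is the whole point of using it here.
     */
    return vgic_access_guest_memory(d, desc_ipa, desc, sizeof(*desc), false);
}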

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

