
Re: [Xen-devel] RT Xen on ARM - R-Car series

On 2/4/19 10:28 AM, Andrii Anisov wrote:

Hi Andrii,

On 01.02.19 19:40, Julien Grall wrote:
On 01/02/2019 16:53, Roger Pau Monné wrote:
On Thu, Jan 31, 2019 at 11:14:37PM +0000, Julien Grall wrote:
On 1/31/19 9:56 PM, Stefano Stabellini wrote:
On Thu, 31 Jan 2019, Julien Grall wrote:
On 31/01/2019 12:00, Andrii Anisov wrote:
On 31.01.19 13:37, Julien Grall wrote:
So, I've got a hacky patch to 'fix' this on x86, by taking a reference
to the mfn behind the virtual address provided when setting up the
hypercall and mapping it in Xen.

That was the idea I had in mind :).
Looks interesting.

Hopefully, no guest modifies the mapping afterwards (i.e. makes the virtual address point to a different physical address).
I guess that mapping should not be moved around. Otherwise it would be broken even with the current implementation.

What I meant is that the virtual address stays the same but the guest physical address may change. I don't see how this could be broken today; can you explain it?

Moreover, having that buffer mapped in Xen will reduce context switch time as a side effect.

I am still unsure whether we really want to keep that always mapped.

Each guest can support up to 128 vCPUs. So we would have 128 runstates mapped. Each runstate would take up to 2 pages. This means that each guest would require up to 1MB of vmap.

The VMAP in Xen is quite limited (1GB at most) and shared with device mapping (e.g ITS...).

On the other side, not mapping the pages contiguously is going to be a pain. So maybe the downside is worth it. It would be interesting to have the pros/cons of each solution written down in the series.

This however doesn't work on ARM due
to the lack of paging_gva_to_gfn. I guess there's something similar to
translate a guest virtual address into a gfn or a mfn?

get_page_from_gva should do the job for you.
+int map_runstate_area(struct vcpu *v,
+                      struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset;
+    unsigned int i;
+    struct domain *d = v->domain;
+    size_t size =
+        has_32bit_shinfo(v->domain) ? sizeof(*v->compat_runstate_guest)
+                                    : sizeof(*v->runstate_guest);
+
+    if ( v->runstate_guest )
+        return -EBUSY;
+
+    offset = area->addr.p & ~PAGE_MASK;
+
+    for ( i = 0; i < ARRAY_SIZE(v->runstate_mfn); i++ )
+    {
+        p2m_type_t t;
+        uint32_t pfec = PFEC_page_present;
+        gfn_t gfn = _gfn(paging_gva_to_gfn(v, area->addr.p, &pfec));
+        struct page_info *pg;
+
+        if ( gfn_eq(gfn, INVALID_GFN) )
+            return -EFAULT;
+
+        v->runstate_mfn[i] = get_gfn(d, gfn_x(gfn), &t);

get_gfn would need to be implemented on Arm.
I'm going to step into this tomorrow, I guess. I have to finish something else today.

I thought more about it over the weekend. I would actually not implement get_gfn, but instead implement on x86 a function similar to get_page_from_gva. The reason behind this is that the function on Arm is quite complex, as it caters for many different use cases.
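For reference, a rough sketch of how the Arm side could rely on get_page_from_gva() to both translate the guest virtual address and take a page reference in one step. This is only an illustration against Xen-internal APIs, not a tested patch; the helper name runstate_page_from_gva is made up for the example:

```c
/* Illustrative sketch only: an Arm map_runstate_area() could use
 * get_page_from_gva() instead of a new get_gfn() implementation.
 * get_page_from_gva() walks the stage-1 and stage-2 translations and
 * returns the backing page with a reference held, or NULL on failure. */
static struct page_info *runstate_page_from_gva(struct vcpu *v, vaddr_t va)
{
    /* GV2M_WRITE: the runstate area is written by Xen on context switch. */
    return get_page_from_gva(v, va, GV2M_WRITE);
}
```

The caller would then map the returned page(s) (e.g. via vmap) and drop the references with put_page() when the area is unregistered.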


Julien Grall

Xen-devel mailing list


