
Re: [PATCH 1/2] xen/arm: Convert runstate address during hypcall



On 12/06/2020 15:13, Bertrand Marquis wrote:
Hi Julien,

Hi,

On 12 Jun 2020, at 11:53, Julien Grall <julien@xxxxxxx> wrote:
The code below suggests the hypercall will just fail if you call it from a 
different vCPU. Is that correct?

No, the hypercall will work, but in that case the area won't be updated.
When the hypercall is called on the current vCPU, the area will be updated when
the context is restored before returning to the guest, so there is no need to
do it an extra time in the hypercall itself.

I am afraid this is not correct: update_runstate_area() is only called when context switching between two vCPUs. So the only way it could be called on return from the hypercall is if the vCPU gets preempted.

When this is done for a different vCPU, the current code does not update the
area anymore.

I did consider doing it at first, but the complexity it added (mainly due to
the dual-page case) was very high, and I could not really think of a use case
or find one in Linux.
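
For reference, the update path for an area crossing a page boundary would have
needed something along these lines (rough sketch only; page0/page1/offset are
illustrative names, not code from this patch):

    /* Runstate area straddling a page boundary: map both pages and
     * split the copy. */
    size_t first = PAGE_SIZE - offset;
    void *p0 = __map_domain_page(page0);
    void *p1 = __map_domain_page(page1);

    memcpy(p0 + offset, &v->runstate, first);
    memcpy(p1, (void *)&v->runstate + first, sizeof(v->runstate) - first);

    unmap_domain_page(p1);
    unmap_domain_page(p0);

plus similar handling when setting/clearing XEN_RUNSTATE_UPDATE if
state_entry_time itself ends up split across the two pages.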

You may want to have an updated view without forcing the vCPU to exit to the hypervisor and do a context switch.

But now that I think of it I could, in that specific case, use a copy_to_guest.

I think you should be able to call update_runstate_area() directly.
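
Something along these lines at the end of the registration path should do (untested sketch, reusing the fields introduced by this patch; it has to go after the unlock because update_runstate_area() takes the same lock):

    v->arch.runstate_guest.page = page;
    v->arch.runstate_guest.offset = offset;

    spin_unlock(&v->arch.runstate_guest.lock);

    /* Bring the freshly registered area up to date right away. */
    update_runstate_area(v);

    return 0;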


Do you think this is something which would make sense to restore?

I would like to avoid diverging too much from the current definition of the hypercall. In this case, I don't think the restriction you add is necessary.


+    if ( !page )
+    {
+        spin_unlock(&v->arch.runstate_guest.lock);
+        gprintk(XENLOG_DEBUG, "Runstate pointer is not mapped\n");
+        return -EINVAL;
      }
 
-    __copy_to_guest(runstate_guest(v), &runstate, 1);
+    v->arch.runstate_guest.page = page;
+    v->arch.runstate_guest.offset = offset;
+

In the current implementation, the runstate area was updated with the latest
information during the hypercall. This doesn't seem to happen anymore. Is there
any specific reason?

As explained before, this is still happening on the current vcpu as a result of 
the context switch back to the guest at the end of the hypercall.

See above, I don't believe this is correct. :).

+    spin_unlock(&v->arch.runstate_guest.lock);
+
+    return 0;
+}
+
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    struct vcpu_runstate_info *guest_runstate;
+    void *p;
+
+    spin_lock(&v->arch.runstate_guest.lock);
 
-    if ( guest_handle )
+    if ( v->arch.runstate_guest.page )
      {
-        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        p = __map_domain_page(v->arch.runstate_guest.page);
+        guest_runstate = p + v->arch.runstate_guest.offset;
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+            guest_runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+        }
+
+        memcpy(guest_runstate, &v->runstate, sizeof(v->runstate));
          smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            guest_runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            smp_wmb();

Why do you need this barrier?

As the bit is supposed to be used to wait for an update to be done, I felt it 
was better to push this out as soon as possible.

smp_wmb() only ensures that any write before the barrier will be seen before any write after the barrier. It doesn't guarantee the other end will see it right away...

As there is already one after the memcpy, the cost should be fairly limited 
here.

... so this feels more like a micro-optimization (happy to be proved wrong). The memory barriers are already tricky enough to use/understand, so I would rather not use more of them than necessary.
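
For completeness, the consumer side is expected to be a seqlock-like loop, roughly as below (illustrative sketch only; 'r' stands for the guest's pointer to its registered runstate area):

    /* Guest-side snapshot loop pairing with XEN_RUNSTATE_UPDATE. */
    struct vcpu_runstate_info snap;
    uint64_t t;

    do {
        t = ACCESS_ONCE(r->state_entry_time);
        smp_rmb();
        snap = *r;
        smp_rmb();
    } while ( (t & XEN_RUNSTATE_UPDATE) ||
              t != ACCESS_ONCE(r->state_entry_time) );

The reader keeps retrying until the flag is clear and state_entry_time is stable, so an extra barrier after clearing the flag doesn't make it observe the update any sooner.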

Cheers,

--
Julien Grall



 

