
Re: [Xen-devel] [PATCH 3/7] vm-event: introduce vm_event_vcpu_enter



On 6/16/2016 11:33 PM, Razvan Cojocaru wrote:
> On 06/16/16 23:10, Corneliu ZUZU wrote:
>> On 6/16/2016 5:51 PM, Jan Beulich wrote:
>>> On 16.06.16 at 16:08, <czuzu@xxxxxxxxxxxxxxx> wrote:
>>>> @@ -509,6 +508,8 @@ void hvm_do_resume(struct vcpu *v)
>>>>           }
>>>>       }
>>>> +    vm_event_vcpu_enter(v);
>>> Why here?
>> Why indeed. It made sense because the monitor_write_data handling was
>> originally there, and the plan was then to move it into vm_event_vcpu_enter
>> (which happens in the following commit).
>> The question, though, is why monitor_write_data was handled there in the
>> first place. Why was it not put e.g. in vmx_do_resume, immediately after
>> the call to hvm_do_resume and just before the reset_stack_and_jump...?
>> And what happens to the monitor_write_data handling if this:
>>
>> if ( !handle_hvm_io_completion(v) )
>>     return;
>>
>> causes a return?
> It's in hvm_do_resume() because, for one, that's the place that was
> suggested (or at least confirmed, when I proposed it for such things)
> on this list back when I wrote the code. And then it's here because
> vmx_do_things()-type functions are, well, VMX-specific, and I had hoped
> that by choosing hvm-prefixed functions I'd get SVM support for free.
>
> As for the handle_hvm_io_completion(v) return, my understanding was that
> it would eventually cause another exit, and we'd get to the code below
> once the IO part is done.
>
>
> Thanks,
> Razvan

Thanks; so the vm_event_vcpu_enter call should indeed be there, to avoid wrongly calling it multiple times before actually entering the vCPU (due to pending I/O). I wonder, though, whether anything would go wrong if I put the call after the "inject pending hw/sw trap" part.
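
For reference, this is roughly how I picture the relevant part of hvm_do_resume() with the two candidate call sites marked; the trap-injection details are from memory and only illustrative, not the exact code:

/*
 * Simplified outline of hvm_do_resume() (xen/arch/x86/hvm/hvm.c);
 * helper and field names around trap injection are approximate.
 */
void hvm_do_resume(struct vcpu *v)
{
    check_wakeup_from_wait();

    pt_restore_timer(v);

    /*
     * An emulated I/O request is still in flight: bail out here and
     * let hvm_do_resume() run again once it has completed.
     */
    if ( !handle_hvm_io_completion(v) )
        return;

    /* ... monitor_write_data handling currently lives here ... */

    vm_event_vcpu_enter(v);    /* placement in this patch */

    /* Inject pending hw/sw trap, if one was requested. */
    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
    {
        hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
        v->arch.hvm_vcpu.inject_trap.vector = -1;
    }

    /*
     * vm_event_vcpu_enter(v);    <- the alternative placement asked
     *                               about above, after trap injection
     */
}

The question above is whether moving the call below the trap-injection block would change anything for what vm_event_vcpu_enter ends up doing (e.g. the monitor_write_data handling planned for the next patch).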

Corneliu.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

