
RE: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain

  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
  • From: "Li, Xin B" <xin.b.li@xxxxxxxxx>
  • Date: Thu, 23 Feb 2006 05:38:39 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 22 Feb 2006 21:39:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcY34GdxeSsZ78rnTjWtYDygLHNSWAAE2EEw
  • Thread-topic: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain

>> Per vcpu IO evtchn patch for HVM domain.
>> We are starting to send patches to support SMP VMX guest, for SVM side,
>> should have a test to see if this patch breaks anything there.
>Can you explain the bind_interdomain logic? Looks as though both the
>device model *and* Xen are doing bind_interdomain now? I'd prefer to do
>it just in the device model, especially since you had to punch a hole
>through to evtchn_bind_vcpu() to be able to do it within Xen!

For the bind_interdomain logic, I think it is almost the same as the
current two-step binding; no Xen hypervisor code is changed for this.
1) The current code allocates an *unbound* port for the VMX domain in
python code (image.py), which then calls xc_hvm_build with that port as
a parameter. My patch just moves this allocation into xc_hvm_build, so
there is no longer any need to pass the port parameter; otherwise we
would have to pass an array of unbound ports, one per vcpu, to
xc_hvm_build.
2) The logic in the device model is also almost the same: it binds the
previously allocated unbound port to a dom0 port, and my patch turns
this into a loop that does the bind once for each vcpu. (A rough sketch
of both steps follows below.)
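
To make the two steps concrete, here is a minimal sketch. The
xc_evtchn_alloc_unbound call is the libxc function the current code
uses, but the exact signatures below are from memory and may not match
the tree, and dm_bind_interdomain() is only a hypothetical stand-in for
however qemu-dm issues EVTCHNOP_bind_interdomain today:

#include <stdint.h>
#include <xenctrl.h>

/* Hypothetical stand-in for the device model's existing bind path. */
extern int dm_bind_interdomain(uint32_t remote_dom, uint32_t remote_port);

/* Step 1 (inside xc_hvm_build): allocate one unbound port per vcpu in
 * the HVM domain, with dom0 as the remote end, instead of receiving a
 * single pre-allocated port from image.py. */
static int alloc_io_evtchns(int xc_handle, uint32_t hvm_dom,
                            unsigned int nr_vcpus, uint32_t *ports)
{
    unsigned int i;
    for ( i = 0; i < nr_vcpus; i++ )
    {
        int port = xc_evtchn_alloc_unbound(xc_handle, hvm_dom, 0 /* dom0 */);
        if ( port < 0 )
            return -1;
        ports[i] = port;  /* stored wherever the device model will read them */
    }
    return 0;
}

/* Step 2 (device model): the existing single bind becomes a loop,
 * binding each vcpu's unbound port to a local dom0 port. */
static int bind_io_evtchns(uint32_t hvm_dom, unsigned int nr_vcpus,
                           const uint32_t *remote_ports, int *local_ports)
{
    unsigned int i;
    for ( i = 0; i < nr_vcpus; i++ )
    {
        local_ports[i] = dm_bind_interdomain(hvm_dom, remote_ports[i]);
        if ( local_ports[i] < 0 )
            return -1;
    }
    return 0;
}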

bind_interdomain binds a port to vcpu0 by default, so to let the device
model notify a particular vcpu of the VMX domain, it seems I have to
call evtchn_bind_vcpu in vmx_do_launch if I only use the current event
channel interface (see the fragment below). Any comments?
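
Roughly what I mean, as a hypothetical fragment: it assumes
evtchn_bind_vcpu() in common/event_channel.c is made callable from arch
code (that is the hole Keir mentioned), and io_port is just a
placeholder name for the per-vcpu IO port allocated in xc_hvm_build:

/* Called from vmx_do_launch(): rebind this vcpu's IO event channel from
 * the bind_interdomain default (vcpu0) to the vcpu it belongs to.
 * Assumes evtchn_bind_vcpu(port, vcpu_id) is exported from
 * common/event_channel.c; io_port is a placeholder for the per-vcpu port. */
static void route_io_evtchn_to_vcpu(struct vcpu *v, unsigned int io_port)
{
    evtchn_bind_vcpu(io_port, v->vcpu_id);
}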

Have I understood your question correctly?


>  -- Keir