[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-devel][RFC]degradation on IPF due to hypercall set irq


  • To: "Keir Fraser" <keir@xxxxxxxxxxxxx>
  • From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
  • Date: Wed, 22 Nov 2006 18:23:34 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 22 Nov 2006 02:23:42 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AccN5wYmUmir46SdT8a+ptVHeNZ8fgAHbs2mAACOJaAAASp7iwAAG0VmAAB2zNAAAmxzrgAAEGDgAAAZWmIAAC5KoAAAk0fTAAD34HA=
  • Thread-topic: [Xen-devel][RFC]degradation on IPF due to hypercall set irq

Keir Fraser wrote on 22 Nov 2006 17:48:
> On 22/11/06 09:38, "Xu, Anthony" <anthony.xu@xxxxxxxxx> wrote:
> 
> Did the IDE code really need to be made multithreaded?
This code was added about a year ago; the purpose was definitely to improve
IDE performance, though I don't have the performance data.
We can imagine that if dom0 and the HVM domain are running on different CPUs,
it will improve parallelism between the HVM domain and the qemu IDE device.


>I suppose it's a
> better model for the stub domain plans...  Anyway, it's a pain here
Maybe we can let performance data decide this.


> because it will require the shadow wire bitmap to be updated with
> atomic accesses and the multicall state to be per-thread or to be
> protected with a mutex. Each thread should flush multicall state
> before it blocks. 

I prefer atomic accesses; we used them in the shared PIC.
If each thread flushes its multicall state separately,
there will be some extra hypercalls.


-- Anthony

> 
>  -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

