RE: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support

To: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Date: Sat, 28 Nov 2009 13:15:04 +0000
Accept-language: en-US
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Delivery-date: Sat, 28 Nov 2009 05:19:30 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <EADF0A36011179459010BDF5142A457501D006BBAC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <EADF0A36011179459010BDF5142A457501D006B913@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA342A7A7E951@xxxxxxxxxxxxxxxxxxxxxxxxx> <EADF0A36011179459010BDF5142A457501D006BBAC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcpvCRTacBm7g/TlQ5GetSSm6xA1EAAc2xzgAAA5DEAAK6iaMA==
Thread-topic: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
> The domain lock is in the grant_op hypercall. If the multiple tasklets are
> fighting with each other for this big domain lock, it becomes a bottleneck
> and hurts performance.
> Our test system has 16 logical processors in total, so we have 16 vcpus in
> dom0 by default. 10 of them are used to handle the network load. For our
> test case, dom0's total vcpu utilization is ~461.64%, so each vcpu occupies
> ~46%.
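
For context, the contention described above arises because each tasklet ends up
issuing its own grant-copy batch (GNTTABOP_copy) against the same domain, and
those batches all take the one domain lock in the hypercall. A rough sketch of
that structure is below (illustrative only, not the patch itself; names such as
netbk_group and NR_GROUPS are invented):

/* Sketch: per-group tasklets that all end up hitting the same domain
 * lock.  Not the actual patch; netbk_group and NR_GROUPS are invented. */
#include <linux/interrupt.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

#define NR_GROUPS 16

struct netbk_group {
        struct tasklet_struct rx_tasklet;
        struct gnttab_copy copy_ops[256];
        unsigned int nr_copy_ops;
};

static struct netbk_group groups[NR_GROUPS];

static void net_rx_action(unsigned long data)
{
        struct netbk_group *grp = (struct netbk_group *)data;

        /* Each group issues its own copy batch, but every batch takes the
         * same per-domain lock inside the hypervisor, so tasklets running
         * in parallel on different vcpus contend on it. */
        HYPERVISOR_grant_table_op(GNTTABOP_copy, grp->copy_ops,
                                  grp->nr_copy_ops);
        /* ... hand the copied packets to the relevant netfronts ... */
}

static void netbk_groups_init(void)
{
        int i;

        for (i = 0; i < NR_GROUPS; i++)
                tasklet_init(&groups[i].rx_tasklet, net_rx_action,
                             (unsigned long)&groups[i]);
}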

Having 10 VCPUs for dom0 doesn't seem like a good idea -- it really oughtn't to 
need that many CPUs to handle IO load. Have you got any results with e.g. 2 or 
4 VCPUs?

When we switch over to using netchannel2 by default this issue should largely 
go away anyhow as the copy is not done by dom0. Have you done any tests with 
netchannel2?

> Actually the multiple tasklets in netback can already improve the QoS of the
> system, so I think they can also help to get better responsiveness for that
> vcpu.
> I can try to write another patch which replaces the tasklets by kthreads,
> because I think it is a separate job from the multi-tasklet netback support.
> (The kthread is used to guarantee the responsiveness of userspace, whereas
> multi-tasklet netback is used to remove dom0's cpu utilization bottleneck.)
> However, I am not sure whether the improvement in QoS from this change is
> needed on MP systems?

Have you looked at the patch that xenserver uses to replace the tasklets by 
kthreads?
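
For reference, a tasklet-to-kthread conversion along those lines usually ends up
looking something like the sketch below (illustrative only, not the xenserver
patch; net_rx_action_work() and the other names are invented):

/* Sketch of replacing a tasklet with a kernel thread for netback-style
 * work.  Running in process context lets the scheduler balance it against
 * userspace instead of starving it. */
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <asm/atomic.h>

static DECLARE_WAIT_QUEUE_HEAD(netbk_wq);
static atomic_t netbk_pending = ATOMIC_INIT(0);

static void net_rx_action_work(void);  /* placeholder for the old tasklet body */

/* Called from the event-channel path instead of tasklet_schedule(). */
static void netbk_kick(void)
{
        atomic_set(&netbk_pending, 1);
        wake_up(&netbk_wq);
}

static int netbk_kthread(void *unused)
{
        while (!kthread_should_stop()) {
                wait_event_interruptible(netbk_wq,
                                         atomic_read(&netbk_pending) ||
                                         kthread_should_stop());
                atomic_set(&netbk_pending, 0);

                /* Do what the tasklet used to do, then yield so userspace
                 * in dom0 stays responsive. */
                net_rx_action_work();
                cond_resched();
        }
        return 0;
}

/* Started once at init time, e.g.
 *   kthread_run(netbk_kthread, NULL, "netback");
 */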

Thanks,
Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel