
RE: [Xen-devel][PV-ops][PATCH] Netback: Fix PV network issue for netback multiple threads patchset



On Thu, 2010-06-17 at 09:16 +0100, Xu, Dongxiao wrote:
> Ian,
> 
> Sorry for the late response, I was on vacation for the past few days.

I was also on vacation, so sorry for _my_ late reply ;-)

> Ian Campbell wrote:
> > On Thu, 2010-06-10 at 12:48 +0100, Xu, Dongxiao wrote:
> >> Hi Jeremy,
> >> 
> >> The attached patch should fix the PV network issue after applying
> >> the netback multiple threads patchset. 
> > 
> > Thanks for this, Dongxiao. Do you think this crash was a potential
> > symptom of this issue? It does seem to go away if I apply your patch.
> 
> Actually, the symptom is the same on my side without the fix patch.

Great, thanks.

> > On an unrelated note, do you have any plans to make the number of
> > groups react dynamically to CPU hotplug? Not necessarily while there
> > are actually active VIFs (might be tricky to get right) but perhaps
> > only when netback is idle (i.e. when there are no VIFs configured),
> > since often the dynamic adjustment of VCPUs happens at start of day
> > to reduce the domain 0 VCPU allocation from the total number of
> > cores in the machine to something more manageable.
> 
> I'm sorry, currently I am busy with some other tasks and may not have
> time to do this job.

I understand.

> But if the case is to reduce the dom0 VCPU number, keeping the group
> number unchanged will not impact performance, since the group count
> reflects the tasklet/kthread number and it has no direct association
> with dom0's VCPU number.

Yes, that mitigates the issue to a large degree. I was just concerned
about e.g. 64 threads competing for 4 VCPUs or similar, which seems
wasteful in terms of one resource or another...

For XCP (which may soon switch from 1 to 4 domain 0 VCPUs in the
unstable branch) I've been thinking of the following patch. I wonder if
it might make sense in general? 4 is rather arbitrarily chosen, but I
think even on a 64 core machine you wouldn't want to dedicate more than
some fraction of the cores to netback activity, and if you do then it
is configurable.

Ian.


netback: allow configuration of maximum number of groups to use

Limit to 4 by default.

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

diff -r 7692c6381e1a drivers/xen/netback/netback.c
--- a/drivers/xen/netback/netback.c     Fri Jun 11 08:44:25 2010 +0100
+++ b/drivers/xen/netback/netback.c     Fri Jun 11 09:31:48 2010 +0100
@@ -124,6 +124,10 @@
 static int MODPARM_netback_kthread = 1;
 module_param_named(netback_kthread, MODPARM_netback_kthread, bool, 0);
 MODULE_PARM_DESC(netback_kthread, "Use kernel thread to replace tasklet");
+
+static unsigned int MODPARM_netback_max_groups = 4;
+module_param_named(netback_max_groups, MODPARM_netback_max_groups, uint, 0);
+MODULE_PARM_DESC(netback_max_groups, "Maximum number of netback groups to allocate");
 
 /*
  * Netback bottom half handler.
@@ -1748,7 +1752,7 @@
        if (!is_running_on_xen())
                return -ENODEV;
 
-       xen_netbk_group_nr = num_online_cpus();
+       xen_netbk_group_nr = min(num_online_cpus(), MODPARM_netback_max_groups);
        xen_netbk = (struct xen_netbk *)vmalloc(sizeof(struct xen_netbk) *
                                            xen_netbk_group_nr);
        if (!xen_netbk) {
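
For reference, a rough usage sketch for the new parameter. The module
name below ("netbk") is an assumption -- it depends on how netback is
built in your tree, so substitute the real name:

    # load netback with a smaller cap on the number of groups
    modprobe netbk netback_max_groups=2

    # or, if netback is built into the kernel, via the command line
    netbk.netback_max_groups=2

Note that with the 0 permissions argument to module_param_named() the
parameter is not exposed under /sys/module/.../parameters/, so it can
only be set at load (or boot) time; passing e.g. 0444 instead would let
the current value be read back at runtime.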




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

