xen-devel

Re: [Xen-devel] [PATCH] vcpu pin weight consideration TAKE3

To: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] vcpu pin weight consideration TAKE3
From: Emmanuel Ackaouy <ackaouy@xxxxxxxxx>
Date: Tue, 10 Jul 2007 13:46:13 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <200707100847.l6A8lEEZ029160@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <200707100847.l6A8lEEZ029160@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi.

Before I discuss the approach again, let me first comment on your
patch:

I don't understand why you need to track the subset of
active VCPUs that are online. An active VCPU is one that
has recently consumed credits; I don't see the point of also
excluding the ones that happen to be blocked at the time you
sample them. Can you explain?


Now to step back a little and look at the approach itself:

Weights are useful to proportionally allocate credits to domains
which are competing for the _same_(!!!) CPU resources. When
you restrict where a VCPU can run, you break this assumption.
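
For concreteness, here is a toy calculation of that proportional
split when two domains do compete for the same pCPUs (made-up
numbers, not scheduler code):

/* Toy illustration only (made-up numbers): credits split in
 * proportion to weight when two domains compete for the same pCPUs. */
#include <stdio.h>

int main(void)
{
    int weight[] = { 256, 512 };  /* two domains sharing the same pCPUs */
    int total_credit = 300;       /* credits handed out per accounting period */
    int total_weight = weight[0] + weight[1];

    for (int d = 0; d < 2; d++)
        printf("domain %d gets %d credits\n",
               d, total_credit * weight[d] / total_weight);
    return 0;
}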

You are attempting to divide the total weight among different
"zones" by assigning each restricted VCPU's share of the total
weight to the physical CPU where it last ran. This works as
long as your distinct "zones" do not overlap; when they do,
it breaks down.

To be specific, in summing_vcpupin_weight(), you assign a
restricted VCPU's share of the total weight to the CPU it last
ran on. This assumes that all other competing VCPUs will
also sum over that physical CPU. This isn't true when distinct
CPU affinity masks ("zones") can overlap.
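
A stripped-down sketch of that failure mode (illustrative only;
this is not summing_vcpupin_weight() itself, and all names are
invented):

/*
 * Mimics the idea of summing each VCPU's weight share onto the pCPU
 * it last ran on (v->processor) and shows how overlapping affinity
 * "zones" skew the per-CPU totals.
 */
#include <stdio.h>

#define NR_PCPUS 3

struct toy_vcpu {
    int weight;     /* this VCPU's share of its domain's weight */
    int processor;  /* pCPU it last ran on */
};

int main(void)
{
    /* Zone A = {0,1}, zone B = {1,2}: the zones overlap on pCPU 1. */
    struct toy_vcpu vcpus[] = {
        { 256, 0 },  /* zone A VCPU, last ran on pCPU 0 */
        { 256, 1 },  /* zone A VCPU, last ran on pCPU 1 */
        { 512, 1 },  /* zone B VCPU, last ran on pCPU 1 */
        { 512, 2 },  /* zone B VCPU, last ran on pCPU 2 */
    };
    int pcpu_weight[NR_PCPUS] = { 0 };

    for (unsigned int i = 0; i < sizeof(vcpus) / sizeof(vcpus[0]); i++)
        pcpu_weight[vcpus[i].processor] += vcpus[i].weight;

    /*
     * pCPU 1 now carries weight from both zones (768), so any per-zone
     * factor derived from these sums sees the other zone's weight too.
     */
    for (int c = 0; c < NR_PCPUS; c++)
        printf("pcpu %d: summed weight %d\n", c, pcpu_weight[c]);
    return 0;
}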

When no two distinct CPU affinity masks overlap with each
other, the host system has essentially been partitioned.

Partitioning is an interesting subset of the general problem
because:
- It is easily defined.
- It has a simple and elegant solution.

Your solution only works when the system has been properly
partitioned. That is fine: the general problem of handling
overlapping CPU affinity masks is not a tractable one.

What I argue is that there is a cleaner approach to dealing
with the partitioning of the host: for the most part, you only
have to allow multiple csched_private structures to co-exist.
Each such group would have a master CPU doing its accounting
just like today. The accounting code would be left mostly
untouched: you probably just need to AND your group's assigned
CPU mask with the online mask here and there.
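
Roughly what I have in mind, as a sketch only (the toy_* names
are made up and this is not actual Xen code):

/*
 * One csched_private-like structure per non-overlapping partition,
 * with the partition's CPU mask ANDed against the online mask wherever
 * the accounting walks over CPUs.
 */
#include <stdint.h>

typedef uint64_t toy_cpumask_t;     /* one bit per pCPU */

struct toy_partition {
    toy_cpumask_t cpus;             /* pCPUs assigned to this partition */
    unsigned int master_cpu;        /* pCPU that runs this group's accounting */
    unsigned int weight;            /* total weight of domains in this group */
    /* ... the rest of today's csched_private state, per partition ... */
};

/* Accounting runs per partition, over its own online pCPUs only. */
void toy_partition_acct(struct toy_partition *p, toy_cpumask_t online)
{
    toy_cpumask_t cpus = p->cpus & online;   /* the "AND here and there" */
    unsigned int cpu;

    for (cpu = 0; cpu < 8 * sizeof(toy_cpumask_t); cpu++) {
        if (!(cpus & ((toy_cpumask_t)1 << cpu)))
            continue;
        /* ... existing per-pCPU credit accounting, unchanged ... */
    }
}

int main(void)
{
    /* Example: a partition owning pCPUs 0-1, accounted on pCPU 0. */
    struct toy_partition a = { .cpus = 0x3, .master_cpu = 0, .weight = 512 };
    toy_partition_acct(&a, 0xF /* online mask */);
    return 0;
}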

Keeping the accounting work the same and distributing it
across CPUs is more desirable than adding complexity while
still keeping it on a single CPU.

Personally I think Xen would also benefit from having a clean
interface for managing CPU partitions but this isn't strictly
necessary.

Unfortunately, having left XenSource some months ago, I
have virtually no time to spend on Xen. I would love to help
review patches that add partitioning support the way I have
described: by allowing independent scheduling "zones" to
co-exist, each represented by its own instance of the current
csched_private structure. I appreciate the work you are doing
to tackle this problem, but I think your patches are the wrong
way forward. Given that, I'm not going to be able to spend
time looking at further re-spins of this patch.

Cheers,
Emmanuel.

On Jul 10, 2007, at 10:47, Atsushi SAKAI wrote:

Hi, Keir and Emmanuel

This patch intends to correct the weight treatment for vcpu-pin.
Could you give me your comments on it?

sched_credit.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 114 insertions(+), 1 deletion(-)


The current algorithm is:

1) Calculate pcpu_weight based on all active VCPUs:
   vcpu_weight = sdom->weight / sdom->online_vcpu_count (newly
   added variable), summed per v->processor.
   (This routine runs when the VCPU count changes, not every
   30 msec.)

2) Calculate vcpupin_factor as the average pcpu_weight over the
   VCPU's pinned pCPUs divided by
   (csched_priv.weight / num_online_cpu()).

3) Consider the above factor when credit is added in the
   vcpu-pin case, at atomic_add(credit_fair, &svc->credit);
   see the sketch below.
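
A simplified, self-contained sketch of these three steps (TOY_*
names and numbers are made up; this is not the patch code itself):

#include <stdio.h>

#define TOY_NR_PCPUS     4
#define TOY_TOTAL_WEIGHT 1024   /* stand-in for csched_priv.weight */

int main(void)
{
    /* Step 1: each active VCPU adds sdom->weight / online_vcpu_count
     * to the entry for v->processor (recomputed when the VCPU count
     * changes, not on every 30 msec accounting tick). */
    int pcpu_weight[TOY_NR_PCPUS] = { 256, 256, 256, 256 };

    /* Step 2: for a VCPU pinned to pCPUs {0,1}, compare the average
     * weight on those pCPUs with the system-wide average per pCPU. */
    double pinned_avg = (pcpu_weight[0] + pcpu_weight[1]) / 2.0;
    double system_avg = (double)TOY_TOTAL_WEIGHT / TOY_NR_PCPUS;
    double vcpupin_factor = pinned_avg / system_avg;

    /* Step 3: scale the fair credit share by that factor before it is
     * added to the VCPU, i.e. where atomic_add(credit_fair, ...) runs. */
    int credit_fair = 300;
    printf("scaled credit_fair = %d\n", (int)(credit_fair * vcpupin_factor));
    return 0;
}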

Signed-off-by: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>

The differences from the previous patch (+365 lines) are:

1) The credit_balance consideration factor is omitted
   (about -150 lines).
2) The detailed pcpu_weight calculation is changed
   (about -100 lines); it currently uses v->processor instead
   of vcpu->cpu_affinity.

Thanks
Atsushi SAKAI

<vcpupin0710.patch>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
