
Re: [Xen-devel] Odd CPU Scheduling Behavior


  • To: "Carb, Brian A" <Brian.Carb@xxxxxxxxxx>
  • From: Emmanuel Ackaouy <ackaouy@xxxxxxxxx>
  • Date: Fri, 30 Mar 2007 12:12:29 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 30 Mar 2007 11:13:58 +0100
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Mar 29, 2007, at 17:57, Carb, Brian A wrote:
Emmanuel,

Yes - both vcpus are progressing, but the load gets pushed to one cpu. If I run top in interactive mode in each vm while the test is running and monitor cpu usage (set delay to 1 and show separate cpu states), each of the vm's cpus is equally loaded on average.

If I get this straight, you are running 2 VMs, each with 2 CPU-intensive VCPUs, and you're running all of this load on 2 physical CPUs. So we expect some time slicing.

Let's call the VCPUs: Vx.y where x is the VMid and y is the VCPUid.

Since there is no gang scheduling, it's just as likely that V0.0 and V0.1 will
time-slice on the same physical CPU as it is for, say, V0.0 and V1.0.

You could try to force V0.0 and V0.1 onto different physical CPUs, but there
would still be no guarantee that they run at the same time (as a gang).
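For example, separating V0.0 and V0.1 could be done with per-VCPU pinning via the xm toolstack (the domain name and CPU numbers here are illustrative, assuming the `xm vcpu-pin <domain> <vcpu> <cpus>` syntax of Xen 3.0.x):

```shell
# Pin VCPU 0 of domain vm1 to physical CPU 2, and VCPU 1 to physical CPU 3.
# This keeps the two VCPUs on different physical CPUs, but the scheduler
# still decides *when* each runs, so they are not guaranteed to run
# simultaneously (no gang scheduling).
xm vcpu-pin vm1 0 2
xm vcpu-pin vm1 1 3
```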

This is not really an oddity. What behavior would you like to see?

There are a few more oddities:

First, I see this behavior almost all the time when I run the test. However, occasionally, I do not see this behavior at all, and the load stays spread out on both cpus for the duration of the test (2 minutes).

Second, if I boot my ES7000/one to use only 4 CPUs (2 dual-core sockets), the load always stays evenly distributed on both cpus.

You said you were using cpumasks to force all of your VCPUs onto 2 given
physical CPUs anyway, so I'm not sure I understand the difference between
booting up with 4 sockets or 2...

Once VCPUs are running somewhere, they tend to stay there, so if you were
to start your test on VM0 first, it would spread its VCPUs across two physical
CPUs. Then, when you start VM1, one VCPU migration would happen, leaving
each physical CPU hosting two VCPUs. There should be about a 50-50 chance
of V0.0 and V0.1 landing on the same physical CPU.

Now, you're also running some I/O load in the VMs, so one must wonder
whether the VCPUs occasionally sleep, triggering migrations that pull a
queued VCPU over from the neighboring CPU.

Are the physical CPUs at 100%, with the VCPUs not moving around at all
during your tests?
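One way to check this directly is xm's VCPU listing, which shows the physical CPU each VCPU currently occupies along with its affinity mask. A simple sampling loop (a sketch; the one-second interval is arbitrary) would be:

```shell
# Sample VCPU placement once a second during the test.
# The CPU column of `xm vcpu-list` shows which physical CPU each
# VCPU is currently running on; watch it for migrations.
while true; do
    xm vcpu-list
    sleep 1
done
```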


brian carb
unisys corporation - malvern, pa

-----Original Message-----
From: Emmanuel Ackaouy [mailto:ackaouy@xxxxxxxxx]
Sent: Thursday, March 29, 2007 11:42 AM
To: Carb, Brian A
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Odd CPU Scheduling Behavior

There is no gang scheduling in Xen, so what you see is not unexpected.
Both VCPUs of the same VM are as likely to run on the same physical CPU as not. For each VM, though, both of its VCPUs should get equal CPU time if they are runnable, even if they alternately run on the same physical CPU.

I have seen some multithreaded applications/libraries back off from using execution vehicles (processes) to schedule a runnable thread when it doesn't seem to make forward progress, probably because some code somewhere assumes another process is hogging the CPU and that it's therefore better to lower the number of execution vehicles. In this case, multithreaded apps running in a 2-CPU guest on Xen sometimes schedule work on only 1 CPU when there is another VM competing for the physical CPU resources.

Are both VCPUs of each VM making forward progress during your test?

On Mar 29, 2007, at 16:58, Carb, Brian A wrote:

We're seeing a cpu scheduling behavior in Xen and we're wondering if
anyone can explain it.
 
We're running Xen 3.0.4 on a Unisys ES7000/one with 8 CPUs (4
dual-core sockets) and 32GB memory. Xen is built on SLES10, and the
system is booted with dom0_mem=512mb. We have 2 paravirtualized VMs,
each booted with 2 vcpus and 2GB memory, each running SLES10 and
Apache2 with the worker multi-processing module.
 
The vcpus of dom0, vm1 and vm2 are pinned as follows:
 
dom0 is relegated to 2 vcpus (xm vcpu-set 0 2) and these are pinned to
cpus 0-1
vm1 uses 2 vcpus pinned to cpus 2-3
vm2 uses 2 vcpus pinned to cpus 2-3
 
The cpus 4 through 7 are left unused.
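The pinning described above corresponds to commands along these lines (a sketch assuming the domain names dom0/vm1/vm2 are the xm identifiers on this host):

```shell
# Reduce dom0 to 2 VCPUs and confine both to physical CPUs 0-1.
xm vcpu-set 0 2
xm vcpu-pin 0 0 0-1
xm vcpu-pin 0 1 0-1

# Pin both VCPUs of each guest to physical CPUs 2-3,
# leaving CPUs 4-7 unused.
xm vcpu-pin vm1 0 2-3
xm vcpu-pin vm1 1 2-3
xm vcpu-pin vm2 0 2-3
xm vcpu-pin vm2 1 2-3
```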
 
Our test runs http_load against the Apache2 web servers in the 2 vms.
Since Apache2 is using the worker multi-processing module, we expect that
each vm will spread its load over the 2 vcpus, and during the test we
have verified this using top and sar inside a vm console.
 
The odd behavior occurs when we monitor cpu usage using xenmon in
interactive mode. By pressing "c", we can observe the load on each of
the cpus. When we examine cpus 2 and 3 initially, each is used equally
by vm1 and vm2. However, shortly after we start our testing, cpu2 runs
vm1 exclusively 100% of the time, and cpu3 runs vm2 100% of the time.
When the test completes, CPUs 2 and 3 go back to sharing the load of
vm1 and vm2.
 
Is this the expected behavior?

brian carb
unisys corporation - malvern, pa
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel






 

