Re: [Xen-devel] [PATCH] scrub pages on guest termination
Yes, sorry - should have removed our terminology from the description.
Node=physical machine
VS=HVM guest w/ pv-on-hvm drivers
Looking back at the original bug report, it seems to indicate the guest was
migrating from a system with 2 processors to one with 8. Specifically, from:
Dell Precision WorkStation 380
Processor: Intel(R) Pentium(R) D CPU 2.80GHz
# of CPUs: 2
Speed: 2.8GHz
to
Supermicro X7DB8
Processor: Genuine Intel(R) CPU @ 2.13GHz
# of CPUs: 8
Speed: 2.133 GHz
Keir Fraser wrote:
The aim of the loop was to scrub enough pages in a batch that lock
contention is kept tolerably low. Even if 16 pages is not sufficient
for that, I’m surprised a ‘node’ (you mean a whole system, presumably?)
would appear to lock up. Maybe pages would be scrubbed slower than we’d
like, but still CPUs should be able to get the spinlock often enough to
evaluate whether they have spent 1ms in the loop and hence get out of
there.
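
For reference, the batching pattern described above looks roughly like the
sketch below: each CPU takes the scrub lock only long enough to detach a
small batch of pages (16 here), scrubs the batch with the lock dropped, and
re-checks a ~1ms time budget before going round again. This is a standalone
illustrative sketch using pthreads, not the actual Xen code; scrub_list,
scrub_lock, scrub_one_page and page_scrub_work are placeholder names.

#include <pthread.h>
#include <time.h>

#define BATCH_SIZE 16

struct page { struct page *next; };

static struct page *scrub_list;                  /* shared dirty-page list */
static pthread_mutex_t scrub_lock = PTHREAD_MUTEX_INITIALIZER;

static void scrub_one_page(struct page *pg) { (void)pg; /* zero the page here */ }

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void page_scrub_work(void)
{
    long long start = now_ns();

    do {
        struct page *batch, **tail;
        int i;

        pthread_mutex_lock(&scrub_lock);
        if (scrub_list == NULL) {                /* nothing left to scrub */
            pthread_mutex_unlock(&scrub_lock);
            return;
        }

        /* Detach up to BATCH_SIZE pages so the lock is dropped quickly. */
        batch = scrub_list;
        tail  = &scrub_list;
        for (i = 0; i < BATCH_SIZE && *tail != NULL; i++)
            tail = &(*tail)->next;
        scrub_list = *tail;
        *tail = NULL;
        pthread_mutex_unlock(&scrub_lock);

        /* Scrub the detached batch without holding the lock. */
        for (struct page *pg = batch; pg != NULL; pg = pg->next)
            scrub_one_page(pg);

    } while (now_ns() - start < 1000000LL);      /* ~1ms time budget */
}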
What sort of system were you seeing the lockup on? Does it have very
many physical CPUs?
-- Keir
On 23/5/08 16:00, "Ben Guthro" <bguthro@xxxxxxxxxxxxxxx> wrote:
This patch solves the following problem. When a large VS terminates, the
node locks up, because the page_scrub_kick routine sends a softirq to
all processors instructing them to run the page scrub code. There they
interfere with each other as they serialize behind the page_scrub_lock.
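
The failure mode described above can be pictured with the standalone sketch
below (pthreads, not the actual Xen code): a broadcast "kick" wakes one
worker per CPU, and every worker then serializes behind the single scrub
lock while draining the shared work counter. The names scrub_kick, worker,
pages_left and NR_WORKERS are placeholders, not the actual Xen symbols.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_WORKERS 8                     /* one worker per (virtual) CPU */

static pthread_mutex_t kick_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  kick_cond = PTHREAD_COND_INITIALIZER;
static bool scrub_pending;

static pthread_mutex_t scrub_lock = PTHREAD_MUTEX_INITIALIZER;
static long pages_left = 1000000;        /* stand-in for the dirty-page list */

/* Analogous to the "kick": wake every worker at once. */
static void scrub_kick(void)
{
    pthread_mutex_lock(&kick_lock);
    scrub_pending = true;
    pthread_cond_broadcast(&kick_cond);
    pthread_mutex_unlock(&kick_lock);
}

static void *worker(void *arg)
{
    long id = (long)arg;

    pthread_mutex_lock(&kick_lock);
    while (!scrub_pending)
        pthread_cond_wait(&kick_cond, &kick_lock);
    pthread_mutex_unlock(&kick_lock);

    /* Every worker now serializes behind the single scrub lock. */
    for (;;) {
        pthread_mutex_lock(&scrub_lock);
        if (pages_left <= 0) {
            pthread_mutex_unlock(&scrub_lock);
            break;
        }
        pages_left -= 16;                /* take a 16-page batch off the list */
        pthread_mutex_unlock(&scrub_lock);
        /* (real code would scrub the batch here, outside the lock) */
    }
    printf("worker %ld done\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NR_WORKERS];

    for (long i = 0; i < NR_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    scrub_kick();
    for (int i = 0; i < NR_WORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}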