[Xen-devel] Re: [PATCH] xen: core dom0 support

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] xen: core dom0 support
From: Ingo Molnar <mingo@xxxxxxx>
Date: Sun, 8 Mar 2009 12:01:50 +0100
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>
Delivery-date: Sun, 08 Mar 2009 04:03:07 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49B23907.8030103@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1235786365-17744-1-git-send-email-jeremy@xxxxxxxx> <20090227212812.26d02f34.akpm@xxxxxxxxxxxxxxxxxxxx> <20090228084254.GA29342@xxxxxxx> <49A907DD.6010408@xxxxxxxx> <20090302120859.GB29015@xxxxxxx> <49B23907.8030103@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)

* Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:

> Ingo Molnar wrote:
>> Have I missed a mail of yours, perhaps? I don't have any record 
>> of you having posted mmap-perf perfcounters results. I grepped my 
>> mbox and the last mail I saw from you containing the string 
>> "mmap-perf" is from January 20, and it only includes my numbers.
>
> Yes, I think you must have missed a mail. I've attached it for 
> reference, along with a more complete set of measurements I 
> made across the series of patches (ending at commit 
> 1f4f931501e9270c156d05ee76b7b872de486304) intended to improve 
> pvops performance.

Yeah - indeed I missed those numbers - they were embedded in a 
spreadsheet document attached to the mail ;)

> My results showed a dramatic drop in cache references (from 
> about 300% pvop vs non-pvop, down to 125% with the full set of 
> patches applied), but it didn't seem to have much of an effect 
> on the overall wallclock time. I'm a bit sceptical of the 
> numbers here because, while each run's passes are fairly 
> consistent, booting and remeasuring seemed to cause larger 
> variations than the differences we're looking at. It would be 
> easy to handwave it away with "cache effects", but it's not 
> very satisfying.

Well, it's the L2 cache references that are being measured here, 
and the L2 cache is likely very large on your test system. So we 
can easily run into associativity limits in the L1 cache while 
still fitting mostly within the L2 cache.

Associativity effects do depend on the kernel image layout and 
on the precise kernel data structure allocations we do during 
bootup - and they don't really change after that.
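
To make that concrete, here is a minimal C sketch - the cache 
geometry (a 32 KiB, 8-way L1) is assumed for illustration, not 
measured on any system discussed here. Touching more blocks than 
the L1 has ways, all spaced exactly one way-stride apart, maps 
them into the same cache set: the working set thrashes L1 on 
every pass while still fitting comfortably in L2, so L1 misses 
show up without any corresponding L2 misses:

#include <stdio.h>
#include <stdlib.h>

#define WAY_STRIDE (32 * 1024 / 8)  /* assumed 4 KiB per L1 way */
#define NBLOCKS    16               /* > 8 ways => forced set conflicts */

int main(void)
{
        /* volatile forces a real load per access, so the compiler
         * can't cache the values in registers across iterations. */
        volatile char *buf = calloc(NBLOCKS, WAY_STRIDE);
        volatile char sink = 0;

        if (!buf)
                return 1;

        /* All accesses below index the same L1 set; with more blocks
         * than ways, each pass evicts earlier lines, so we miss in L1
         * on every access yet keep hitting in the much larger L2. */
        for (int iter = 0; iter < 1000000; iter++)
                for (int i = 0; i < NBLOCKS; i++)
                        sink += buf[(size_t)i * WAY_STRIDE];

        printf("%d\n", sink);
        free((void *)buf);
        return 0;
}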

> I also didn't find the measurements very convincing, because 
> the CPU cycle and instruction counts are effectively unchanged 
> (i.e., baseline non-pvops and original pvops apparently execute 
> exactly the same number of instructions, but we know there's a 
> lot more going on), and they stay unchanged even though each 
> added patch definitely removes some amount of pvops overhead 
> from the instruction stream. Is it just measuring usermode 
> stats? I ran it as root, with the command line you suggested 
> ("./perfstat -e -5,-4,-3,0,1,2,3 ./mmap-perf 1"). Cache misses 
> wandered up and down in a fairly non-intuitive way as well.

It's measuring kernel stats too - and I very much saw the 
instruction count change to the tune of 10% or so.
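
Whether kernel-mode events are counted is a per-counter 
attribute. The perfcounters syscall API being discussed here has 
since evolved into today's perf_event_open(2); as a minimal 
sketch against the modern interface (the workload placeholder is 
hypothetical), counting both user and kernel cache references 
for the current process looks like this:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
        /* glibc provides no wrapper; invoke the syscall directly. */
        return syscall(SYS_perf_event_open, attr, pid, cpu,
                       group_fd, flags);
}

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CACHE_REFERENCES;
        attr.disabled = 1;
        attr.exclude_kernel = 0;  /* 0 => kernel-mode events counted */
        attr.exclude_hv = 1;

        fd = perf_event_open(&attr, 0 /* self */, -1 /* any cpu */, -1, 0);
        if (fd < 0) {
                perror("perf_event_open");
                return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        /* ... the workload under test (e.g. an mmap loop) runs here ... */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        if (read(fd, &count, sizeof(count)) == sizeof(count))
                printf("cache references (user+kernel): %llu\n",
                       (unsigned long long)count);
        close(fd);
        return 0;
}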

> I'll do a rerun comparing current tip.git pvops vs non-pvops 
> to see if I can get some better results.

Thanks - I'll also try your patch on the same system I used for 
my numbers, so we'll have some comparison.

        Ingo

