This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Pthreads Overhead

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Pthreads Overhead
From: Bent Masriya <bentmasriya@xxxxxxxxx>
Date: Fri, 23 Oct 2009 14:59:55 -0400
Delivery-date: Fri, 23 Oct 2009 12:00:50 -0700
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:date:message-id:subject :from:to:content-type; bh=ERhq/+rjKQzCrvZeRB5ZdpunSHGnmIJrr+ROtvTqbUA=; b=UxLcYXtWzyLyYBfCNHRx1P/TL6fxUhbNektzj3vsUm5eoI0pzhDUj8UaVMUq9cqS3E IKaLVfDfV5Zzejo4FCiU6ljQMSUveMI8NfjWmNcPIJ3uH3aDIcMij3boToBHnYC7OWcj wNjbXZMKLJD8z6zOX+9kt8Gg1BTSTtGxeJAAs=
Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:content-type; b=ogl5sgON8g4AUM7kyS+Ggx4hKf0hyE/0PYBbT56MQjd+vkBXoDw+wUl2popPD3OE9t 2YYCsVFP9CYVhbCv1vX8kD2A5EeRWOHHtRt1h0GzsEhU2bI7Les6iAKghYVV7aOotTjJ dfmubiY42x1nTBad8+eXW/ogqJeA43ptzMBl8=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hi all,
I have been seeing a considerable (2x-3x) overhead when using POSIX threads (pthreads) and pthread mutex synchronization on Xen dom0. I am not sure whether this is an inherent overhead of Xen or a misconfiguration on my side.

Basically, I run a simple benchmark that times pthread_create and pthread_mutex_lock/unlock for 200 threads, and the overhead I am measuring for Xen dom0 is 1.5x-2.9x relative to non-xenified (native) performance. Can someone please confirm whether they have seen similar overhead, and/or shed some light on its source?

I am using the 2.6.26-2-xen-amd64 kernel with eight VCPUs on a machine with 8 physical cores (2 sockets), and comparing against native Linux SMP on the same hardware. This is Xen 3.2 with the credit scheduler. I am seeing this overhead for both HVM and paravirtualized Xen. Please advise.
Xen-users mailing list