xen-devel

[Xen-devel] Bunching of hypercalls/Xenbus

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Bunching of hypercalls/Xenbus
From: Peter Teoh <htmldeveloper@xxxxxxxxx>
Date: Sun, 26 Aug 2007 08:16:21 +0800
Delivery-date: Thu, 30 Aug 2007 02:58:08 -0700
Apologies for the new questions - please enlighten me.
 
In traditional Linux kernels, we have the delayed-I/O concept to improve performance: a disk block I/O request is merged with the previous block request whenever possible, for performance reasons.
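 
(For concreteness, a toy sketch of that merging idea follows. This is illustrative only, not the actual block-layer code; names are made up.)
 
    /* Illustrative sketch: fold a new block request into the previous one
     * when the sectors are contiguous, so the disk sees one larger request. */
    struct blk_req {
        unsigned long start_sector;
        unsigned long nr_sectors;
    };
 
    /* Returns 1 if 'next' was merged into 'prev', 0 otherwise. */
    static int try_back_merge(struct blk_req *prev, const struct blk_req *next)
    {
        /* Merge only if 'next' continues exactly where 'prev' ends. */
        if (prev->start_sector + prev->nr_sectors != next->start_sector)
            return 0;
        prev->nr_sectors += next->nr_sectors;   /* two requests, one disk op */
        return 1;
    }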
 
Analogously, given the large overhead of making a hypercall, is it possible to do the same? I.e., the core instructions would still execute in their original order, but the overhead of the multiple VM exits/entries would be bunched together and paid once. The hypercalls would necessarily be coming from different CPUs, right? Could further improvement be made by relaxing, at least sometimes, the atomicity requirement of the CPU instruction that triggered the VM exit condition? If so, it may be possible to bunch together hypercalls from the same CPU as well.
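 
(As a point of reference, Xen's multicall interface already does something along these lines for PV guests: several hypercalls are submitted in one transition. A minimal sketch, with types and hypercall numbers from the public headers (xen/include/public/xen.h), wrapper names following the Linux PV guest code, and error handling omitted:)
 
    #include <xen/interface/xen.h>   /* struct multicall_entry, __HYPERVISOR_*, DOMID_SELF */
    #include <asm/hypercall.h>       /* HYPERVISOR_multicall() */
 
    static void update_and_flush(struct mmu_update *req, unsigned int cnt,
                                 struct mmuext_op *flush)
    {
        struct multicall_entry mc[2];
 
        /* Entry 0: a batch of page-table updates. */
        mc[0].op      = __HYPERVISOR_mmu_update;
        mc[0].args[0] = (unsigned long)req;
        mc[0].args[1] = cnt;
        mc[0].args[2] = 0;                    /* no success-count pointer */
        mc[0].args[3] = DOMID_SELF;
 
        /* Entry 1: a local TLB flush (flush->cmd == MMUEXT_TLB_FLUSH_LOCAL),
         * riding on the same guest<->hypervisor transition. */
        mc[1].op      = __HYPERVISOR_mmuext_op;
        mc[1].args[0] = (unsigned long)flush;
        mc[1].args[1] = 1;
        mc[1].args[2] = 0;
        mc[1].args[3] = DOMID_SELF;
 
        /* One hypercall, two operations: the entry/exit cost is paid once. */
        HYPERVISOR_multicall(mc, 2);
    }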
 
Similarly, for the Xenbus state-transition machine: can we improve inter-domain communication performance by not necessarily satisfying every Xenbus request immediately, all the time? Can this be done for both the PV and HVM scenarios?
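 
(As a concrete reference point for the batching side of this: userspace can already group several xenstore operations into one transaction via libxenstore, so the store sees a single atomic update rather than one round trip per request. A minimal sketch, with illustrative paths and no error handling; note this batches xenstore round trips, not the state-machine transitions themselves:)
 
    #include <stdbool.h>
    #include <xs.h>   /* libxenstore: xs_transaction_start/end, xs_write */
 
    void publish_entries(struct xs_handle *h)
    {
        xs_transaction_t t = xs_transaction_start(h);
 
        /* Both writes become visible atomically when the transaction
         * commits, instead of being satisfied one at a time.
         * The paths below are made up for illustration. */
        xs_write(h, t, "device/example/0/state",    "3",  1);
        xs_write(h, t, "device/example/0/ring-ref", "42", 2);
 
        xs_transaction_end(h, t, false);   /* false = commit, true = abort */
    }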
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel