This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] [PATCH] properly daemonize vncviewer

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] properly daemonize vncviewer
From: "Charles Coffing" <ccoffing@xxxxxxxxxx>
Date: Tue, 01 Aug 2006 15:11:08 -0400
Delivery-date: Tue, 01 Aug 2006 12:11:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

Currently, vncviewer is spawned from xm, but it is not properly
daemonized.  The attached patch makes vncviewer run completely
independently of xm.

There are various reasons it should be daemonized, but the particular
problem we hit was that YaST called "xm create" and waited on output on
stdout/stderr; xm then spawned vncviewer (which never closed its
inherited stdout and stderr); xm then would exit, but YaST still had
open file descriptors, and therefore waited forever.  It would be
possible to work around this in YaST, but it seemed cleaner to
daemonize vncviewer.
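For reference, here is a minimal sketch of the usual double-fork
daemonization pattern in Python (this is illustrative only, not the
attached patch; the helper name and argument handling are my own):

```python
import os


def daemonize_and_exec(cmd):
    """Run cmd fully detached from the caller.

    Double-fork so the child can never reacquire a controlling
    terminal, and redirect stdio to /dev/null so callers reading
    our inherited pipes (e.g. YaST on xm's stdout/stderr) see EOF.
    """
    pid = os.fork()
    if pid:
        # Parent: reap the short-lived intermediate child and return.
        os.waitpid(pid, 0)
        return
    os.setsid()          # new session, detach from controlling terminal
    if os.fork():
        os._exit(0)      # intermediate child exits; grandchild survives
    # Grandchild: drop the inherited stdio before exec'ing.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.execvp(cmd[0], cmd)
```

With this pattern, anything holding the spawning process's
stdout/stderr pipes open sees them closed as soon as the grandchild
redirects its descriptors, so the original "wait forever" hang cannot
occur.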

We've been running with a variant of this patch (a variant because we
use tightvnc, which requires different arguments) for many months, and
it works well.

Please consider applying to xen-unstable.


Signed-off-by: Charles Coffing <ccoffing@xxxxxxxxxx>

Attachment: xen-daemonize-vncviewer.diff
Description: Binary data
