xen-devel

To: Luke S Crawford <lsc@xxxxxxxxx>
Subject: Re: Distro kernel and 'virtualization server' vs. 'server that sometimes runs virtual instances' rant (was: Re: [Xen-devel] Re: [GIT PULL] Xen APIC hooks (with io_apic_ops))
From: Tim Post <echo@xxxxxxxxxxxx>
Date: Mon, 01 Jun 2009 00:44:49 +0800
Cc: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 31 May 2009 09:47:06 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <m3skimz4lt.fsf@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <f4e0b3ad-574b-49b4-b4fa-0d19c3394ce2@default> <m3skimz4lt.fsf@xxxxxxxxxxxxxxxxxx>
Reply-to: echo@xxxxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Sat, 2009-05-30 at 17:02 -0400, Luke S Crawford wrote:

> I keep saying, Pagecache is not idle ram.   Pagecache is essential to the
> perception of acceptable system performance.  I've tried selling service
> (on 10K fibre disk, no less) with shared pagecache, and by all reasonable
> standards, performance was unacceptable.

I've never seen automatic overcommitment work out in a way that made
everyone happy in the hosting industry. You are 100% correct: by default,
Linux is like Pac-Man, gobbling up blocks for cache.

However, this is partly because even most well-written services and
applications neglect to advise the kernel to do anything different.
posix_madvise() and posix_fadvise() do not see the light of day nearly
as often as they should. Are you parsing some m4-generated configuration
file that's just under or just north of the system page size? Then you'd
want to tell the kernel "Hey, I only need this once..." before you even
call read(). Yet I see people going hog wild with O_DIRECT because they
think it's supposed to make things faster.
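For what it's worth, this is roughly the kind of hint I mean. A minimal
sketch, not lifted from any real service: the file path is made up, and
the advice is exactly that, advice, which the kernel is free to ignore
(Linux in particular treats some hints as no-ops):

/* Read a small config file once, and ask the kernel not to keep it
 * cached afterwards. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buf[8192];
    int fd = open("/etc/example.conf", O_RDONLY);   /* hypothetical path */
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Tell the kernel we'll read sequentially and only need it once. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;   /* ... parse the configuration here ... */

    /* Done with it: let the kernel drop the cached pages. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return 0;
}

The same idea applies to posix_madvise() for mmap()ed data; the point is
simply to tell the kernel what you actually intend to do with the pages.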

On enterprise systems (i.e. not hosting web sites and databases that are
created by others and uploaded), this is less of a hassle and a bit
easier to manage. You _know_ better than to make 1500 static HTML pages,
360K each, and put them where Google can access them. You _know_ better
than to mix services that allocate 20x more than they actually need on
the same host. You're able to adjust your swappiness on a whole group of
domains instantly from a central place (see the sketch just below).
Finally, you're able to patch your services so they better suit your
goals.
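The per-host part of that is trivial; pushing it out to the whole group
is whatever central tooling you already have, which I'm not showing here.
A minimal sketch (the value 10 is only an illustration, and you need root
for the write to succeed):

/* Set vm.swappiness on the local host by writing /proc directly. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "w");
    if (!f) {
        perror("fopen /proc/sys/vm/swappiness");
        return 1;
    }
    fprintf(f, "10\n");   /* prefer keeping application pages over swapping */
    fclose(f);
    return 0;
}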

What Dan is describing is very useful, but not to IaaS providers. Like I
said before, I would not flip a switch to AUTO on any server that is
providing the use of a VM to a customer. However, customers do get
e-mails saying "You bought 1 GB, but on average this month you've used
only xxx; you may wish to switch to a cheaper plan" (detailed averages
sampled through /proc and sysinfo()). Sound nuts? It actually makes us
more money, because our density per server goes up quite a bit.
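The sampling side of that is nothing exotic. A minimal sketch of one
sample via sysinfo() (the monthly averaging and the e-mail generation are
left out, and the MiB formatting is only for illustration):

/* Take one memory-usage sample from sysinfo(). */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }

    unsigned long long total = (unsigned long long)si.totalram * si.mem_unit;
    unsigned long long avail = (unsigned long long)si.freeram  * si.mem_unit;

    printf("total: %llu MiB, free: %llu MiB, used: %llu MiB\n",
           total >> 20, avail >> 20, (total - avail) >> 20);
    return 0;
}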

So in a large way, I think Dan is correct. If a client bought the use of
memory and barely uses it, I'd rather give them a discount for giving
some back, enabling me to set up another domain on that node. But don't
get me wrong, I'd never dream of doing that 'automagically' :)

Cheers,
--Tim



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
