xen-devel


To: Luke S Crawford <lsc@xxxxxxxxx>
Subject: RE: Distro kernel and 'virtualization server' vs. 'server that sometimes runs virtual instances' rant (was: Re: [Xen-devel] Re: [GIT PULL] Xen APIC hooks (with io_apic_ops))
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Mon, 1 Jun 2009 11:04:13 -0700 (PDT)
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, echo@xxxxxxxxxxxx
Delivery-date: Mon, 01 Jun 2009 11:05:13 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <m3skimz4lt.fsf@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Not to beat this to death, but one more comment:

> wait what?  the difference is if you aren't using the CPU, I can take
> it away, and then give it back to you when you want it almost
> immediately, with a small cost (of flushing the cpu cache, but that is
> fast enough that while it's a big deal for scientific type
> applications, it doesn't really make the perceived responsiveness of
> the box worse, unless you do it a bunch of times in a small period of
> time.)
> 
> Ram is different.  If I take away your pagecache, either I save it to
> disk (slow) and restore it (slow) when I return it, or I take it from
> you without saving to disk, and return clean pages when you want it
> back, meaning if you want that data you've got to re-read it from
> disk. (slow)

You are technically correct, but I'm not talking about taking
away ALL of the pagecache.  Pagecache is a guess as to which
pages might be used again in the future.  A large percentage of
those guesses are wrong: the page will never be used again and
will eventually be evicted anyway.  This is what I call "idle
memory", but I love the way Tim Post put it: "Linux is like pac
man gobbling up blocks for cache."
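
If you want to see how much a guest has "gobbled up" at any moment,
the Cached: line in /proc/meminfo is a rough proxy (a quick userspace
C sketch, nothing kernel-specific; note that Cached overstates truly
idle memory, since some of that cache will be re-used):

/* Print how much of the guest's RAM is page cache right now.
 * Cached: is only an upper bound on "idle memory" -- some of that
 * cache will be re-used, much of it never will be. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long kb;

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Cached: %ld kB", &kb) == 1)
            printf("page cache: %ld kB\n", kb);
    fclose(f);
    return 0;
}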

The right long-term answer is for Linux, and OSes in general, to
get smarter about giving up memory that they know is not going to
be used again, but even if they get smarter, they will never be
omniscient.

So self-ballooning creates pressure on the page cache, making
the OS evict the pages that it's not so sure about.  Then tmem acts
as a backup for those pages; if the OS was wrong and the page
is needed again (soon), it can get it right back without a disk
read.
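
Roughly, the contract I have in mind looks like this (just a toy
userspace sketch of the ephemeral-pool semantics, NOT the real tmem
hypercall interface; tmem_put/tmem_get and the tiny fixed-size pool
below are made up purely for illustration):

/* Toy model of "tmem as a backup for evicted clean pagecache pages".
 * A put may be silently dropped and a get may miss, so the guest never
 * depends on tmem for correctness, only for avoiding some disk reads. */
#include <stdio.h>
#include <string.h>

#define POOL_SLOTS 4      /* pretend the hypervisor has a little spare RAM */
#define PAGE_SIZE  16     /* toy "pages" */

struct slot { long key; char data[PAGE_SIZE]; int used; };
static struct slot pool[POOL_SLOTS];

/* Guest evicts a clean page under balloon pressure: offer it to tmem.
 * The hypervisor MAY keep it, or it may drop it now or later. */
static void tmem_put(long key, const char *page)
{
    for (int i = 0; i < POOL_SLOTS; i++)
        if (!pool[i].used) {
            pool[i].key = key;
            memcpy(pool[i].data, page, PAGE_SIZE);
            pool[i].used = 1;
            return;
        }
    /* no free slot: the put is simply dropped; the guest loses nothing */
}

/* On a pagecache miss, ask tmem before going to disk. */
static int tmem_get(long key, char *page)
{
    for (int i = 0; i < POOL_SLOTS; i++)
        if (pool[i].used && pool[i].key == key) {
            memcpy(page, pool[i].data, PAGE_SIZE);
            pool[i].used = 0;  /* ephemeral: the page is consumed by the get */
            return 1;          /* hit: no disk read needed */
        }
    return 0;                  /* miss: fall back to a (slow) disk read */
}

int main(void)
{
    char page[PAGE_SIZE] = "block #42 data";   /* a clean pagecache page */
    char back[PAGE_SIZE];

    tmem_put(42, page);        /* evicted under self-ballooning pressure */

    if (tmem_get(42, back))    /* the guess was wrong, the data is wanted */
        printf("refault satisfied from tmem: %s\n", back);
    else
        printf("tmem dropped it; re-read block 42 from disk (slow)\n");
    return 0;
}

The key point is that the hypervisor can reclaim those pages whenever
it needs the memory elsewhere, so the guest gets the benefit of the
doubt on its "not so sure" pages without pinning host RAM.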

Clearly this won't help users who leave their VM idle for three
months and then expect instantaneous response, but that's what
I meant by your memory partitioning helping only a few users.

Does that make sense?  Is it at least a step in the right direction?

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
