[Xen-devel] Re: [GIT PULL] xen /proc/mtrr implementation

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: [GIT PULL] xen /proc/mtrr implementation
From: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
Date: Fri, 15 May 2009 16:26:11 -0700
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>
Delivery-date: Mon, 18 May 2009 07:33:00 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A0DCC11.10307@xxxxxxxx> (Jeremy Fitzhardinge's message of "Fri, 15 May 2009 13:09:53 -0700")
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1242170864-13560-1-git-send-email-jeremy@xxxxxxxx> <20090513133021.GA7277@xxxxxxx> <4A0ADBA2.2020300@xxxxxxxx> <20090515182757.GA19256@xxxxxxx> <4A0DCC11.10307@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.2 (gnu/linux)

Jeremy Fitzhardinge <jeremy@xxxxxxxx> writes:

> Ingo Molnar wrote:
>> Right now there's no MTRR support under Xen guests and the Xen hypervisor was
>> able to survive, right? Why should we do it under dom0?
>>   
>
> Because dom0 has direct hardware access, and is running real device drivers.
> domU guests don't see physical memory, and so MTRR has no relevance for them.


>> The MTRR code is extremely fragile, we dont really need an added layer
>> there. _Especially_ since /proc/mtrr is an obsolete API.
>>   
>
> There's no added layer there.  I'm just adding another implementation of
> mtrr_ops.
>
> /proc/mtrr is in wide use today.  It may be planned for obsolescence, but
> there's no way you can claim it's obsolete today (my completely up-to-date
> F10 X server is using it, for example).  We don't break oldish usermode
> ABIs in new kernels.

Sure it is.  There is a newer, better replacement.  It is taking a while to
get userspace transitioned, but that is a different matter.  Honestly I am
puzzled why that is, but whatever.
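
For context, /proc/mtrr is the text interface documented in
Documentation/mtrr.txt: a client writes one "base=... size=... type=..."
line per region.  A minimal sketch of what a userspace client such as an X
server does to request write-combining over a framebuffer aperture (the
base and size values are illustrative only, not from this thread):

    /* Sketch: request a write-combining MTRR over a framebuffer
     * aperture through the /proc/mtrr text interface.  The
     * address and size below are illustrative, not real. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("/proc/mtrr", "w");

        if (!f) {
            perror("fopen /proc/mtrr");
            return EXIT_FAILURE;
        }
        /* The kernel parses this line and picks a free register. */
        fprintf(f, "base=0xf8000000 size=0x400000 type=write-combining\n");
        if (fclose(f) != 0) {
            perror("write /proc/mtrr");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }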

> Besides, the MTRR code is also a kernel-internal API, used by DRM and other
> drivers to configure the system MTRR state.  Those drivers will either
> perform badly or outright fail if they can't set the appropriate
> cachability properties.  That is not obsolete in any way.

There are about five of them, so let's fix them.  With PAT we are in a much
better position, both for portability and for flexibility.
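
To make the two sides concrete, here is a sketch of the classic in-kernel
pattern Jeremy describes next to the PAT-based replacement: with MTRR a
driver consumes one of the handful of global registers, while with PAT the
write-combining attribute belongs to the mapping itself.  Function and
variable names are illustrative, not taken from any driver in this thread:

    /* Classic pattern: burn a global MTRR register to make a
     * device aperture write-combining, then map it uncached. */
    #include <linux/errno.h>
    #include <linux/io.h>
    #include <asm/mtrr.h>

    static void __iomem *fb;
    static int wc_reg = -1;

    static int example_map_mtrr(unsigned long base, unsigned long size)
    {
        /* A negative return means no free register or a bad range. */
        wc_reg = mtrr_add(base, size, MTRR_TYPE_WRCOMB, true);
        fb = ioremap_nocache(base, size);
        return fb ? 0 : -ENOMEM;
    }

    /* PAT-era replacement: request a write-combining mapping
     * directly.  No scarce, system-wide register is consumed,
     * and the attribute is per-mapping rather than global. */
    static int example_map_pat(unsigned long base, unsigned long size)
    {
        fb = ioremap_wc(base, size);
        return fb ? 0 : -ENOMEM;
    }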

>> If you want to allow a guest to do MTRR ops, you can do it by catching the
>> native kernel accesses to the MTRR space. There's no guest side support 
>> needed
>> for that.
>>   
>
> MTRR can't be virtualized like that.  It can't be meaningfully
> multiplexed, and must be set in a uniform way on all physical CPUs.
> Guests run on virtual CPUs, and don't have any knowledge of what the
> mapping of VCPU to PCPU is, or even any visibility of all PCPUs.
>
> It is not a piece of per-guest state; it is a system-wide property,
> maintained by Xen.  These patches add the mechanism for dom0 (= the
> hardware control domain) to tell Xen what state they should be in.
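
The mtrr_ops backend Jeremy mentions earlier boils down to forwarding each
update from dom0 to the hypervisor, which validates it and reprograms every
physical CPU.  The hunk below is not from the pull request; it is a minimal
sketch assuming the XENPF_add_memtype platform op from
xen/interface/platform.h, with the hypercall wrapper name as found in later
trees:

    /* Sketch only: forward an MTRR range addition from dom0 to
     * Xen.  XENPF_add_memtype and the struct layout come from
     * xen/interface/platform.h; the wrapper name and the lack of
     * error handling are assumptions, not the actual patch. */
    #include <xen/interface/platform.h>
    #include <asm/xen/hypercall.h>

    static int xen_mtrr_add_range(unsigned long mfn,
                                  unsigned long nr_mfns,
                                  unsigned int type)
    {
        struct xen_platform_op op = {
            .cmd = XENPF_add_memtype,
            .interface_version = XENPF_INTERFACE_VERSION,
            .u.add_memtype = {
                .mfn     = mfn,      /* machine frames, not pseudo-phys */
                .nr_mfns = nr_mfns,
                .type    = type,     /* MTRR_TYPE_* encoding */
            },
        };

        /* Xen applies the change uniformly on all physical CPUs,
         * which is exactly what a guest kernel cannot do itself. */
        return HYPERVISOR_platform_op(&op);
    }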

Is it possible to fix PAT and get that working first?  That is very
definitely the preferred API.

Eric
