
To: "Keir Fraser" <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 0/5] x86: properly propagate errors to hypercall callee
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Fri, 11 Mar 2011 10:44:18 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <C99F9FA3.14983%keir.xen@xxxxxxxxx>
References: <4D79F89A0200007800035C4D@xxxxxxxxxxxxxxxxxx> <C99F9FA3.14983%keir.xen@xxxxxxxxx>
>>> On 11.03.11 at 10:45, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 11/03/2011 09:25, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> 
>>>>> On 09.03.11 at 12:07, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>>> It seems unfortunate to propagate this to guests. Perhaps we should be
>>> making a memory pool for Xen's 1:1 mappings, big enough to allow a 4kB
>>> mapping of every page of RAM in the system, and allocate/free pagetables to
>>> that pool? The overhead of this would be no more than 0.2% of system memory,
>>> which seems reasonable to avoid an error case that is surely hard for a
>>> guest to react to or fix.
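(Sanity-checking that figure, assuming 4kB pages and 512 eight-byte
entries per L1 pagetable page: one pagetable page maps 512 * 4kB = 2MB
of RAM, so a pool able to map all of RAM at 4kB granularity costs at
most 4kB per 2MB, i.e. roughly 0.2%.)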
>> 
>> Before starting to look into eventual Linux side changes - do you
>> then have plans to go that pool route (which would make guest
>> side recovery attempts pointless)?
> 
> Not really. I was thinking about having a Linux-style mempool for making
> allocations more likely to succeed, but it's all a bit ugly really. It'll be
> interesting to see what you can do Linux-side, and whether it can pass
> muster for the Linux maintainers. You might at least be able to make the io
> remappings from device drivers failable (and maybe they are already).

ioremap() in general can fail, but a failure to write the page
table entries gets propagated to the caller only on the legacy
kernels iirc (due to the pv-ops accessors lacking a return
value).
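
For reference, a trimmed sketch of those pv-ops hooks (simplified
from arch/x86/include/asm/paravirt_types.h as of 2.6.3x) - all of
the pte writers are declared void, so an error from an underlying
hypercall has nowhere to go:

  /* Simplified excerpt: every pte accessor returns void, hence a
   * failing hypercall in the Xen implementation cannot be reported
   * back to the generic code calling set_pte_at() etc. */
  struct pv_mmu_ops {
      void (*set_pte)(pte_t *ptep, pte_t pteval);
      void (*set_pte_at)(struct mm_struct *mm, unsigned long addr,
                         pte_t *ptep, pte_t pteval);
      void (*set_pmd)(pmd_t *pmdp, pmd_t pmdval);
      /* ... */
  };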

The problem at hand, however, is with the vm_insert_...()
functions, which use set_pte_at(), which again has no return
value, so it will have to be the accessors themselves that

(a) never use the writeable page tables feature on any path
that can alter cache attributes, and

(b) handle -ENOMEM from HYPERVISOR_update_va_mapping()
and HYPERVISOR_mmu_update() (without knowing much about
the context they're being called in).
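
To make (b) concrete, a hypothetical sketch of the shape such an
accessor could take (the name xen_set_pte_at_checked() is invented
for illustration; the hypercall, its signature, and the -ENOMEM
return are real):

  /* Hypothetical sketch, not existing kernel code: the Xen backend
   * of set_pte_at() would have to check the hypercall result itself,
   * since the void pv-ops signature cannot pass -ENOMEM back up to
   * vm_insert_...() and its callers. */
  static void xen_set_pte_at_checked(struct mm_struct *mm, unsigned long addr,
                                     pte_t *ptep, pte_t pteval)
  {
      /* Valid for the current address space only; a foreign mm
       * would need HYPERVISOR_mmu_update() instead. */
      int rc = HYPERVISOR_update_va_mapping(addr, pteval, UVMF_INVLPG);

      if (rc == -ENOMEM) {
          /* Without knowing the calling context there is no
           * obviously correct recovery: retrying, reclaiming, or
           * failing the operation are all context dependent. */
          BUG();  /* placeholder for whatever policy gets chosen */
      }
  }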

Jan

