To: Alex Williamson <alex.williamson@xxxxxx>
Subject: [PATCH] xen/ia64 memory_exchange work around (was Re: [Xen-ia64-devel] steal_page(MEMF_no_refcount) page->count_info no longer consistent)
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Mon, 5 Mar 2007 12:35:26 +0900
Cc: xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 04 Mar 2007 19:34:43 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20070301102958.GH16354%yamahata@xxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1172706927.5703.36.camel@lappy> <20070301025927.GB16354%yamahata@xxxxxxxxxxxxx> <20070301102958.GH16354%yamahata@xxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
Here is the workaround patch. I was able to boot dom0 with it.
The right fix is to modify memory_exchange() to be aware of the
ia64 page reference count convention, and to convince the x86 developers.
Since that may take a while, I'm sending this workaround patch instead.
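
To make the direction of the "right fix" concrete, here is a rough sketch of
what the error path would have to do, assuming Xen's struct page_info and the
PGC_count_mask definition. It is not the attached patch and not the actual
common-code change, only an illustration of the idea:

  /* Rough sketch only: on an exchange failure, restore the reference
   * count that the ia64 steal_page() check expects instead of leaving
   * the page at (PGC_allocated | 1). */
  static void exchange_error_fixup_ia64(struct page_info *page)
  {
      /* Keep the flag bits, force the count back to the two references
       * the ia64 convention (as described in this thread) expects. */
      page->count_info = (page->count_info & ~PGC_count_mask) | 2;
  }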


On Thu, Mar 01, 2007 at 07:29:58PM +0900, Isaku Yamahata wrote:
> On Thu, Mar 01, 2007 at 11:59:27AM +0900, Isaku Yamahata wrote:
> > On Wed, Feb 28, 2007 at 04:55:27PM -0700, Alex Williamson wrote:
> > 
> > >    Current xen-unstable.hg tip has a problem booting dom0 that I'd like
> > > your opinion on.  We're failing the check in steal_page() that ensures
> > > that count_info is 2 when called with the MEMF_no_refcount flag.  In
> > > fact, it seems that steal_page() is now getting called for pages that
> > > have a count_info of 1 or 2.  Are we being overly paranoid with this
> > > check, or is this an indication of a deeper problem?  The change seems
> > > to have been introduced by the recent memory allocator changes which
> > > removed the bit width restrictions.  Thanks,
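
For reference, the check Alex describes has roughly this shape. This is a
sketch only, assuming Xen's struct page_info, PGC_count_mask and
MEMF_no_refcount definitions; it is not the actual xen-ia64 source:

  /* Illustrative only: the MEMF_no_refcount path expects the page to
   * carry exactly two references under the ia64 convention, so a
   * count of 1 trips the check and dom0 fails to boot. */
  static int steal_page_refcount_check(struct page_info *page,
                                       unsigned int memflags)
  {
      if (memflags & MEMF_no_refcount) {
          if ((page->count_info & PGC_count_mask) != 2)
              return -1;   /* the check that is now failing */
      }
      return 0;
  }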
> > 
> > From a coarse code check, I couldn't find a case where count_info = 1
> > with MEMF_no_refcount can occur.
> > The reference count semantics seem to have changed in a subtle way.
> > I'll try to reproduce it and take a deeper look.
> 
> I found the root cause.
> XENMEM_exchange (memory_exchange()) has in fact been broken on ia64.
> In particular, when memory_exchange() fails, page->count_info
> is left in a broken state (= PGC_allocated | 1).
> The next time the XENMEM_exchange hypercall is made, the message is printed.
> We probably have to revise memory_exchange() in the common code and
> convince the x86 developers.
> 
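The two states involved can be spelled out with a small self-contained model
(illustrative flag values only -- Xen's real PGC_* bit layout differs):

  #include <stdio.h>

  /* Illustrative bit positions, not Xen's actual definitions. */
  #define PGC_allocated   (1UL << 31)
  #define PGC_count_mask  ((1UL << 31) - 1)

  int main(void)
  {
      /* State the ia64 steal_page() check expects: allocated + 2 refs. */
      unsigned long expected = PGC_allocated | 2;
      /* State left behind by a failed memory_exchange() on ia64. */
      unsigned long broken   = PGC_allocated | 1;

      printf("expected refs: %lu\n", expected & PGC_count_mask); /* 2 */
      printf("broken refs:   %lu\n", broken   & PGC_count_mask); /* 1 */
      /* The next XENMEM_exchange call then finds a count of 1 and the
       * steal_page() message Alex reported is printed. */
      return 0;
  }
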
> So a temporary workaround may be necessary.
> The trigger is c/s 13366:ed73ff8440d8 in xen-unstable.hg:
> swiotlb_init_with_default_size() was changed to pass
> DMA address bits, which causes XENMEM_exchange to fail.
> The easy temporary workaround is to modify swiotlb_init_with_default_size().
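
For completeness, the shape of that easy workaround might be something like
the following. This is a hypothetical sketch only: swiotlb_init_with_default_size()
is a real function, but alloc_swiotlb_buffer() and IO_TLB_DMA_BITS are invented
names used purely for illustration, and the real dom0 change may look quite
different:

  /* Hypothetical sketch of the "easy" workaround idea above; the helper
   * and constant names are made up for illustration. */
  void swiotlb_init_with_default_size(unsigned long default_size)
  {
  #ifdef __ia64__
      /* Do not request a restricted DMA address width on ia64, so the
       * bounce-buffer allocation never needs XENMEM_exchange and never
       * hits the broken failure path (assumption: 0 = no restriction). */
      unsigned int address_bits = 0;
  #else
      unsigned int address_bits = IO_TLB_DMA_BITS;
  #endif
      alloc_swiotlb_buffer(default_size, address_bits);
  }
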
> -- 
> yamahata
> 

-- 
yamahata

Attachment: 14175_daca4d71d826_xen_ia64_xenmem_exchange_bug_work_around.patch
Description: Text Data

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel