WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] Fix memory corruption in pygrub/fsimage python binding
From: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Date: Tue, 30 Jan 2007 18:24:19 +0000
Delivery-date: Tue, 30 Jan 2007 10:24:00 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20070130173810.GG18642@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070130173810.GG18642@xxxxxxxxxx>
Reply-to: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Tue, Jan 30, 2007 at 05:38:10PM +0000, Daniel P. Berrange wrote:
> In updating Fedora 7 to use Xen 3.0.4 we encountered a problem with the
> use of pygrub - it would trigger a memory corruption report from glibc's
> free() routine, which had been handed an invalid pointer. The pygrub process
> is thus terminated with extreme prejudice by glibc with SIGABRT.
> 
> After a little painful memory debugging in python I discovered that the
> fsimage python binding is mistakenly using PyMem_DEL instead of PyObject_DEL
> to deallocate its objects.
> 
> PyMem_DEL simply ends up as a #define to free(). The memory associated with
> Python objects is not necessarily allocated by malloc(), so calling free()
> on it is bogus. Python keeps an internal memory pool from which it allocates
> objects, so upon deallocation memory needs to be returned to this pool
> rather than freed.
> 
> As for why no one has hit this before, I can only assume it is showing up
> now because of ever stricter glibc memory checking, internal changes in
> Python 2.5 memory handling, or a combination of both, plus a little good/bad luck.

Turns out this is a change in Python 2.5:

http://docs.python.org/whatsnew/ports.html

"Note that this change means extension modules must be more careful when 
 allocating memory. Python's API has many different functions for allocating 
 memory that are grouped into families. For example, PyMem_Malloc(), 
 PyMem_Realloc(), and PyMem_Free() are one family that allocates raw memory, 
 while PyObject_Malloc(), PyObject_Realloc(), and PyObject_Free() are another 
 family that's supposed to be used for creating Python objects.

 Previously these different families all reduced to the platform's malloc() 
 and free() functions. This meant it didn't matter if you got things wrong 
 and allocated memory with the PyMem function but freed it with the PyObject 
 function. With 2.5's changes to obmalloc, these families now do different 
 things and mismatches will probably result in a segfault. You should 
 carefully test your C extension modules with Python 2.5. "

I checked the rest of the Python bindings in tools/ and didn't find any other
places where we'd obviously hit this problem.
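To make the failure mode concrete, here is a small self-contained C sketch. It is a toy fixed-size pool allocator, not Python's actual obmalloc, and none of these names come from the Xen or Python sources; it only illustrates the general principle that memory must be released through the same allocator family that issued it.

```c
#include <stddef.h>
#include <stdlib.h>

/* Toy fixed-size pool allocator. NOT Python's obmalloc - just a
 * simplified stand-in for "an allocator family other than malloc/free". */
#define POOL_SLOTS 8
#define SLOT_SIZE  32

static char pool[POOL_SLOTS][SLOT_SIZE];
static int  slot_used[POOL_SLOTS];

/* Hand out one slot from the pool, loosely analogous to an object
 * being carved out of Python's internal object pools. */
static void *pool_alloc(void) {
    for (int i = 0; i < POOL_SLOTS; i++) {
        if (!slot_used[i]) {
            slot_used[i] = 1;
            return pool[i];
        }
    }
    return NULL; /* pool exhausted */
}

/* Return a slot to the pool. Returns -1 if the pointer did not come
 * from this pool - the analogue of handing pool-owned memory to
 * free(), i.e. the PyMem_DEL / PyObject_DEL mismatch. */
static int pool_free(void *p) {
    for (int i = 0; i < POOL_SLOTS; i++) {
        if (p == (void *)pool[i]) {
            slot_used[i] = 0;
            return 0;
        }
    }
    return -1; /* wrong allocator family */
}
```

Before 2.5 both Python allocator families collapsed to malloc()/free(), so a mismatch was harmless; once they genuinely differ (as pool_free() and free() do here), the mismatch corrupts state instead.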

Regards,
Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
