
Re: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0: #a3e7c7...



That version of alloc_xenheap_pages is not built for x86_64.

 K.

On 02/06/2010 11:23, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:

> But alloc_xenheap_pages() does unguard the page again; shouldn't that be sufficient?
> 
> --jyh
> 
>> -----Original Message-----
>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>> Sent: Wednesday, June 02, 2010 5:41 PM
>> To: Jiang, Yunhong; Xu, Jiajun; xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: Re: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0:
>> #a3e7c7...
>> 
>> On 02/06/2010 10:24, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>> 
>>> (XEN) Pagetable walk from ffff83022fe1d000:
>>> (XEN)  L4[0x106] = 00000000cfc8d027 5555555555555555
>>> (XEN)  L3[0x008] = 00000000cfef9063 5555555555555555
>>> (XEN)  L2[0x17f] = 000000022ff2a063 5555555555555555
>>> (XEN)  L1[0x01d] = 000000022fe1d262 5555555555555555
>>> 
>>> I really can't imagine how this can happen, considering vmx_alloc_vmcs()
>>> is so straightforward. My test machine is really magic.
>> 
>> Not at all. The free-memory pool was getting spiked with guarded (mapped
>> not-present) pages; the next unlucky caller to allocate and touch one of
>> them is the one that crashes.
>> 
>> I've just fixed this as xen-unstable:21504. The bug was a silly typo.
>> 
>> Thanks,
>> Keir
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

