
[Xen-devel] memory question



Hello,

I'm seeing some strange effects using Xen 3.2.1-rc5 with Linux 2.6.21.7
(compiled from Fedora Core 8 sources).

The servers I'm using for testing all have an Intel Core 2 Quad
processor; one has 4GB of RAM, the others 2GB. The one with more RAM
also has a bigger disk (750GB); apart from that they're identical.

on the 4GB machine:

        (XEN) System RAM: 4083MB (4181724kB)

        # grep Vmalloc /proc/meminfo 
        VmallocTotal: 34359738367 kB
        VmallocUsed:       628 kB
        VmallocChunk: 34359737719 kB

on the 2 GB machine:

        (XEN) System RAM: 2020MB (2069252kB)

        # grep Vmalloc /proc/meminfo 
        VmallocTotal: 34359738367 kB
        VmallocUsed:       868 kB
        VmallocChunk: 34359737463 kB

If I boot dom0 with its memory limited to 64M:
        -> the 2GB server boots up and works fine
        -> the 4GB server panics while booting: "Out of low memory"

Limiting to 128M:
        -> the 2GB server boots up and works fine
        -> the 4GB server boots, but I get many segfaults (even more
                when I run mkfs.ext3 on a large partition):
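
For reference, the dom0 memory cap above is set via the dom0_mem option
on the Xen hypervisor line in GRUB. A sketch of the menu.lst entry used
(kernel and initrd paths are assumptions, adjust to your install):

```
title  Xen 3.2.1-rc5, Linux 2.6.21.7 dom0
root   (hd0,0)
kernel /xen.gz dom0_mem=128M
module /vmlinuz-2.6.21.7 ro root=/dev/sda1
module /initrd-2.6.21.7.img
```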

10:44:23: python[2649]: segfault at 00000000005f8920 rip 00000000005f8920 rsp 00000000414007c0 error 15
10:46:31: python[4518]: segfault at 00000000006781f0 rip 00000000006781f0 rsp 00007fffaf0460c0 error 15
10:50:01: grep[4567]: segfault at ffffffff998b94d8 rip 00002aaaaacd81f2 rsp 00007fff3d330840 error 4
10:51:54: python[4572]: segfault at 0000000000000008 rip 0000000000000008 rsp 00007fff2dbc0a28 error 14
10:52:27: rtm[4586]: segfault at 0000000000000000 rip 00002aaaab205890 rsp 00007ffffa453108 error 6
11:06:33: rtm[4970]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fffae93bc40 error 14
11:07:15: rtm[4976]: segfault at 00000000000000f0 rip 00002aaaaac4b3ed rsp 00007fffad391da0 error 4
11:09:13: rtm[4994]: segfault at 0000000000000008 rip 00002aaaaac5b237 rsp 00007fff23134410 error 4
11:10:14: rtm[4982]: segfault at 000000000000001f rip 00002aaaaac29d73 rsp 00007fff556cde38 error 4
11:10:17: cron[2709]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fff9d32d5f0 error 14
11:10:31: rtm[4956]: segfault at 0000000000000001 rip 00002aaaaac5bc15 rsp 00007fffd8b9c2a0 error 4
11:10:57: ls[5027]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fffbfc0a050 error 14
11:10:59: ls[5025]: segfault at 0000000000000000 rip 00002aaaaaaaf8cb rsp 00007fffb447e700 error 4
11:11:10: rtm[4953]: segfault at 0000000000000000 rip 00002aaaaac50070 rsp 00007fff3c5faf10 error 4
11:18:42: fsck.ext3[2361]: segfault at 00002aaaad838010 rip 00002aaaad838010 rsp 00007fff60d5d500 error 15
11:20:05: init[1]: segfault at 00007fffd41f22d8 rip 00007fffd41f22d8 rsp 00007fffd41f22b0 error 15
11:20:37: init[1]: segfault at 00007fffd41f22d8 rip 00007fffd41f22d8 rsp 00007fffd41f22b0 error 15
11:21:50: raid.pl[3141]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007ffffd0b5120 error 4
11:21:52: usage.pl[3143]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fff54997140 error 4
11:21:52: smart.pl[3140]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fff549982b0 error 4
11:21:52: hddinfo.pl[3145]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fffa794b070 error 4
11:21:52: rtm[3139]: segfault at 00000000007149d0 rip 00000000007149d0 rsp 00007fff277a6930 error 15
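
The "error" field in these lines is the raw x86 page-fault error code,
whose bits are defined by the architecture (bit 0 = page present, bit 1
= write, bit 2 = user mode, bit 3 = reserved bit set in a paging
structure, bit 4 = instruction fetch). A small decoder, as a sketch:

```shell
#!/bin/sh
# Decode the x86 page-fault error code printed in the kernel's
# segfault messages. Bit meanings per the Intel SDM:
#   bit 0: 0 = not-present page, 1 = protection violation
#   bit 1: 0 = read, 1 = write
#   bit 2: 1 = fault occurred in user mode
#   bit 3: 1 = reserved bit set in a page-table entry
#   bit 4: 1 = instruction fetch
decode_pf() {
    err=$1
    out="error $err:"
    [ $((err & 1)) -ne 0 ] && out="$out protection-violation" || out="$out not-present"
    [ $((err & 2)) -ne 0 ] && out="$out write" || out="$out read"
    [ $((err & 4)) -ne 0 ] && out="$out user-mode" || out="$out kernel-mode"
    [ $((err & 8)) -ne 0 ] && out="$out reserved-bit-set"
    [ $((err & 16)) -ne 0 ] && out="$out instruction-fetch"
    echo "$out"
}

decode_pf 4    # read of a not-present page from user mode
decode_pf 15   # reserved bit set -> a corrupt page-table entry
```

Notably, the error 14/15 entries have bit 3 set: a reserved-bit fault
normally points at corrupted page tables rather than an ordinary
application bug, which is what makes me suspect the hypervisor/kernel
side rather than userspace.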

What can I do to track down the source of this behaviour? Is it Linux,
or is it Xen?

With best regards,
Felix Krohn
-- 
Felix Krohn                        / After silence, that which comes   ]
|-> smtp, xmpp: felix@xxxxxx      / nearest to expressing the inexpres-]
|-> gpg: 0x1C246E3B              / sible is music.       [Aldous Huxley]
|-> https://kro.hn              / http://www.flickr.com/kro_royal      ]

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
