
Re: [Xen-users] strange xm dmesg output

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] strange xm dmesg output
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Thu, 21 Aug 2008 19:32:10 +0100
Cc: Marco Strullato <marco.strullato@xxxxxxxxx>
Delivery-date: Thu, 21 Aug 2008 11:32:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <b9f669850808210118y3fbb4c0ekfc8d7fec7d589c29@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <b9f669850808210118y3fbb4c0ekfc8d7fec7d589c29@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9

Hi there,


On Thursday 21 August 2008, Marco Strullato wrote:
> Hi, today I noticed this error: have you ever seen that?
> My xen is the original 3.2 on centos 5 64 bit built from source
>
> [root@hyp12 ~]# xm dmesg
> 7 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000eada8
> (XEN) printk: 439 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000eada8
> (XEN) printk: 4399 messages suppressed.

This is not abnormal; I've seen it plenty of times.

It happens when a guest OS tries to map memory-mapped I/O (MMIO) regions that 
it doesn't have access to.  It typically means that *something* in the guest 
doesn't realise it's running on Xen and is doing something silly, like 
repeatedly attempting to access an MMIO region that isn't available to it.

This only happens for PV domains since HVM domains get some fake MMIO space to 
keep them happy.
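
If you want to double-check that domain 3 really is a PV guest, something 
along these lines should tell you (just a rough sketch; the exact layout of 
the S-expression output varies a little between xm versions):

  # A PV guest has an (image (linux ...)) section in its config,
  # an HVM guest has (image (hvm ...)) instead.
  xm list --long 3 | grep -A 2 '(image'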

What's the guest running?  It should be possible in principle to figure out 
(e.g. through some detective work on your part) what is causing the messages 
and then just disable that.
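
For example, something like this (only a sketch; the domain names and what 
you actually find inside the guest will obviously differ on your system):

  # In dom0, see which guest has domain ID 3:
  xm list

  # Then, inside that guest, look for whatever is poking at raw memory.
  # Tools that read /dev/mem (dmidecode, hwclock, an X server and the
  # like) are worth checking first:
  lsof /dev/mem
  dmesg | tail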

It's harmless anyhow if the guest is otherwise running OK (which it should 
be), so as long as it's not bothering you (polluting your logs too much or 
pushing out useful information) you can safely ignore it.
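
If the noise ever does become a problem, you can at least flush the 
accumulated messages out of the hypervisor's buffer (again, just a sketch):

  # Read, then clear, Xen's message buffer:
  xm dmesg
  xm dmesg --clear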

Cheers,
Mark

> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000d97ca
> (XEN) printk: 131 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000eada8
> (XEN) printk: 4431 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000eada8
> (XEN) printk: 109 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000d97ca
> (XEN) printk: 4553 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000ccb48
> (XEN) printk: 1589 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000efc5a
> (XEN) printk: 4417 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000d97ca
> (XEN) printk: 1619 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000f9c98
> (XEN) printk: 4567 messages suppressed.
> (XEN) mm.c:645:d3 Non-privileged (3) attempt to map I/O space 000f9c98
>
>
> Regards,
>
> Marco Strullato
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users



-- 
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
