
Re: [Xen-devel] null domains after xl destroy



On 11/04/17 21:49, Dietmar Hahn wrote:
On Tuesday, 11 April 2017 at 20:03:14, Glenn Enright wrote:
On 11/04/17 17:59, Juergen Gross wrote:
On 11/04/17 07:25, Glenn Enright wrote:
Hi all

We are seeing an odd issue with domU domains after xl destroy: under
recent 4.9 kernels a (null) domain is left behind.

I guess this is the dom0 kernel version?

This has occurred on a variety of hardware, with no obvious commonality.

4.4.55 does not show this behavior.

On my test machine I have the following packages installed under
CentOS 6, from https://xen.crc.id.au/

~]# rpm -qa | grep xen
xen47-licenses-4.7.2-4.el6.x86_64
xen47-4.7.2-4.el6.x86_64
kernel-xen-4.9.21-1.el6xen.x86_64
xen47-ocaml-4.7.2-4.el6.x86_64
xen47-libs-4.7.2-4.el6.x86_64
xen47-libcacard-4.7.2-4.el6.x86_64
xen47-hypervisor-4.7.2-4.el6.x86_64
xen47-runtime-4.7.2-4.el6.x86_64
kernel-xen-firmware-4.9.21-1.el6xen.x86_64

I've also replicated the issue with 4.9.17 and 4.9.20.

To replicate, on a cleanly booted dom0 with one PV VM, I run the
following on the VM

{
while true; do
 dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
done
}
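
If a bounded run is helpful, the same load can be limited in time (a
sketch, assuming a bash shell; the 60-second limit and the file name
"test" are arbitrary choices, not from the original report):

{
end=$((SECONDS+60))
# loop the synchronous writer until the time limit is reached
while [ "$SECONDS" -lt "$end" ]; do
 dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
done
}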

Then on the dom0 I run this sequence to reliably get a null domain. This
occurs with both oxenstored and xenstored.

{
xl sync 1
xl destroy 1
}

xl list then renders something like ...

(null)                                       1     4     4     --p--d       9.8     0
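
For scripted testing, one can poll for the stale entry (a minimal
sketch; the 5-second interval is arbitrary):

{
# wait until no (null) domain is listed any more
while xl list | grep -q '(null)'; do
 echo 'null domain still present, waiting...'
 sleep 5
done
}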

Something is referencing the domain, e.g. some of its memory pages are
still mapped by dom0.
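
One place such a reference can show up is leftover backend entries in
xenstore. A quick check (a sketch, assuming the xenstore-ls utility is
in PATH; it prints the full paths of any backend nodes dom0 still holds
for the dead domid 1 from the example above):

# xenstore-ls -f /local/domain/0/backend | grep '/1/'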

You can try
# xl debug-keys q
and then
# xl dmesg
to see the output of the previous command. The 'q' key dumps domain
(and guest debug) info.
# xl debug-keys h
prints all possible keys for more information.
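
To make the dump easy to spot, the message buffer can be cleared first
(a sketch; 'xl dmesg -c' prints and then clears Xen's message buffer):

# xl dmesg -c > /dev/null
# xl debug-keys q
# xl dmesg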

Dietmar.


I've done this as requested; below is the output.

(XEN) 'q' pressed -> dumping domain info (now=0x92:D6C271CE)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=387072 xenheap_pages=5 shared_pages=0 paged_pages=0 dirty_cpus={0-1} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-cfb, d00-1007, 100c-ffff }
(XEN)     log-dirty  { }
(XEN)     Interrupts { 1-30 }
(XEN)     I/O Memory { 0-fedff, fef00-ffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 000000000020e9c5: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000020e9c4: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000020e9c3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000020e9c2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000e7d2e: caf=c000000000000002, taf=7400000000000002
(XEN) NODE affinity for domain 0: [0]
(XEN) VCPU information and callbacks for domain 0:
(XEN) VCPU0: CPU0 [has=T] poll=0 upcall_pend=01 upcall_mask=00 dirty_cpus={0}
(XEN)     cpu_hard_affinity={0} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) VCPU1: CPU1 [has=T] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpus={1}
(XEN)     cpu_hard_affinity={1} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) General information for domain 1:
(XEN)     refcnt=1 dying=2 pause_count=2
(XEN)     nr_pages=2114 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=1280256
(XEN)     handle=a481c2eb-31e3-4ae6-9809-290e746c8eec vm_assist=0000000d
(XEN) Rangesets belonging to domain 1:
(XEN)     I/O Ports  { }
(XEN)     log-dirty  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 1:
(XEN)     DomPage 0000000000071c00: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c01: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c02: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c03: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c04: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c05: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c06: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c07: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c08: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c09: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0a: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0b: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0c: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0d: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0e: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0f: caf=00000001, taf=7400000000000001
(XEN) NODE affinity for domain 1: [0]
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU0 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) VCPU1: CPU1 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) VCPU2: CPU2 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) VCPU3: CPU3 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 4)
(XEN) Notifying guest 0:1 (virq 1, port 10)
(XEN) Notifying guest 1:0 (virq 1, port 0)
(XEN) Notifying guest 1:1 (virq 1, port 0)
(XEN) Notifying guest 1:2 (virq 1, port 0)
(XEN) Notifying guest 1:3 (virq 1, port 0)
(XEN) Shared frames 0 -- Saved frames 0

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

