Re: [Xen-devel] [xen-unstable test] 123379: regressions - FAIL
On 12/06/18 17:58, Juergen Gross wrote:
> On 08/06/18 12:12, Juergen Gross wrote:
>> On 07/06/18 13:30, Juergen Gross wrote:
>>> On 06/06/18 11:40, Juergen Gross wrote:
>>>> On 06/06/18 11:35, Jan Beulich wrote:
>>>>>>>> On 05.06.18 at 18:19, <ian.jackson@xxxxxxxxxx> wrote:
>>>>>>>> test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 14
>>>>>>>> guest-saverestore.2
>>>>>>
>>>>>> I thought I would reply again with the key point from my earlier
>>>>>> mail highlighted, and go a bit further. The first thing to go
>>>>>> wrong in this was:
>>>>>>
>>>>>> 2018-05-30 22:12:49.320+0000: xc: Failed to get types for pfn batch
>>>>>> (14 = Bad address): Internal error
>>>>>> 2018-05-30 22:12:49.483+0000: xc: Save failed (14 = Bad address):
>>>>>> Internal error
>>>>>> 2018-05-30 22:12:49.648+0000: libxl-save-helper: complete r=-1: Bad
>>>>>> address
>>>>>>
>>>>>> You can see similar messages in the other logfile:
>>>>>>
>>>>>> 2018-05-30 22:12:49.650+0000: libxl:
>>>>>> libxl_stream_write.c:350:libxl__xc_domain_save_done: Domain 3:saving
>>>>>> domain: domain responded to suspend request: Bad address
>>>>>>
>>>>>> All of these are reports of the same thing: xc_get_pfn_type_batch at
>>>>>> xc_sr_save.c:133 failed with EFAULT. I'm afraid I don't know why.
>>>>>>
>>>>>> There is no corresponding message in the host's serial log nor the
>>>>>> dom0 kernel log.
>>>>>
>>>>> I vaguely recall from the time when I looked at the similar Windows
>>>>> migration issues that the guest is already in the process of being
>>>>> cleaned up when these occur. Commit 2dbe9c3cd2 ("x86/mm: silence a
>>>>> pointless warning") intentionally suppressed a log message here, and
>>>>> the immediately following debugging code (933f966bcd "x86/mm: add
>>>>> temporary debugging code to get_page_from_gfn_p2m()") was reverted a
>>>>> little over a month later. That revert wasn't a follow-up to another
>>>>> patch (a fix), but followed the discussion rooted at
>>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-06/msg00324.html
>>>>
>>>> That was -ESRCH, not -EFAULT.
>>>
>>> I've looked a little bit more into this.
>>>
>>> As we are seeing EFAULT being returned by the hypervisor, this either
>>> means the tools are specifying an invalid address (quite unlikely) or
>>> the buffers are not as MAP_LOCKED as we wish them to be.
>>>
>>> Is there a way to see whether the host was experiencing some memory
>>> shortage, so the buffers might have been swapped out?
>>>
>>> man mmap tells me: "This implementation will try to populate
>>> (prefault) the whole range but the mmap call doesn't fail with ENOMEM
>>> if this fails. Therefore major faults might happen later on."
>>>
>>> And: "One should use mmap(2) plus mlock(2) when major faults are not
>>> acceptable after the initialization of the mapping."
>>>
>>> With osdep_alloc_pages() in tools/libs/call/linux.c touching all the
>>> hypercall buffer pages before doing the hypercall, I'm not sure this
>>> could be an issue.
>>>
>>> Any thoughts on that?
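
(For reference, the mmap(2) plus mlock(2) approach quoted above would
look roughly like the sketch below. This is illustrative only, not the
actual libxencall code: the real allocator is osdep_alloc_pages() in
tools/libs/call/linux.c, and alloc_locked_buffer() is a made-up name.)

    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Allocate npages of anonymous memory intended as a hypercall
     * buffer, locked into RAM. */
    static void *alloc_locked_buffer(size_t npages)
    {
        size_t size = npages * sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);

        if (p == MAP_FAILED)
            return NULL;

        /* Per the man page text quoted above, MAP_LOCKED only tries to
         * prefault the range; mlock() is the documented way to rule
         * out major faults afterwards. */
        if (mlock(p, size)) {
            munmap(p, size);
            return NULL;
        }

        /* Touch every page before use, as osdep_alloc_pages() does
         * before issuing the hypercall. */
        memset(p, 0, size);

        return p;
    }

(Note that even mlock() only guarantees the pages stay resident; it
does not, by itself, stop the kernel from replacing the physical page
behind a PTE, which is exactly the suspicion discussed below.)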
>>
>> Ian, is there a chance to dedicate a machine to a specific test trying
>> to reproduce the problem? In case we manage to get this failure in a
>> reasonable time frame, I guess the most promising approach would be to
>> use a test hypervisor producing more debug data. If you think this is
>> worth doing, I can write a patch.
>
> Trying to reproduce the problem in a limited test environment finally
> worked: doing a loop of "xl save -c" produced the problem after 198
> iterations.
>
> I have asked a SUSE engineer working on kernel memory management
> whether he could think of something. His idea is that maybe some
> kthread could be the reason for our problem, e.g. one trying page
> migration or compaction (at least on the test machine I've looked at,
> compaction of mlocked pages is allowed:
> /proc/sys/vm/compact_unevictable_allowed is 1).
>
> In order to be really sure nothing in the kernel can temporarily
> switch hypercall buffer pages read-only or invalid for the hypervisor,
> we'll have to modify the privcmd driver interface: it will have to
> gain knowledge of which pages are handed over to the hypervisor as
> buffers, in order to be able to lock them accordingly via
> get_user_pages().
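
(A very rough sketch of what such pinning inside a privcmd-like driver
could look like. This is not the actual privcmd code:
pin_hypercall_buffer() is an invented name, and the exact GUP interface
varies between kernel versions, e.g. get_user_pages_fast() taking a
plain 'write' argument on older kernels and pin_user_pages_fast() being
preferred on newer ones.)

    #include <linux/err.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Take a reference on every page of a user-space hypercall buffer
     * so it cannot be migrated or reclaimed while the hypervisor may
     * access it. */
    static struct page **pin_hypercall_buffer(unsigned long uaddr,
                                              unsigned long npages)
    {
        struct page **pages;
        long pinned;

        pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
            return ERR_PTR(-ENOMEM);

        /* FOLL_WRITE: the hypervisor writes into the buffer. */
        pinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
        if (pinned != npages) {
            while (pinned-- > 0)
                put_page(pages[pinned]);
            kfree(pages);
            return ERR_PTR(-EFAULT);
        }

        return pages;
    }

(The extra reference per page is what keeps migration and compaction
from replacing the page while a hypercall is in flight.)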
> While this is a possible explanation of the fault we are seeing, it
> might have another cause entirely. So I'm going to apply some
> modifications to the hypervisor to get more diagnostics, in order to
> verify that the suspected kernel behavior really is the reason for the
> hypervisor to return EFAULT.

I was lucky. Took only 39 iterations this time. The debug data confirms
the theory that the kernel is setting the PTE to invalid or read-only
for a short amount of time:

(XEN) fixup for address 00007ffb9904fe44, error_code 0002:
(XEN) Pagetable walk from 00007ffb9904fe44:
(XEN)  L4[0x0ff] = 0000000458da6067 0000000000019190
(XEN)  L3[0x1ee] = 0000000457d26067 0000000000018210
(XEN)  L2[0x0c8] = 0000000445ab3067 0000000000006083
(XEN)  L1[0x04f] = 8000000458cdc107 000000000001925a

(XEN) Xen call trace:
(XEN)    [<ffff82d0802abe31>] __copy_to_user_ll+0x27/0x30
(XEN)    [<ffff82d080272edb>] arch_do_domctl+0x5a8/0x2648
(XEN)    [<ffff82d080206d5d>] do_domctl+0x18fb/0x1c4e
(XEN)    [<ffff82d08036d1ba>] pv_hypercall+0x1f4/0x43e
(XEN)    [<ffff82d0803734a6>] lstar_enter+0x116/0x120

The page was writable again by the time the page walk data was
collected, but the A and D bits are still 0. That should not be
possible if the kernel hadn't touched the PTE: the hypervisor read from
that page a few instructions before the failed write, which would have
set at least the A bit.

Starting with the Xen patches now...


Juergen