
Re: [Xen-devel] [xen-unstable test] 123379: regressions - FAIL



>>> On 12.06.18 at 17:58, <jgross@xxxxxxxx> wrote:
> Trying to reproduce the problem in a limited test environment finally
> worked: doing a loop of "xl save -c" produced the problem after 198
> iterations.
> 
> I have asked a SUSE engineer working on kernel memory management if
> he could think of something. His idea is that some kthread might be
> the cause of our problem, e.g. one attempting page migration or
> compaction (at least on the test machine I've looked at, compaction
> of mlocked pages is allowed: /proc/sys/vm/compact_unevictable_allowed
> is 1).

Iirc the primary goal of compaction is to make contiguous memory
available for huge page allocations. Since PV doesn't use huge pages,
that's of no interest here. The secondary consideration, physically
contiguous I/O buffers, is an illusion under PV anyway (pages that
look contiguous to the kernel need not be contiguous in machine
memory), so it's of little interest either; I can see drivers wanting
to allocate physically contiguous buffers now and then nevertheless,
but I'd expect this to be mostly limited to driver initialization and
device hot-add.

So it is perhaps at least worth considering turning off
compaction/migration when running PV. But the problem would still
need addressing mid-term, as a PVH Dom0 would have the same issue
(and of course DomUs, including HVM ones, can make hypercalls too,
and hence would be affected as well, just perhaps not as visibly).
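
A minimal sketch of what such a knob might look like on the Linux
side, assuming a built-in initcall in the Xen PV setup code (the
placement, initcall level, and function name are illustrative, and
this only covers compaction of mlocked pages, not migration in
general):

    #include <linux/init.h>
    #include <linux/compaction.h>
    #include <xen/xen.h>

    static int __init xen_pv_no_unevictable_compaction(void)
    {
    #ifdef CONFIG_COMPACTION
        /* Equivalent to writing 0 to
         * /proc/sys/vm/compact_unevictable_allowed: keep compaction
         * away from mlocked pages such as hypercall buffers. */
        if (xen_pv_domain())
            sysctl_compact_unevictable_allowed = 0;
    #endif
        return 0;
    }
    early_initcall(xen_pv_no_unevictable_compaction);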

> In order to be really sure nothing in the kernel can temporarily
> switch hypercall buffer pages read-only or invalid for the
> hypervisor, we'll have to modify the privcmd driver interface: it
> will have to gain knowledge of which pages are handed over to the
> hypervisor as buffers, in order to be able to pin them accordingly
> via get_user_pages().
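
A minimal sketch of the pinning step being described, once the driver
knows where a buffer lives (all names are illustrative, and
get_user_pages_fast()'s signature is the current ~4.17-era one, which
has changed across kernel versions):

    #include <linux/mm.h>
    #include <linux/slab.h>

    static int privcmd_pin_hcall_buf(void __user *uaddr, size_t len,
                                     struct page ***pages_out,
                                     unsigned int *nr_out)
    {
        unsigned long start = (unsigned long)uaddr & PAGE_MASK;
        unsigned long offset = (unsigned long)uaddr & ~PAGE_MASK;
        unsigned int nr = (offset + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
        struct page **pages;
        long pinned;

        if (!nr)
            return -EINVAL;

        pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
        if (!pages)
            return -ENOMEM;

        /* write = 1: the hypervisor may store results in the buffer.
         * Pinned pages can no longer be migrated or compacted away. */
        pinned = get_user_pages_fast(start, nr, 1, pages);
        if (pinned != nr) {
            while (pinned > 0)
                put_page(pages[--pinned]);
            kfree(pages);
            return pinned < 0 ? (int)pinned : -EFAULT;
        }

        *pages_out = pages;
        *nr_out = nr;
        return 0;
    }

The pinned pages would then be released again with put_page() once
the hypercall has returned.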

So are you / is he saying that mlock() doesn't protect against such
playing with process memory? Teaching the privcmd driver about all
the indirections in hypercall request structures doesn't look very
attractive (or maintainable). Or are you thinking of the caller
providing sideband information describing the buffers involved,
perhaps along the lines of how dm_op was designed?
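
To illustrate, a hypothetical ioctl extension loosely modeled on
dm_op's buffer array; none of these names are existing privcmd ABI:

    #include <linux/types.h>

    /* Each hypercall buffer is described explicitly, so the driver
     * can pin all of them without having to understand the
     * indirections inside the request structure itself. */
    struct privcmd_hcall_buf {
        __u64 uaddr;   /* userspace address of one hypercall buffer */
        __u64 len;     /* its length in bytes */
    };

    struct privcmd_hypercall_v2 {
        __u64 op;        /* hypercall number */
        __u64 arg[5];    /* hypercall arguments */
        __u32 nr_bufs;   /* number of entries in the bufs array */
        __u64 bufs;      /* userspace pointer to an array of
                          * struct privcmd_hcall_buf */
    };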

There's another option, but that has potentially severe drawbacks
too: instead of returning -EFAULT on buffer access issues, we could
raise #PF at the hypercall instruction itself. Maybe something to
consider as an opt-in for PV/PVH, and as the default for HVM.
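
A heavily hedged sketch of that alternative on the hypervisor side;
pv_inject_page_fault() does exist in Xen, but the request type, the
opt-in flag, the replay mechanism, and the surrounding hypercall are
purely illustrative:

    long do_example_op(XEN_GUEST_HANDLE_PARAM(example_req_t) arg)
    {
        example_req_t req;    /* hypothetical request structure */

        if ( copy_from_guest(&req, arg, 1) )
        {
            if ( !current->domain->fault_on_buf_error ) /* hypothetical opt-in */
                return -EFAULT;                         /* today's behaviour */

            /* Reflect the fault back to the guest: its #PF handler can
             * fix up the mapping, after which the hypercall instruction,
             * which has not been completed, would be re-executed. */
            pv_inject_page_fault(0 /* error code */, (unsigned long)arg.p);
            return -ERESTART;                           /* illustrative replay */
        }

        /* ... normal processing of req ... */
        return 0;
    }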

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

