
Re: [Xen-devel] [PATCH RFC v2 02/12] x86: don't use hypervisor stack size for dumping guest stacks

On 23/01/18 11:11, Jan Beulich wrote:
>>>> On 23.01.18 at 10:58, <jgross@xxxxxxxx> wrote:
>> On 23/01/18 10:26, Jan Beulich wrote:
>>>>>> On 22.01.18 at 13:32, <jgross@xxxxxxxx> wrote:
>>>> show_guest_stack() and compat_show_guest_stack() stop dumping the
>>>> stack of the guest whenever its virtual address reaches the same
>>>> alignment which is used for the hypervisor stacks.
>>>> Remove this arbitrary limit and try to dump a fixed number of lines
>>>> instead.
>>> Hmm, I can see your point, but before looking at the change in detail
>>> I think we need to agree on what behavior we want. Dumping
>>> arbitrary data as if it was a part of the stack isn't very helpful, limiting
>>> the risk of which is, I think, the reason for the way things currently
>>> work (assuming that guest kernels won't have stacks larger than Xen
>>> itself, and that they too would align them). What would perhaps be
>>> better is for the guest to supply information about the restrictions it
>>> enforces on its stacks, which Xen could then use here. In the
>>> absence of such hints using the values currently being used would
>>> possibly make sense.
>> Currently the stack dump will have the same fixed number of lines as
>> with my patch. I'm only removing the premature end of dumping whenever
>> the stack address crosses a 32kB boundary. Linux 64 bit pv guests are
>> using 16kB stack size. So using this boundary would be more natural.
> IOW your change converts a 50:50 chance of dumping non-stack
> data to 100% (all in case the stack pointer isn't far away from the
> stack start).

I'd rather dump some non-stack data than omit some stack data.

And I can't see that show_guest_stack() is limited to guest kernel mode.
User stacks can be much larger than 32kB.


Xen-devel mailing list