
Re: [PATCH v3 7/9] x86/PVH: actually show Dom0's stacks from debug key '0'


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 23 Sep 2021 16:43:27 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 14:43:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Sep 23, 2021 at 12:47:26PM +0200, Jan Beulich wrote:
> On 23.09.2021 12:31, Roger Pau Monné wrote:
> > On Tue, Sep 21, 2021 at 09:20:00AM +0200, Jan Beulich wrote:
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -3408,6 +3408,15 @@ enum hvm_translation_result hvm_copy_fro
> >>                        PFEC_page_present | pfec, pfinfo);
> >>  }
> >>  
> >> +enum hvm_translation_result hvm_copy_from_vcpu_linear(
> >> +    void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
> >> +    unsigned int pfec)
> > 
> > Even if your current use case doesn't need it, would it be worth
> > adding a pagefault_info_t parameter?
> 
> I'd prefer to add such parameters only once they become necessary.
> 
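(Just to make the suggestion concrete - purely illustrative, not something
I'm asking for now: I'd expect such an extension to simply mirror
hvm_copy_from_guest_linear(), i.e. roughly

    enum hvm_translation_result hvm_copy_from_vcpu_linear(
        void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
        unsigned int pfec, pagefault_info_t *pfinfo);

with callers not interested in the fault details passing NULL. Deferring
it until there's an actual user is fine.)
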
> >> --- a/xen/arch/x86/traps.c
> >> +++ b/xen/arch/x86/traps.c
> >> @@ -364,6 +364,71 @@ static void show_guest_stack(struct vcpu
> >>      printk("\n");
> >>  }
> >>  
> >> +static void show_hvm_stack(struct vcpu *v, const struct cpu_user_regs *regs)
> >> +{
> >> +#ifdef CONFIG_HVM
> >> +    unsigned long sp = regs->rsp, addr;
> >> +    unsigned int i, bytes, words_per_line, pfec = PFEC_page_present;
> >> +    struct segment_register ss, cs;
> >> +
> >> +    hvm_get_segment_register(v, x86_seg_ss, &ss);
> >> +    hvm_get_segment_register(v, x86_seg_cs, &cs);
> >> +
> >> +    if ( hvm_long_mode_active(v) && cs.l )
> >> +        i = 16, bytes = 8;
> >> +    else
> >> +    {
> >> +        sp = ss.db ? (uint32_t)sp : (uint16_t)sp;
> >> +        i = ss.db ? 8 : 4;
> >> +        bytes = cs.db ? 4 : 2;
> >> +    }
> >> +
> >> +    if ( bytes == 8 || (ss.db && !ss.base) )
> >> +        printk("Guest stack trace from sp=%0*lx:", i, sp);
> >> +    else
> >> +        printk("Guest stack trace from ss:sp=%04x:%0*lx:", ss.sel, i, sp);
> >> +
> >> +    if ( !hvm_vcpu_virtual_to_linear(v, x86_seg_ss, &ss, sp, bytes,
> >> +                                     hvm_access_read, &cs, &addr) )
> >> +    {
> >> +        printk(" Guest-inaccessible memory\n");
> >> +        return;
> >> +    }
> >> +
> >> +    if ( ss.dpl == 3 )
> >> +        pfec |= PFEC_user_mode;
> >> +
> >> +    words_per_line = stack_words_per_line * (sizeof(void *) / bytes);
> >> +    for ( i = 0; i < debug_stack_lines * words_per_line; )
> >> +    {
> >> +        unsigned long val = 0;
> >> +
> >> +        if ( (addr ^ (addr + bytes - 1)) & PAGE_SIZE )
> >> +            break;
> >> +
> >> +        if ( !(i++ % words_per_line) )
> >> +            printk("\n  ");
> >> +
> >> +        if ( hvm_copy_from_vcpu_linear(&val, addr, bytes, v,
> >> +                                       pfec) != HVMTRANS_okay )
> > 
> > I think I'm confused, but what about guests without paging enabled?
> > Don't you need to use hvm_copy_from_guest_phys (likely transformed
> > into hvm_copy_from_vcpu_phys)?
> 
> __hvm_copy() calls hvm_translate_get_page() telling it whether the
> input is a linear or physical address. hvm_translate_get_page() will
> use paging_gva_to_gfn() in this case. The HAP backing function
> changes when the guest {en,dis}ables paging, while shadow code deals
> with paging disabled by installing an identity mapping
> (d->arch.paging.shadow.unpaged_pagetable) which it would then end up
> (needlessly) walking.
> 
> It really is - afaict - intentional for callers to not have to deal
> with the special case.

I always forget that we change the paging_mode handlers when switching
between modes.
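
Just to spell out my (possibly wrong) mental model of the new helper: it
merely forwards the linear address into the common copy path, along the
lines of

    enum hvm_translation_result hvm_copy_from_vcpu_linear(
        void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
        unsigned int pfec)
    {
        /*
         * HVMCOPY_linear routes the access through paging_gva_to_gfn(),
         * so the paging-disabled case is already covered by the
         * mode-specific hooks (HAP swaps the hooks, shadow walks its
         * identity mapping).
         */
        return __hvm_copy(buf, addr, size, v,
                          HVMCOPY_from_guest | HVMCOPY_linear,
                          PFEC_page_present | pfec, NULL);
    }

so indeed no special-casing is needed in show_hvm_stack().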

> >> @@ -663,14 +728,22 @@ void vcpu_show_execution_state(struct vc
> >>      }
> >>  #endif
> >>  
> >> -    /* Prevent interleaving of output. */
> >> -    flags = console_lock_recursive_irqsave();
> >> +    /*
> >> +     * Prevent interleaving of output if possible. For HVM we can't do so, as
> >> +     * the necessary P2M lookups involve locking, which has to occur with IRQs
> >> +     * enabled.
> >> +     */
> >> +    if ( !is_hvm_vcpu(v) )
> >> +        flags = console_lock_recursive_irqsave();
> >>  
> >>      vcpu_show_registers(v);
> >> -    if ( guest_kernel_mode(v, &v->arch.user_regs) )
> >> +    if ( is_hvm_vcpu(v) )
> >> +        show_hvm_stack(v, &v->arch.user_regs);
> > 
> > Would it make sense to unlock in show_hvm_stack, and thus keep the
> > printing of vcpu_show_registers locked even when in HVM context?
> 
> Perhaps not _in_, but before calling it, yet - why not.
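
Something along these lines is what I had in mind (only a sketch, ignoring
the exact surrounding code, and assuming the usual
console_unlock_recursive_irqrestore() counterpart):

    /* Prevent interleaving of output. */
    flags = console_lock_recursive_irqsave();

    vcpu_show_registers(v);

    if ( is_hvm_vcpu(v) )
    {
        /*
         * Drop the lock before dumping the stack: the P2M lookups in
         * show_hvm_stack() involve locking, which has to occur with
         * IRQs enabled.
         */
        console_unlock_recursive_irqrestore(flags);
        show_hvm_stack(v, &v->arch.user_regs);
    }
    else
    {
        if ( guest_kernel_mode(v, &v->arch.user_regs) )
            show_guest_stack(v, &v->arch.user_regs);
        console_unlock_recursive_irqrestore(flags);
    }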

Indeed, with that:

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.



 

