
Re: [PATCH v3 5/9] x86/PVH: actually show Dom0's register state from debug key '0'


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 22 Sep 2021 17:48:13 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 22 Sep 2021 15:48:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Sep 21, 2021 at 09:19:06AM +0200, Jan Beulich wrote:
> vcpu_show_registers() didn't do anything for HVM so far. Note though
> that some extra hackery is needed for VMX - see the code comment.
> 
> Note further that the show_guest_stack() invocation is left alone here:
> While strictly speaking guest_kernel_mode() should be predicated by a
> PV / !HVM check, show_guest_stack() itself will bail immediately for
> HVM.
> 
> While there and despite not being PVH-specific, take the opportunity and
> filter offline vCPU-s: There's not really any register state associated
> with them, so avoid spamming the log with useless information while
> still leaving an indication of the fact.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> I was pondering whether to also have the VMCS/VMCB dumped for every
> vCPU, to present full state. The downside is that for larger systems
> this would be a lot of output.

At least for Intel there's already a debug key to dump the VMCS, so
I'm unsure it's worth also dumping it here, as a user can get the
information elsewhere (that's what I've always used to debug PVH,
TBH).
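
(For reference, and from memory rather than re-checked against the
current tree: on VMX-capable hardware that's the 'v' key, which can be
triggered from Dom0 with "xl debug-keys v" and then read back via
"xl dmesg".)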

> ---
> v2: New.
> 
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -631,6 +631,12 @@ void vcpu_show_execution_state(struct vc
>  {
>      unsigned long flags;
>  
> +    if ( test_bit(_VPF_down, &v->pause_flags) )
> +    {
> +        printk("*** %pv is offline ***\n", v);
> +        return;
> +    }
> +
>      printk("*** Dumping Dom%d vcpu#%d state: ***\n",
>             v->domain->domain_id, v->vcpu_id);
>  
> @@ -642,6 +648,21 @@ void vcpu_show_execution_state(struct vc
>  
>      vcpu_pause(v); /* acceptably dangerous */
>  
> +#ifdef CONFIG_HVM
> +    /*
> +     * For VMX special care is needed: Reading some of the register state will
> +     * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
> +     * lock, which check_lock() would object to when done from an IRQs-disabled
> +     * region. Despite this being a layering violation, engage the VMCS right
> +     * here. This then also avoids doing so several times in close succession.
> +     */
> +    if ( cpu_has_vmx && is_hvm_vcpu(v) )
> +    {
> +        ASSERT(!in_irq());
> +        vmx_vmcs_enter(v);
> +    }
> +#endif
> +
>      /* Prevent interleaving of output. */
>      flags = console_lock_recursive_irqsave();
>  
> @@ -651,6 +672,11 @@ void vcpu_show_execution_state(struct vc
>  
>      console_unlock_recursive_irqrestore(flags);
>  
> +#ifdef CONFIG_HVM
> +    if ( cpu_has_vmx && is_hvm_vcpu(v) )
> +        vmx_vmcs_exit(v);
> +#endif
> +
>      vcpu_unpause(v);
>  }
>  
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -49,6 +49,39 @@ static void read_registers(struct cpu_us
>      crs[7] = read_gs_shadow();
>  }
>  
> +static void get_hvm_registers(struct vcpu *v, struct cpu_user_regs *regs,
> +                              unsigned long crs[8])

Would this be better placed in hvm.c now that it's an HVM-only
function?

> +{
> +    struct segment_register sreg;
> +
> +    crs[0] = v->arch.hvm.guest_cr[0];
> +    crs[2] = v->arch.hvm.guest_cr[2];
> +    crs[3] = v->arch.hvm.guest_cr[3];
> +    crs[4] = v->arch.hvm.guest_cr[4];
> +
> +    hvm_get_segment_register(v, x86_seg_cs, &sreg);
> +    regs->cs = sreg.sel;
> +
> +    hvm_get_segment_register(v, x86_seg_ds, &sreg);
> +    regs->ds = sreg.sel;
> +
> +    hvm_get_segment_register(v, x86_seg_es, &sreg);
> +    regs->es = sreg.sel;
> +
> +    hvm_get_segment_register(v, x86_seg_fs, &sreg);
> +    regs->fs = sreg.sel;
> +    crs[5] = sreg.base;
> +
> +    hvm_get_segment_register(v, x86_seg_gs, &sreg);
> +    regs->gs = sreg.sel;
> +    crs[6] = sreg.base;
> +
> +    hvm_get_segment_register(v, x86_seg_ss, &sreg);
> +    regs->ss = sreg.sel;
> +
> +    crs[7] = hvm_get_shadow_gs_base(v);
> +}
> +
>  static void _show_registers(
>      const struct cpu_user_regs *regs, unsigned long crs[8],
>      enum context context, const struct vcpu *v)
> @@ -99,27 +132,8 @@ void show_registers(const struct cpu_use
>  
>      if ( guest_mode(regs) && is_hvm_vcpu(v) )
>      {
> -        struct segment_register sreg;
> +        get_hvm_registers(v, &fault_regs, fault_crs);
>          context = CTXT_hvm_guest;
> -        fault_crs[0] = v->arch.hvm.guest_cr[0];
> -        fault_crs[2] = v->arch.hvm.guest_cr[2];
> -        fault_crs[3] = v->arch.hvm.guest_cr[3];
> -        fault_crs[4] = v->arch.hvm.guest_cr[4];
> -        hvm_get_segment_register(v, x86_seg_cs, &sreg);
> -        fault_regs.cs = sreg.sel;
> -        hvm_get_segment_register(v, x86_seg_ds, &sreg);
> -        fault_regs.ds = sreg.sel;
> -        hvm_get_segment_register(v, x86_seg_es, &sreg);
> -        fault_regs.es = sreg.sel;
> -        hvm_get_segment_register(v, x86_seg_fs, &sreg);
> -        fault_regs.fs = sreg.sel;
> -        fault_crs[5] = sreg.base;
> -        hvm_get_segment_register(v, x86_seg_gs, &sreg);
> -        fault_regs.gs = sreg.sel;
> -        fault_crs[6] = sreg.base;
> -        hvm_get_segment_register(v, x86_seg_ss, &sreg);
> -        fault_regs.ss = sreg.sel;
> -        fault_crs[7] = hvm_get_shadow_gs_base(v);
>      }
>      else
>      {
> @@ -159,24 +173,35 @@ void show_registers(const struct cpu_use
>  void vcpu_show_registers(const struct vcpu *v)
>  {
>      const struct cpu_user_regs *regs = &v->arch.user_regs;
> -    bool kernel = guest_kernel_mode(v, regs);
> +    struct cpu_user_regs aux_regs;
> +    enum context context;
>      unsigned long crs[8];
>  
> -    /* Only handle PV guests for now */
> -    if ( !is_pv_vcpu(v) )
> -        return;
> -
> -    crs[0] = v->arch.pv.ctrlreg[0];
> -    crs[2] = arch_get_cr2(v);
> -    crs[3] = pagetable_get_paddr(kernel ?
> -                                 v->arch.guest_table :
> -                                 v->arch.guest_table_user);
> -    crs[4] = v->arch.pv.ctrlreg[4];
> -    crs[5] = v->arch.pv.fs_base;
> -    crs[6 + !kernel] = v->arch.pv.gs_base_kernel;
> -    crs[7 - !kernel] = v->arch.pv.gs_base_user;
> +    if ( is_hvm_vcpu(v) )
> +    {
> +        aux_regs = *regs;
> +        get_hvm_registers(v->domain->vcpu[v->vcpu_id], &aux_regs, crs);

I wonder if you could load the values directly into v->arch.user_regs,
but maybe that would taint some other info already there. I certainly
haven't looked closely.
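
For illustration only, an untested sketch of that alternative (the
local name "nc" is just for the example). Note that v is const in
vcpu_show_registers(), which is presumably why the patch already goes
through v->domain->vcpu[]; writing straight into the saved block would
need the same non-const lookup, and it would overwrite whatever
v->arch.user_regs currently holds instead of using a scratch copy:

    if ( is_hvm_vcpu(v) )
    {
        /*
         * Hypothetical variant, not what the patch does: fetch the HVM
         * state directly into the vcpu's saved registers rather than
         * into a local aux_regs copy.
         */
        struct vcpu *nc = v->domain->vcpu[v->vcpu_id];

        get_hvm_registers(nc, &nc->arch.user_regs, crs);
    }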

Thanks, Roger.



 

