
Re: [PATCH v3 5/9] x86/PVH: actually show Dom0's register state from debug key '0'


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 23 Sep 2021 16:27:12 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 14:27:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Sep 23, 2021 at 12:21:42PM +0200, Jan Beulich wrote:
> On 22.09.2021 17:48, Roger Pau Monné wrote:
> > On Tue, Sep 21, 2021 at 09:19:06AM +0200, Jan Beulich wrote:
> >> vcpu_show_registers() didn't do anything for HVM so far. Note though
> >> that some extra hackery is needed for VMX - see the code comment.
> >>
> >> Note further that the show_guest_stack() invocation is left alone here:
> >> While strictly speaking guest_kernel_mode() should be predicated by a
> >> PV / !HVM check, show_guest_stack() itself will bail immediately for
> >> HVM.
> >>
> >> While there and despite not being PVH-specific, take the opportunity and
> >> filter offline vCPU-s: There's not really any register state associated
> >> with them, so avoid spamming the log with useless information while
> >> still leaving an indication of the fact.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> I was pondering whether to also have the VMCS/VMCB dumped for every
> >> vCPU, to present full state. The downside is that for larger systems
> >> this would be a lot of output.
> > 
> > At least for Intel there's already a debug key to dump VMCS, so I'm
> > unsure it's worth dumping it here also, as a user can get the
> > information elsewhere (that's what I've always used to debug PVH
> > TBH).
> 
> I know there is a respective debug key. That dumps _all_ VMCSes, though,
> so might be quite verbose on a big system (where Dom0's output alone
> may already be quite verbose).
> 
> >> --- a/xen/arch/x86/x86_64/traps.c
> >> +++ b/xen/arch/x86/x86_64/traps.c
> >> @@ -49,6 +49,39 @@ static void read_registers(struct cpu_us
> >>      crs[7] = read_gs_shadow();
> >>  }
> >>  
> >> +static void get_hvm_registers(struct vcpu *v, struct cpu_user_regs *regs,
> >> +                              unsigned long crs[8])
> > 
> > Would this better be placed in hvm.c now that it's a HVM only
> > function?
> 
> I was asking myself this question, but decided that the placement here
> is perhaps at least no bigger of a problem than putting it there.
> Factors played into this:
> - the specifics of the usage of the crs[8] array,
> - the fact that the PV function also lives here, not under pv/,

I think both functions should live under hvm/ and pv/ respectively.
There's nothing x86_64-specific about them that would justify their
placement here.

> - the desire to keep the function static.

Well, that's obviously not possible if it's moved to a different file.

> I can certainly be talked into moving the code, but I will want to see
> convincing arguments that none of the three items above (and possible
> other ones I may have missed) are really a problem then.

As said above, my preference would be to move those to pv/ and hvm/,
but I can also live with the current arrangement.

> >> @@ -159,24 +173,35 @@ void show_registers(const struct cpu_use
> >>  void vcpu_show_registers(const struct vcpu *v)
> >>  {
> >>      const struct cpu_user_regs *regs = &v->arch.user_regs;
> 
> Please note this in addition to my response below.
> 
> >> -    bool kernel = guest_kernel_mode(v, regs);
> >> +    struct cpu_user_regs aux_regs;
> >> +    enum context context;
> >>      unsigned long crs[8];
> >>  
> >> -    /* Only handle PV guests for now */
> >> -    if ( !is_pv_vcpu(v) )
> >> -        return;
> >> -
> >> -    crs[0] = v->arch.pv.ctrlreg[0];
> >> -    crs[2] = arch_get_cr2(v);
> >> -    crs[3] = pagetable_get_paddr(kernel ?
> >> -                                 v->arch.guest_table :
> >> -                                 v->arch.guest_table_user);
> >> -    crs[4] = v->arch.pv.ctrlreg[4];
> >> -    crs[5] = v->arch.pv.fs_base;
> >> -    crs[6 + !kernel] = v->arch.pv.gs_base_kernel;
> >> -    crs[7 - !kernel] = v->arch.pv.gs_base_user;
> >> +    if ( is_hvm_vcpu(v) )
> >> +    {
> >> +        aux_regs = *regs;
> >> +        get_hvm_registers(v->domain->vcpu[v->vcpu_id], &aux_regs, crs);
> > 
> > I wonder if you could load the values directly into v->arch.user_regs,
> > but maybe that would taint some other info already there. I certainly
> > haven't looked closely.
> 
> I had it that other way first, wondering whether altering the structure
> there might be safe. It felt wrong to fiddle with the live registers,
> and the "const" above then was the final bit that convinced me I should
> go the chosen route. Yet again - I can be talked into going the route
> you outline via convincing arguments. Don't forget that we e.g.
> deliberately poison the selector values in debug builds (see
> hvm_invalidate_regs_fields()) - that poisoning would get undermined if
> we wrote directly into the structure.

The vCPU is paused at this point, but I agree we should not undermine
the poisoning. I assume those fields don't get filled because doing so
is an expensive operation and the information isn't used that often.
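
For reference, my understanding is that the poisoning is just a
debug-build helper stuffing recognizable junk into the frame fields
that aren't routinely synchronized from the VMCS/VMCB - along the
lines of this untested, from-memory sketch:

    static void hvm_invalidate_regs_fields(struct cpu_user_regs *regs)
    {
    #ifndef NDEBUG
        /* Poison fields not kept up to date for HVM vCPUs. */
        regs->error_code = 0xbeef;
        regs->entry_vector = 0xbeef;
        regs->cs = 0xbeef;
        regs->ss = 0xbeef;
        regs->ds = 0xbeef;
        regs->es = 0xbeef;
        regs->fs = 0xbeef;
        regs->gs = 0xbeef;
    #endif
    }

Writing the decoded values straight into v->arch.user_regs would
silently overwrite that junk and defeat its purpose of making stale
reads stand out.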

Long term it might be helpful to have something akin to
guest_cpu_user_regs that could be used on paused remote vCPUs.
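
Something along these lines is what I have in mind - a rough,
untested sketch only, with a made-up name, building on the aux_regs
approach taken by this patch:

    /*
     * Fill a local copy of the register state of a remote, paused vCPU,
     * supplementing the saved frame with the selector / control register
     * state that HVM keeps in the VMCS/VMCB rather than in
     * v->arch.user_regs.
     */
    static void get_paused_vcpu_regs(struct vcpu *v,
                                     struct cpu_user_regs *regs,
                                     unsigned long crs[8])
    {
        ASSERT(atomic_read(&v->pause_count));
        *regs = v->arch.user_regs;
        if ( is_hvm_vcpu(v) )
            get_hvm_registers(v, regs, crs);
    }

That would keep the "no writes to live state" property while giving
callers a single entry point for both PV and HVM.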

Anyway, I don't really have much other comments, so:

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.
