
Re: [PATCH] x86/debug: fix page-overflow bug in dbg_rw_guest_mem

  • To: Jan Beulich <jbeulich@xxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Mon, 1 Feb 2021 11:44:23 +0000
  • Cc: Elena Ufimtseva <elena.ufimtseva@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 01 Feb 2021 11:44:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 01/02/2021 09:37, Jan Beulich wrote:
> On 30.01.2021 03:59, Andrew Cooper wrote:
>> On 30/01/2021 01:59, Tamas K Lengyel wrote:
>>> When using gdbsx, dbg_rw_guest_mem is used to read/write guest memory.
>>> When the buffer being accessed crosses a page boundary, the next page
>>> needs to be grabbed in order to access the correct memory for the parts
>>> of the buffer that overflow onto it. While dbg_rw_guest_mem has logic
>>> to handle that, it broke with 229492e210a. Instead of grabbing the next
>>> page, the code currently loops back to the start of the first page.
>>> This results in errors like the following while trying to use gdb with
>>> Linux' lx-dmesg:
>>>
>>> [    0.114457] PM: hibernation: Registered nosave memory: [mem 0xfdfff000-0xffffffff]
>>> [    0.114460] [mem 0x90000000-0xfbffffff] available for PCI demem 0
>>> [    0.114462] f]f]
>>> Python Exception <class 'ValueError'> embedded null character:
>>> Error occurred in Python: embedded null character
>>>
>>> Fix this bug by taking the variable assignment outside the loop.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> I have to admit that I'm irritated: On January 14th I did submit
> a patch ('x86/gdbsx: convert "user" to "guest" accesses') fixing this
> as a side effect. I understand that one was taking care of more
> issues here, but shouldn't that be preferred? Re-basing isn't going
> to be overly difficult, but anyway.

I'm sorry.  That was sent during the period when I had no email access,
so I wasn't aware of it; I've been focusing on 4.15 work, and the series
wasn't pinged.  It also isn't identified as a bugfix, or as suitable for
backporting in that form.

I apologise for the extra work caused unintentionally, but I think this
is the correct way around WRT backports, is it not?
