
Re: infinite loop in xenstat_qmp.c


  • To: "Reiser, Hans" <hr@xxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 22 Jan 2021 13:25:56 +0000
  • Delivery-date: Fri, 22 Jan 2021 13:26:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 09/11/2020 14:36, Reiser, Hans wrote:
> Hi,
>
> I have seen several occasions of "dead" xentop processes consuming 100%
> CPU time, and tracked this down to the following problem:
>
> When the QEMU process that qmp_read() is communicating with terminates,
> qmp_read() may enter an infinite loop: poll() signals EOF (POLLIN and
> POLLHUP set), the subsequent read() call returns 0, and the function
> then calls poll() again, which still sees the EOF condition and returns
> immediately with POLLIN and POLLHUP set, repeating ad infinitum.
>
> A simple fix is to terminate the loop when read() returns 0 (under
> normal circumstances poll() returns with POLLIN set only when there is
> data to read, so read() will always return >0 bytes unless the socket
> has been closed).
>
> Cheers, Hans

Hi - this appears to have slipped through the cracks.

Thanks for the bugfix, but we require code submissions to conform to the
DCO[1] and have a Signed-off-by line.

If you're happy to agree to that, I can fix up the patch and get it
sorted.  We've moved this library in the time since you submitted the
bugfix.

Thanks, and sorry for the delay.

~Andrew

[1]
https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches#Signed-off-by

>
> diff --git a/tools/xenstat/libxenstat/src/xenstat_qmp.c b/tools/xenstat/libxenstat/src/xenstat_qmp.c
> index 19b236e7b6..0c5748ba68 100644
> --- a/tools/xenstat/libxenstat/src/xenstat_qmp.c
> +++ b/tools/xenstat/libxenstat/src/xenstat_qmp.c
> @@ -298,7 +298,7 @@ static int qmp_read(int qfd, unsigned char **qstats)
>         pfd[0].events = POLLIN;
>         while ((n = poll(pfd, 1, 10)) > 0) {
>                 if (pfd[0].revents & POLLIN) {
> -                       if ((n = read(qfd, buf, sizeof(buf))) < 0) {
> +                       if ((n = read(qfd, buf, sizeof(buf))) <= 0) {
>                                 free(*qstats);
>                                 return 0;
>                         }
>
>
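For illustration, here is a minimal standalone sketch (not the libxenstat
code itself; read_all() and BUFSZ are made-up names) of a poll()/read()
loop that stops on EOF, i.e. the pattern the patch above applies. The key
point is that POLLIN only guarantees read() will not block; when the peer
has closed the connection, poll() keeps reporting POLLIN while read()
returns 0, so the EOF case must end the loop explicitly:

/* Sketch only: drain an fd into a heap buffer, stopping on EOF. */
#include <poll.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUFSZ 1024

/* Returns number of bytes stored in *out, or -1 on error. */
static ssize_t read_all(int fd, char **out)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    char buf[BUFSZ];
    size_t total = 0;
    ssize_t n;

    *out = NULL;

    while (poll(&pfd, 1, 10 /* ms */) > 0) {
        if (!(pfd.revents & POLLIN))
            break;

        n = read(fd, buf, sizeof(buf));
        if (n < 0) {                    /* read error */
            free(*out);
            *out = NULL;
            return -1;
        }
        if (n == 0)                     /* EOF: peer closed the socket;
                                           poll() would keep signalling
                                           POLLIN, so stop here */
            break;

        char *tmp = realloc(*out, total + n);
        if (!tmp) {
            free(*out);
            *out = NULL;
            return -1;
        }
        *out = tmp;
        memcpy(*out + total, buf, n);
        total += n;
    }

    return total;
}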

