
[PATCH 19/37] xen/x86: promote VIRTUAL_BUG_ON to ASSERT in phys_to_nid


  • To: <wei.chen@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, <sstabellini@xxxxxxxxxx>, <julien@xxxxxxx>
  • From: Wei Chen <wei.chen@xxxxxxx>
  • Date: Thu, 23 Sep 2021 20:02:18 +0800
  • Cc: <Bertrand.Marquis@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 12:07:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

VIRTUAL_BUG_ON, as used in phys_to_nid, is an empty macro. As a
result, the two lines of error-checking code in phys_to_nid do
nothing. The empty macro also covers up two compilation errors
(illustrated by the sketch after this list):
1. error: ‘MAX_NUMNODES’ undeclared (first use in this function).
   This is because MAX_NUMNODES is defined in xen/numa.h, but
   asm/numa.h is a dependency of xen/numa.h, so we cannot include
   xen/numa.h from asm/numa.h. This error was fixed once we moved
   phys_to_nid to xen/numa.h.
2. error: wrong type argument to unary exclamation mark.
   This is because the error-checking code contains !node_data[nid],
   but node_data is an array of structures, not of pointers, so its
   elements cannot be negated with '!'.
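
To see why an empty macro can cover up errors like these, consider a
minimal, hypothetical standalone C sketch (simplified types, not Xen
code): the preprocessor discards the macro argument before the
compiler ever sees it, so even ill-formed expressions compile cleanly.

    /* Hypothetical standalone illustration -- not Xen code. */
    #define VIRTUAL_BUG_ON(x)            /* expands to nothing */

    struct node { unsigned long node_spanned_pages; };
    struct node node_data[64];

    void demo(unsigned int nid)
    {
        /*
         * MAX_NUMNODES is undeclared here and !node_data[nid] is
         * ill-formed (node_data[nid] is a struct, not a pointer or
         * scalar), yet this compiles: the empty macro throws its
         * argument away, so the compiler never checks it.
         */
        VIRTUAL_BUG_ON(nid >= MAX_NUMNODES || !node_data[nid]);
    }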

So, in this patch, we define VIRTUAL_BUG_ON as ASSERT to enable the
two lines of error-checking code (sketched below), and we fix the
remaining compilation error by replacing !node_data[nid] with
!node_data[nid].node_spanned_pages.
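
For context: Xen's ASSERT panics on a false condition in debug builds
and, if I read xen/lib.h correctly, still compiles (without
evaluating) its argument in non-debug builds, so both latent errors
above now surface in every build configuration. Below is a minimal
sketch of the resulting check, with the C library's assert() standing
in for Xen's ASSERT(); names and bounds are illustrative only.

    /*
     * Minimal sketch of the promoted macro, with assert() standing
     * in for Xen's ASSERT(); the shape is the same: the check fires
     * when its condition is true.
     */
    #include <assert.h>

    #define VIRTUAL_BUG_ON(x) assert(!(x))

    #define MAX_NUMNODES 64
    struct node { unsigned long node_spanned_pages; };
    struct node node_data[MAX_NUMNODES];

    unsigned int check_nid(unsigned int nid)
    {
        /*
         * The argument is now a real, compiled expression: an
         * out-of-range nid or a memoryless node (node_spanned_pages
         * == 0) trips the assertion, and the old !node_data[nid]
         * would be a hard compile error -- hence the switch to
         * !node_data[nid].node_spanned_pages.
         */
        VIRTUAL_BUG_ON(nid >= MAX_NUMNODES ||
                       !node_data[nid].node_spanned_pages);
        return nid;
    }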

When node_spanned_pages is 0, the node has no memory, and
numa_scan_node prints a warning for such nodes: "Firmware Bug or
mis-configured hardware?". Even though Xen allows such nodes to be
brought online, phys_to_nid should still never return a memoryless
node for a valid physical address.

Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
---
 xen/include/xen/numa.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 51391a2440..1978e2be1b 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -38,7 +38,7 @@ struct node {
 extern int compute_hash_shift(struct node *nodes, int numnodes,
                              nodeid_t *nodeids);
 
-#define VIRTUAL_BUG_ON(x)
+#define VIRTUAL_BUG_ON(x) ASSERT(!(x))
 
 extern void numa_add_cpu(int cpu);
 extern void numa_init_array(void);
@@ -75,7 +75,7 @@ static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
 {
        nodeid_t nid;
        VIRTUAL_BUG_ON((paddr_to_pdx(addr) >> memnode_shift) >= memnodemapsize);
        nid = memnodemap[paddr_to_pdx(addr) >> memnode_shift];
-       VIRTUAL_BUG_ON(nid >= MAX_NUMNODES || !node_data[nid]);
+       VIRTUAL_BUG_ON(nid >= MAX_NUMNODES || !node_data[nid].node_spanned_pages);
        return nid;
 }
 
-- 
2.25.1




 

