[PATCH v2] xen: gic-v3: Introduce CONFIG_GICV3_NR_LRS
One key requirement of Xen functional safety is to reduce the number
of lines of code that need to be safety certified. In addition, a
safety-certified Xen requires a static hardware configuration to be
defined. This static hardware configuration describes the test
hardware (or emulator) configuration against which Xen is verified.
Introduce GICV3_NR_LRS with two aims in mind:
1. The user should set the number of GICv3 list registers as per the
test hardware, so that the unwanted code can be removed via GCC's dead
code elimination or the preprocessor.
2. By doing #1, one can ensure that there is no untested code for an
unsupported hardware platform, and thus no safety impact due to
untested code.
However, if the user does not set GICV3_NR_LRS, it defaults to 0.
Xen then falls back to the default behaviour (i.e. reading the
hardware register to determine the number of LRs).
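The fallback above can be sketched in plain C (the names here are illustrative stand-ins, not the actual Xen symbols):

```c
#include <assert.h>

/* Hypothetical stand-ins: CONFIG_NR_LRS models CONFIG_GICV3_NR_LRS
 * (0 means "not set by the user"), and hw_nr_lrs models the count
 * read from the hardware register (gicv3_info.nr_lrs). */
#define CONFIG_NR_LRS 0

static unsigned int hw_nr_lrs = 4;

/* Same semantics as the patch's (CONFIG_GICV3_NR_LRS ?: gicv3_info.nr_lrs)
 * macro, written without the GNU "?:" extension: a non-zero Kconfig
 * value wins, otherwise the hardware-provided count is used. */
static unsigned int effective_nr_lrs(void)
{
    return CONFIG_NR_LRS ? CONFIG_NR_LRS : hw_nr_lrs;
}
```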
1. In gicv3_save_lrs()/gicv3_restore_lrs(), use the number of list
registers from GICV3_NR_LRS (if defined) instead of gicv3_info.nr_lrs.
This ensures that if the hardware does not support more than 4 LRs
(for example), the code accessing LRs 4-15 is never reached. The
compiler can eliminate the unsupported cases because the switch uses a
constant conditional.
2. RAZ/WI for the unsupported LRs.
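The fall-through pattern in point 1 can be illustrated with a minimal sketch (a made-up 4-LR model, not the real gicv3_save_lrs()):

```c
#include <assert.h>

#define NR_LRS 2  /* hypothetical compile-time constant, like CONFIG_GICV3_NR_LRS */

static int lr_saved[4];

/* Models the save loop: because the switch operand is a compile-time
 * constant, the compiler can prove the cases above NR_LRS unreachable
 * and discard them, so code touching unimplemented LRs never exists
 * in the binary. */
static int save_lrs(void)
{
    int n = 0;

    /* Fall through for all the cases */
    switch ( NR_LRS )
    {
    case 4: lr_saved[3] = 1; n++; /* fall through */
    case 3: lr_saved[2] = 1; n++; /* fall through */
    case 2: lr_saved[1] = 1; n++; /* fall through */
    case 1: lr_saved[0] = 1; n++; /* fall through */
    default: break;
    }
    return n;
}
```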
Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@xxxxxxx>
Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
---
Changelog:
v1 - 1. s/lrs/LRS
2. Implement RAZ/WI instead of panic
A few comments that were not addressed:
1. Assign "gicv3_info.nr_lrs to LRS" in gicv3_hyp_init() and keep the
code unchanged in gicv3_save_lrs()/gicv3_restore_lrs() -- this prevents
the compiler from doing dead code elimination, as the switch condition
can no longer be evaluated at compile time.
I am not sure how to get around this issue.
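The problem with that suggestion can be shown with a minimal sketch (illustrative names, assuming a 4-LR model): switching on a runtime variable gives identical results, but the compiler must keep every case.

```c
#include <assert.h>

#define NR_LRS_CONST 2              /* Kconfig-style compile-time constant */

static unsigned int nr_lrs_var = 2; /* set once at init, as the review suggested */

/* Operand is a constant: the compiler folds the switch and can drop
 * cases 3 and 4 as dead code. */
static int saved_const(void)
{
    int n = 0;

    switch ( NR_LRS_CONST )
    {
    case 4: n++; /* fall through */
    case 3: n++; /* fall through */
    case 2: n++; /* fall through */
    case 1: n++; /* fall through */
    default: break;
    }
    return n;
}

/* Operand is a runtime variable: same result, but every case must be
 * kept in the binary, since the value cannot be evaluated at compile
 * time. */
static int saved_runtime(void)
{
    int n = 0;

    switch ( nr_lrs_var )
    {
    case 4: n++; /* fall through */
    case 3: n++; /* fall through */
    case 2: n++; /* fall through */
    case 1: n++; /* fall through */
    default: break;
    }
    return n;
}
```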
xen/arch/arm/Kconfig | 9 +++++++++
xen/arch/arm/gic-v3.c | 14 ++++++++++++--
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2f2b501fda..6540013f97 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -276,6 +276,15 @@ config PCI_PASSTHROUGH
endmenu
+config GICV3_NR_LRS
+ int "Number of GICv3 List Registers supported" if EXPERT
+ depends on GICV3
+ range 0 16
+ default 0
+ help
+ Controls the number of List registers to be accessed.
+ Keep it set to 0 to use a value obtained from a hardware register.
+
menu "ARM errata workaround via the alternative framework"
depends on HAS_ALTERNATIVE
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bc07f97c16..eaae95eb4d 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -51,6 +51,8 @@ static DEFINE_PER_CPU(void __iomem*, rbase);
#define GICD (gicv3.map_dbase)
#define GICD_RDIST_BASE (this_cpu(rbase))
#define GICD_RDIST_SGI_BASE (GICD_RDIST_BASE + SZ_64K)
+#define LRS (CONFIG_GICV3_NR_LRS ?: \
+ gicv3_info.nr_lrs)
/*
* Saves all 16(Max) LR registers. Though number of LRs implemented
@@ -59,7 +61,7 @@ static DEFINE_PER_CPU(void __iomem*, rbase);
static inline void gicv3_save_lrs(struct vcpu *v)
{
/* Fall through for all the cases */
- switch ( gicv3_info.nr_lrs )
+ switch ( LRS )
{
case 16:
v->arch.gic.v3.lr[15] = READ_SYSREG_LR(15);
@@ -121,7 +123,7 @@ static inline void gicv3_save_lrs(struct vcpu *v)
static inline void gicv3_restore_lrs(const struct vcpu *v)
{
/* Fall through for all the cases */
- switch ( gicv3_info.nr_lrs )
+ switch ( LRS )
{
case 16:
WRITE_SYSREG_LR(v->arch.gic.v3.lr[15], 15);
@@ -178,6 +180,10 @@ static inline void gicv3_restore_lrs(const struct vcpu *v)
static uint64_t gicv3_ich_read_lr(int lr)
{
+ /* RAZ for unsupported LR */
+ if ( lr >= LRS )
+ return 0;
+
switch ( lr )
{
case 0: return READ_SYSREG_LR(0);
@@ -203,6 +209,10 @@ static uint64_t gicv3_ich_read_lr(int lr)
static void gicv3_ich_write_lr(int lr, uint64_t val)
{
+ /* WI for unsupported LR */
+ if ( lr >= LRS )
+ return;
+
switch ( lr )
{
case 0:
--
2.25.1