[PATCH] xen/arm: domain_build: Ignore device nodes with invalid addresses
The handle_device() function has been returning failure upon
encountering an invalid device address.  A device tree containing such
an entry has now been seen in the wild.  Since simply ignoring these
entries causes no failures, ignore them.

Signed-off-by: Elliott Mitchell <ehem+xenn@xxxxxxx>
---
I'm starting to suspect there are an awful lot of places in the various
domain_build.c files which should simply ignore errors.  This is now
the second place I've encountered in 2 months where ignoring errors was
the correct action.

I know failing in case of error is an engineer's favorite approach, but
there seem to be an awful lot of harmless failures causing panics.

This started as the thread "[RFC PATCH] xen/arm: domain_build: Ignore
empty memory bank".  Now it seems clear the correct approach is to
simply ignore these entries.

This seems a good candidate for backport to 4.14 and certainly should
be in 4.15.
---
 xen/arch/arm/domain_build.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 374bf655ee..c0568b7579 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1407,9 +1407,9 @@ static int __init handle_device(struct domain *d, struct dt_device_node *dev,
         res = dt_device_get_address(dev, i, &addr, &size);
         if ( res )
         {
-            printk(XENLOG_ERR "Unable to retrieve address %u for %s\n",
-                   i, dt_node_full_name(dev));
-            return res;
+            printk(XENLOG_ERR "Unable to retrieve address of %s, index %u\n",
+                   dt_node_full_name(dev), i);
+            continue;
         }
 
         res = map_range_to_domain(dev, addr, size, &mr_data);
-- 
2.20.1

-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |        ehem+sigmsg@xxxxxxx  PGP 87145445        |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-  _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1  B375 37D0 8714\_|_/___/5445
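[Editor's note: for context, not part of the patch.  The changed hunk
sits inside the per-address loop of handle_device().  Below is a
minimal sketch of the post-patch control flow; it assumes the loop is
driven by dt_number_of_address() as in the in-tree code at this point,
and the code outside the hunk is paraphrased, not quoted.]

    /* Sketch only: simplified from xen/arch/arm/domain_build.c.
     * The loop bound and mr_data initialisation are assumptions
     * based on the surrounding in-tree code, not part of the patch. */
    naddr = dt_number_of_address(dev);

    for ( i = 0; i < naddr; i++ )
    {
        struct map_range_data mr_data = { .d = d, .p2mt = p2mt };

        res = dt_device_get_address(dev, i, &addr, &size);
        if ( res )
        {
            /* After the patch: log the unreadable address range and
             * skip it, rather than failing the whole domain build. */
            printk(XENLOG_ERR "Unable to retrieve address of %s, index %u\n",
                   dt_node_full_name(dev), i);
            continue;
        }

        /* Valid ranges are still mapped into the domain as before. */
        res = map_range_to_domain(dev, addr, size, &mr_data);
        if ( res )
            return res;
    }

[With return res replaced by continue, a single malformed "reg" entry
no longer aborts construction of the domain; the remaining valid
ranges of the node are still processed.]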