
Re: [PATCH v3] libxl/PCI: Fix PV hotplug & stubdom coldplug


  • To: Jason Andryuk <jandryuk@xxxxxxxxx>
  • From: Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • Date: Wed, 12 Jan 2022 17:10:36 +0000
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Wed, 12 Jan 2022 17:11:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Sun, Jan 09, 2022 at 01:04:36PM -0500, Jason Andryuk wrote:
> commit 0fdb48ffe7a1 "libxl: Make sure devices added by pci-attach are
> reflected in the config" broke PCI hotplug (xl pci-attach) for PV
> domains when it moved libxl__create_pci_backend() later in the function.
> 
> This also broke HVM + stubdom PCI passthrough coldplug.  For that, the
> PCI devices are hotplugged to a running PV stubdom, and then QMP
> device_add commands are issued to QEMU inside the stubdom.
> 
> A running PV domain calls libxl__wait_for_backend().  With the current
> placement of libxl__create_pci_backend(), the path does not exist and
> the call immediately fails:
> libxl: error: libxl_device.c:1388:libxl__wait_for_backend: Backend /local/domain/0/backend/pci/43/0 does not exist
> libxl: error: libxl_pci.c:1764:device_pci_add_done: Domain 42:libxl__device_pci_add failed for PCI device 0:2:0.0 (rc -3)
> libxl: error: libxl_create.c:1857:domcreate_attach_devices: Domain 42:unable to add pci devices
> 
> The wait is only relevant when the backend is already present.  num_devs
> is already used to determine if the backend needs to be created.  Re-use
> num_devs to determine if the backend wait is necessary.  The wait is
> necessary to avoid racing with another PCI attachment reconfiguring the
> front/back or changing to some other state like closing. If we are
> creating the backend, then we don't have to worry about the state since
> it is being created.
> 
> Fixes: 0fdb48ffe7a1 ("libxl: Make sure devices added by pci-attach are reflected in the config")
> 
> Signed-off-by: Jason Andryuk <jandryuk@xxxxxxxxx>
> ---
> Alternative to Jan's patch:
> https://lore.kernel.org/xen-devel/5114ae87-bc0e-3d58-e16e-6d9d2fee0801@xxxxxxxx/
> 
> v3:
> Change title & commit message
> 
> v2:
> https://lore.kernel.org/xen-devel/20210812005700.3159-1-jandryuk@xxxxxxxxx/
> Add Fixes
> Expand num_devs use in commit message
> ---
>  tools/libs/light/libxl_pci.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 4c2d7aeefb..e8fd3bd937 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -157,8 +157,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
>      if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
>          return ERROR_FAIL;
>  
> -    if (!starting && domtype == LIBXL_DOMAIN_TYPE_PV) {
> -        if (libxl__wait_for_backend(gc, be_path, GCSPRINTF("%d", XenbusStateConnected)) < 0)
> +    /* wait is only needed if the backend already exists (num_devs != NULL) */
> +    if (num_devs && !starting && domtype == LIBXL_DOMAIN_TYPE_PV) {

It's kind of weird to check whether the "num_devs" key is present or
not, but I guess that kind of works.
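
For context, the pattern being relied on is roughly the following -- an
untested, simplified sketch rather than the literal function body -- where
the presence of the backend's "num_devs" xenstore node is what indicates
that the backend already exists:

    /* Sketch: num_devs is read from xenstore earlier in
     * libxl__device_pci_add_xenstore(); a NULL result means the backend
     * does not exist yet and gets created further down. */
    num_devs = libxl__xs_read(gc, XBT_NULL,
                              GCSPRINTF("%s/num_devs", be_path));

    /* Waiting for the backend only makes sense when it already exists. */
    if (num_devs && !starting && domtype == LIBXL_DOMAIN_TYPE_PV) {
        /* wait for XenbusStateConnected, as in the hunk below */
    }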

> +        if (libxl__wait_for_backend(gc, be_path,
> +                                    GCSPRINTF("%d", XenbusStateConnected)) < 0)

There is a section in the coding style document, under "error handling",
about how to write this kind of condition.
Could you store the return value of libxl__wait_for_backend() in "rc"
(since it is a libxl function), then check it and return it?

    if (rc) return rc;
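
That is, something along these lines (an untested sketch on top of your
patch, not a literal replacement hunk; it assumes "rc" is already declared
in the function as usual for libxl):

    /* wait is only needed if the backend already exists (num_devs != NULL) */
    if (num_devs && !starting && domtype == LIBXL_DOMAIN_TYPE_PV) {
        rc = libxl__wait_for_backend(gc, be_path,
                                     GCSPRINTF("%d", XenbusStateConnected));
        /* propagate the libxl error code instead of discarding it */
        if (rc) return rc;
    }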

Thanks,

-- 
Anthony PERARD



 

