
Re: [PATCH v3 02/11] vpci: Add hooks for PCI device assign/de-assign


  • To: Oleksandr Andrushchenko <andr2000@xxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 13 Oct 2021 13:29:55 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, <julien@xxxxxxx>, <sstabellini@xxxxxxxxxx>, <oleksandr_tyshchenko@xxxxxxxx>, <volodymyr_babchuk@xxxxxxxx>, <Artem_Mygaiev@xxxxxxxx>, <jbeulich@xxxxxxxx>, <bertrand.marquis@xxxxxxx>, <rahul.singh@xxxxxxx>, Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
  • Delivery-date: Wed, 13 Oct 2021 11:30:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Sep 30, 2021 at 10:52:14AM +0300, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> 
> When a PCI device gets assigned/de-assigned some work on vPCI side needs
> to be done for that device. Introduce a pair of hooks so vPCI can handle
> that.
> 
> Please note that in the current design the error path is handled by
> the toolstack via XEN_DOMCTL_assign_device/XEN_DOMCTL_deassign_device,
> which is why it is acceptable not to de-assign devices if vPCI's
> assign fails, i.e. the rollback will be handled by deassign_device when
> it is called by the toolstack.

It's kind of hard to see what would need to be rolled back, as the
functions are just dummies right now that don't perform any actions.

I don't think the toolstack should be the one to deal with the
fallout, as it could leave Xen in a broken state. The current commit
message doesn't provide any information about why it has been designed
this way.
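
Not that it matters much with the current dummy implementations, but
once these hooks do real work I would expect the error path to undo any
partial vPCI state inside Xen itself, something along these lines (an
untested sketch only, not a request for this exact code):

    rc = vpci_assign_device(d, pdev);
    if ( rc )
        /*
         * Roll back whatever vPCI state was set up before the failure,
         * instead of relying on the toolstack issuing a separate
         * XEN_DOMCTL_deassign_device. Error handling of the rollback
         * itself is omitted in this sketch.
         */
        vpci_deassign_device(d, pdev);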

> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> ---
> Since v2:
> - define CONFIG_HAS_VPCI_GUEST_SUPPORT so dead code is not compiled
>   for x86
> Since v1:
>  - constify struct pci_dev where possible
>  - do not open code is_system_domain()
>  - extended the commit message
> ---
>  xen/drivers/Kconfig           |  4 ++++
>  xen/drivers/passthrough/pci.c |  9 +++++++++
>  xen/drivers/vpci/vpci.c       | 23 +++++++++++++++++++++++
>  xen/include/xen/vpci.h        | 20 ++++++++++++++++++++
>  4 files changed, 56 insertions(+)
> 
> diff --git a/xen/drivers/Kconfig b/xen/drivers/Kconfig
> index db94393f47a6..780490cf8e39 100644
> --- a/xen/drivers/Kconfig
> +++ b/xen/drivers/Kconfig
> @@ -15,4 +15,8 @@ source "drivers/video/Kconfig"
>  config HAS_VPCI
>       bool
>  
> +config HAS_VPCI_GUEST_SUPPORT
> +     bool
> +     depends on HAS_VPCI

I would assume this is to go away once the work is finished? I don't
think it makes sense to split vPCI code between domU/dom0 on a
build-time basis.

> +
>  endmenu
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 9f804a50e780..805ab86ed555 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -870,6 +870,10 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>      if ( ret )
>          goto out;
>  
> +    ret = vpci_deassign_device(d, pdev);
> +    if ( ret )
> +        goto out;
> +
>      if ( pdev->domain == hardware_domain  )
>          pdev->quarantine = false;
>  
> @@ -1429,6 +1433,11 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>          rc = hd->platform_ops->assign_device(d, devfn, pci_to_dev(pdev), flag);
>      }
>  
> +    if ( rc )
> +        goto done;
> +
> +    rc = vpci_assign_device(d, pdev);
> +
>   done:
>      if ( rc )
>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
> index 1666402d55b8..0fe86cb30d23 100644
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -86,6 +86,29 @@ int __hwdom_init vpci_add_handlers(struct pci_dev *pdev)
>  
>      return rc;
>  }
> +
> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> +/* Notify vPCI that device is assigned to guest. */
> +int vpci_assign_device(struct domain *d, const struct pci_dev *dev)
> +{
> +    /* It only makes sense to assign for hwdom or guest domain. */
> +    if ( is_system_domain(d) || !has_vpci(d) )
> +        return 0;
> +
> +    return 0;
> +}
> +
> +/* Notify vPCI that device is de-assigned from guest. */
> +int vpci_deassign_device(struct domain *d, const struct pci_dev *dev)
> +{
> +    /* It only makes sense to de-assign from hwdom or guest domain. */
> +    if ( is_system_domain(d) || !has_vpci(d) )
> +        return 0;
> +
> +    return 0;
> +}
> +#endif /* CONFIG_HAS_VPCI_GUEST_SUPPORT */
> +
>  #endif /* __XEN__ */
>  
>  static int vpci_register_cmp(const struct vpci_register *r1,
> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
> index 2e910d0b1f90..ecc08f2c0f65 100644
> --- a/xen/include/xen/vpci.h
> +++ b/xen/include/xen/vpci.h
> @@ -242,6 +242,26 @@ static inline bool vpci_process_pending(struct vcpu *v)
>  }
>  #endif
>  
> +#if defined(CONFIG_HAS_VPCI) && defined(CONFIG_HAS_VPCI_GUEST_SUPPORT)

You don't need to check for CONFIG_HAS_VPCI, as
CONFIG_HAS_VPCI_GUEST_SUPPORT already depends on CONFIG_HAS_VPCI being
set.
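
I.e. the guard can be reduced to just (sketch of what I mean):

#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
/* Notify vPCI that device is assigned/de-assigned to/from guest. */
int __must_check vpci_assign_device(struct domain *d,
                                    const struct pci_dev *dev);
int __must_check vpci_deassign_device(struct domain *d,
                                      const struct pci_dev *dev);
#else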

> +/* Notify vPCI that device is assigned/de-assigned to/from guest. */
> +int __must_check vpci_assign_device(struct domain *d,
> +                                    const struct pci_dev *dev);
> +int __must_check vpci_deassign_device(struct domain *d,
> +                                      const struct pci_dev *dev);
> +#else
> +static inline int vpci_assign_device(struct domain *d,
> +                                     const struct pci_dev *dev)
> +{
> +    return 0;
> +};
> +
> +static inline int vpci_deassign_device(struct domain *d,
> +                                       const struct pci_dev *dev)
> +{
> +    return 0;
> +};

You need the __must_check attributes here also to match the prototypes
above.
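
I.e. something like the below (a sketch only, mirroring the declarations
above; the stray trailing semicolons after the function bodies can be
dropped at the same time):

static inline int __must_check vpci_assign_device(struct domain *d,
                                                  const struct pci_dev *dev)
{
    return 0;
}

static inline int __must_check vpci_deassign_device(struct domain *d,
                                                    const struct pci_dev *dev)
{
    return 0;
}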

Thanks, Roger.