
Re: [Xen-devel] [PATCH v3 3/4] iommu: introduce iommu_groups


  • To: Paul Durrant <paul.durrant@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Wed, 24 Jul 2019 14:29:40 +0000
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 24 Jul 2019 14:33:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v3 3/4] iommu: introduce iommu_groups

On 16.07.2019 12:16, Paul Durrant wrote:
> --- a/xen/drivers/passthrough/Makefile
> +++ b/xen/drivers/passthrough/Makefile
> @@ -4,6 +4,7 @@ subdir-$(CONFIG_X86) += x86
>  subdir-$(CONFIG_ARM) += arm
>  
>  obj-y += iommu.o
> +obj-$(CONFIG_HAS_PCI) += groups.o

I assume this dependency on PCI is temporary, as there's nothing
inherently tying grouping of devices to PCI (afaict)?

> +int iommu_group_assign(struct pci_dev *pdev, void *arg)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    int id;
> +    struct iommu_group *grp;
> +
> +    if ( !ops->get_device_group_id )
> +        return 0;

With you making groups mandatory (i.e. even solitary devices getting
put in a group), shouldn't this be -EOPNOTSUPP, maybe accompanied by
ASSERT_UNREACHABLE()?
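
I.e. something along these lines (just a sketch of the above
suggestion):

    if ( !ops->get_device_group_id )
    {
        ASSERT_UNREACHABLE();
        return -EOPNOTSUPP;
    }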

> +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> +    if ( id < 0 )
> +        return -ENODATA;
> +
> +    grp = get_iommu_group(id);
> +    if ( !grp )
> +        return -ENOMEM;
> +
> +    if ( iommu_verbose )
> +        printk(XENLOG_INFO "Assign %04x:%02x:%02x.%u -> IOMMU group %x\n",
> +               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> +               PCI_FUNC(pdev->devfn), grp->id);

I'm not overly happy about this new logging: on modern systems a
debug-level run is already rather verbose about PCI devices,
simply because there are so many of them. If my hope not to see
individual devices put in groups is not going to be fulfilled,
can we at least try to come to some agreement that certain
devices which can't sensibly be passed through won't be assigned
groups (and hence won't produce output here)? A group-less
device would then automatically be unable to have its owner
changed.
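
For illustration only, a minimal sketch of what I mean, placed
ahead of the get_device_group_id() call (is_assignable() is a
purely hypothetical predicate, standing in for whatever criterion
we end up agreeing on):

    /* Hypothetical: devices that can't sensibly be passed through
     * get no group, hence no log line, and can never have their
     * owner changed. */
    if ( !is_assignable(pdev) )
        return 0;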

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel