
Re: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...


  • To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Wed, 24 Jul 2019 15:20:13 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Wed, 24 Jul 2019 15:21:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [Xen-devel] [PATCH v3 4/4] iommu / pci: re-implement XEN_DOMCTL_get_device_group...

On 16.07.2019 14:20, Paul Durrant wrote:
>> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
>> Sent: 16 July 2019 12:28
>>
>> On Tue, Jul 16, 2019 at 11:16:57AM +0100, Paul Durrant wrote:
>>> @@ -81,6 +85,48 @@ int iommu_group_assign(struct pci_dev *pdev, void *arg)
>>>       return 0;
>>>   }
>>>
>>> +int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
>>> +                           XEN_GUEST_HANDLE_64(uint32) buf,
>>> +                           unsigned int max_sdevs)
>>> +{
>>> +    struct iommu_group *grp = NULL;
>>> +    struct pci_dev *pdev;
>>> +    unsigned int i = 0;
>>> +
>>> +    pcidevs_lock();
>>> +
>>> +    for_each_pdev ( d, pdev )
>>> +    {
>>> +        if ( pdev->sbdf.sbdf == sbdf.sbdf )
>>> +        {
>>> +            grp = pdev->grp;
>>> +            break;
>>> +        }
>>> +    }
>>> +
>>> +    if ( !grp )
>>> +        goto out;
>>> +
>>> +    for_each_pdev ( d, pdev )
>>> +    {
>>> +        if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
>>> +             pdev->grp != grp )
>>> +            continue;
>>> +
>>> +        if ( i < max_sdevs &&
>>
>> AFAICT you are adding the check here in order to keep current
>> behaviour?
> 
> Yes.
> 
>> But isn't it wrong to not report to the caller that the buffer was
>> smaller than required, and that the returned result is partial?
> 
> Given that there is zero documentation, I think your guess is as good
> as mine as to what the intention of the implementor was.
> 
>>
>> I don't see any way a caller can differentiate between a result that
>> uses the full buffer and one that's actually partial due to smaller
>> than required buffer provided. I think this function should return
>> -ENOSPC for such case.
> 
> I'd prefer to stick to the principle of no change in behaviour. TBH I
> have not found any caller of xc_get_device_group() apart from a python
> binding, and who knows what piece of antiquated code might sit on the
> other side of that. FWIW that code sets max_sdevs to 1024, so it's
> unlikely to run out of space and an -ENOSPC might therefore be ok.
> Still, I'd like to hear maintainer opinions on this.

How about we try to find a sufficiently backwards compatible solution
which still allows recognizing insufficient buffer space? First of all,
the common null-handle approach could be used to get a total count;
there's not much risk of that value going stale between two successive
domctl-s. Returning -ENOBUFS upon exceeding the provided buffer then
shouldn't really break pre-existing code: such code would misbehave
anyway if the group turned out larger than it expects.
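
Something along these lines, perhaps (an untested sketch only, re-using
the names from the quoted hunk; whether guest_handle_is_null() /
copy_to_guest_offset() are the right accessors here is for the actual
patch to sort out):

int iommu_get_device_group(struct domain *d, pci_sbdf_t sbdf,
                           XEN_GUEST_HANDLE_64(uint32) buf,
                           unsigned int max_sdevs)
{
    struct iommu_group *grp = NULL;
    struct pci_dev *pdev;
    unsigned int i = 0;
    int rc = 0;

    pcidevs_lock();

    for_each_pdev ( d, pdev )
    {
        if ( pdev->sbdf.sbdf == sbdf.sbdf )
        {
            grp = pdev->grp;
            break;
        }
    }

    if ( grp )
    {
        for_each_pdev ( d, pdev )
        {
            if ( xsm_get_device_group(XSM_HOOK, pdev->sbdf.sbdf) ||
                 pdev->grp != grp )
                continue;

            /* A null handle merely asks for the group size. */
            if ( !guest_handle_is_null(buf) )
            {
                if ( i >= max_sdevs )
                {
                    rc = -ENOBUFS; /* tell the caller the result is partial */
                    break;
                }

                if ( copy_to_guest_offset(buf, i, &pdev->sbdf.sbdf, 1) )
                {
                    rc = -EFAULT;
                    break;
                }
            }

            ++i;
        }
    }

    pcidevs_unlock();

    return rc ?: i;
}

Callers passing a large enough buffer would see no change; only a
caller whose buffer really is too small would now get told about it,
and a null handle would simply yield the count.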

Another option would be to isolate all of this compatibility stuff
into the libxc wrapper, making the hypercall itself return the actual
count irrespective of the passed in buffer size (i.e. a generalization
of the null-handle model).
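
Roughly like this on the tools side (signature approximated from the
existing tools/libxc one, bounce-buffer handling omitted for brevity):

int xc_get_device_group(xc_interface *xch, uint32_t domid,
                        uint32_t machine_sbdf, uint32_t max_sdevs,
                        uint32_t *num_sdevs, uint32_t *sdev_array)
{
    DECLARE_DOMCTL;
    int rc;

    domctl.cmd = XEN_DOMCTL_get_device_group;
    domctl.domain = domid;
    domctl.u.get_device_group.machine_sbdf = machine_sbdf;
    domctl.u.get_device_group.max_sdevs = max_sdevs;
    /* Bounce-buffer setup for sdev_array elided in this sketch. */
    set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);

    rc = do_domctl(xch, &domctl);
    if ( rc )
        return rc;

    /*
     * The hypercall now reports the full group size; clamp it here so
     * existing callers keep seeing at most max_sdevs entries, as today.
     */
    *num_sdevs = domctl.u.get_device_group.num_sdevs;
    if ( *num_sdevs > max_sdevs )
        *num_sdevs = max_sdevs;

    return 0;
}

That way the hypercall interface stays clean and the compatibility
quirk lives entirely in the wrapper.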

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel