[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v1 06/14] xen/arm: Add support for PCI ecam operations


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Rahul Singh <Rahul.Singh@xxxxxxx>
  • Date: Thu, 16 Sep 2021 16:51:16 +0000
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 16 Sep 2021 16:51:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v1 06/14] xen/arm: Add support for PCI ecam operations

Hi Stefano,

> On 15 Sep 2021, at 9:45 pm, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> 
> On Wed, 15 Sep 2021, Rahul Singh wrote:
>>> On 15 Sep 2021, at 12:06 am, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>>> On Tue, 14 Sep 2021, Rahul Singh wrote:
>>>>>> +        return NULL;
>>>>>> +
>>>>>> +    busn -= cfg->busn_start;
>>>>>> +    base = cfg->win + (busn << cfg->ops->bus_shift);
>>>>>> +
>>>>>> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
>>>>>> +}
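(For reference, the computation quoted above follows the standard PCIe ECAM layout: bus in offset bits 27:20, device in 19:15, function in 14:12, register in 11:0. A minimal sketch of that arithmetic, with bus_shift/devfn_shift fixed to their plain-ECAM values of 20 and 12 — the function name is illustrative, not from the patch:)

```c
#include <stdint.h>

/* Standard ECAM offset: 1 MiB of config space per bus, 4 KiB per
 * (device, function).  bus_shift/devfn_shift are the generic knobs the
 * quoted code parameterises; 20 and 12 are their plain-ECAM values. */
static uint32_t ecam_offset(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
{
    const unsigned int bus_shift = 20;
    const unsigned int devfn_shift = 12;
    uint32_t devfn = ((uint32_t)dev << 3) | fn;  /* PCI_DEVFN(dev, fn) */

    return ((uint32_t)bus << bus_shift) | (devfn << devfn_shift) | reg;
}
```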
>>>>> 
>>>>> I understand that the arm32 part is not implemented and not part of this
>>>>> series, that's fine. However if the plan is that arm32 will dynamically
>>>>> map each bus individually, then I imagine this function will have an
>>>>> ioremap in the arm32 version. Which means that we also need an
>>>>> unmap_bus call in struct pci_ops. I understand that pci_ecam_unmap_bus
>>>>> would be a NOP today for arm64, but I think it makes sense to have it if
>>>>> we want the API to be generic.
>>>> 
>>>> As per my understanding we don't need pci_ecam_unmap_bus(..), as I don't
>>>> see any use case for unmapping a bus dynamically. We can add support for
>>>> per-bus mapping for arm32 once we implement the arm32 part.
>>>> Let me know your view on this.
>>> 
>>> In the patch titled "xen/arm: PCI host bridge discovery within XEN on
>>> ARM" there is the following in-code comment:
>>> 
>>> * On 64-bit systems, we do a single ioremap for the whole config space
>>> * since we have enough virtual address range available.  On 32-bit, we
>>> * ioremap the config space for each bus individually.
>>> *
>>> * As of now only 64-bit is supported 32-bit is not supported.
>>> 
>>> 
>>> So I take it that on arm32 we don't have enough virtual address range
>>> available, therefore we cannot ioremap the whole range. Instead, we'll
>>> have to ioremap the config space of each bus individually.
>> 
>> Yes, you are right; my understanding is the same.
>>> 
>>> I assumed that the idea was to call ioremap and iounmap dynamically,
>>> otherwise the total amount of virtual address range required would still
>>> be the same.
>> 
>> As per my understanding, for 32-bit we need per-bus mapping because we don't
>> have enough virtual address space in one contiguous chunk, but we can get the
>> virtual address space in several smaller chunks.
> 
> Interesting. I would have assumed that the sum of all the individual
> smaller ioremaps would still be equal to one big ioremap. Maybe for
> Linux it is different, but I don't think that many smaller ioremaps would
> buy us very much in Xen, because it is the total ioremap virtual space
> that is too small. Or am I missing something?
> 
> 
>> I am not sure whether we need to map/unmap the virtual address space for
>> each read/write call.
>> I just checked the Linux code [1], and there too the mapping is done once,
>> not for each read/write call.
> 
> So my guess is that for arm32 we would have to resort to dynamic
> map/unmap for each read/write call, unless there is a trick with the
> individual smaller ioremaps that I haven't spotted (e.g. maybe something
> doesn't get mapped that way?)
> 
> That said, given that we are uncertain about this and the arm32
> implementation is nowhere close, I think that we are OK to continue like
> this for this series. Maybe you could add a couple of sentences to the
> in-code comment so that if somebody wants to jump in and implement
> arm32 support they would know where to start.

I am OK with either approach: adding an in-code comment to explain this, or
implementing pci_ecam_add_bus(..) and pci_ecam_remove_bus() as Linux does [1],
so that we can call those functions from pci_read()/pci_write().

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/ecam.c#n126
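If we go the Linux-like route, per-bus mapping could look roughly like the
sketch below. This is hypothetical: the names are modelled on Linux's ecam.c,
the winp cache-per-bus array is an assumption, and the real code would use
Xen's ioremap/iounmap rather than the malloc()/free() stubs used here purely
to keep the sketch self-contained.

```c
#include <stdint.h>
#include <stdlib.h>

#define BUS_WINDOW_SIZE (1u << 20)   /* 1 MiB of ECAM space per bus */

/* Stand-ins for Xen's ioremap_nocache()/iounmap(), stubbed for illustration. */
static void *ioremap_stub(uint64_t pa, size_t len) { (void)pa; return malloc(len); }
static void iounmap_stub(void *va) { free(va); }

struct pci_config_window {
    uint64_t phys_addr;      /* physical base of the ECAM region */
    uint8_t  busn_start;
    void    *winp[256];      /* one cached virtual mapping per bus (arm32) */
};

/* Map the config space of 'busn' on first use; reuse the cached mapping
 * afterwards, so pci_read()/pci_write() do not remap on every access. */
static void *pci_ecam_add_bus(struct pci_config_window *cfg, uint8_t busn)
{
    unsigned int idx = busn - cfg->busn_start;

    if (!cfg->winp[idx])
        cfg->winp[idx] = ioremap_stub(cfg->phys_addr +
                                      (uint64_t)idx * BUS_WINDOW_SIZE,
                                      BUS_WINDOW_SIZE);
    return cfg->winp[idx];
}

/* Drop the per-bus mapping, freeing its virtual address range. */
static void pci_ecam_remove_bus(struct pci_config_window *cfg, uint8_t busn)
{
    unsigned int idx = busn - cfg->busn_start;

    if (cfg->winp[idx]) {
        iounmap_stub(cfg->winp[idx]);
        cfg->winp[idx] = NULL;
    }
}
```

The point of the cache is that repeated config accesses to the same bus reuse
one mapping; only evicting (or never caching) would force a map/unmap per
read/write, which is the cost being debated above.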


Regards,
Rahul
