Re: [PATCH v5 08/14] vpci/header: program p2m with guest BAR view
On 02.02.22 12:34, Roger Pau Monné wrote:
> On Wed, Feb 02, 2022 at 09:46:21AM +0000, Oleksandr Andrushchenko wrote:
>>>>> + gdprintk(XENLOG_G_DEBUG,
>>>>> + "%smap [%lx, %lx] -> %#"PRI_gfn" for %pd\n",
>>>>> + map->map ? "" : "un", s, e, gfn_x(start_gfn),
>>>>> + map->d);
>>>> That's too chatty IMO, I could be fine with printing something along
>>>> this lines from modify_bars, but not here because that function can be
>>>> preempted and called multiple times.
>>> Ok, I will move this to modify_bars, as these prints are really helpful for debugging
>> I tried to implement the same, but now in init_bars:
>>
>> diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
>> index 667c04cee3ae..92407e617609 100644
>> --- a/xen/drivers/vpci/header.c
>> +++ b/xen/drivers/vpci/header.c
>> @@ -57,10 +57,6 @@ static int map_range(unsigned long s, unsigned long e, void *data,
>> */
>> start_gfn = gfn_add(start_gfn, s - mfn_x(start_mfn));
>>
>> - gdprintk(XENLOG_G_DEBUG,
>> - "%smap [%lx, %lx] -> %#"PRI_gfn" for %pd\n",
>> - map->map ? "" : "un", s, e, gfn_x(start_gfn),
>> - map->d);
>> /*
>> * ARM TODOs:
>> * - On ARM whether the memory is prefetchable or not should be passed
>> @@ -258,6 +254,28 @@ static void defer_map(struct domain *d, struct pci_dev *pdev,
>> raise_softirq(SCHEDULE_SOFTIRQ);
>> }
>>
>> +static int print_range(unsigned long s, unsigned long e, void *data)
>> +{
>> + const struct map_data *map = data;
>> +
>> + for ( ; ; )
>> + {
>> + gfn_t start_gfn = _gfn(PFN_DOWN(is_hardware_domain(map->d)
>> + ? map->bar->addr
>> + : map->bar->guest_reg));
>> + mfn_t start_mfn = _mfn(PFN_DOWN(map->bar->addr));
>> +
>> + start_gfn = gfn_add(start_gfn, s - mfn_x(start_mfn));
>> +
>> + gdprintk(XENLOG_G_DEBUG,
>> + "%smap [%lx, %lx] -> %#"PRI_gfn" for %pd\n",
>> + map->map ? "" : "un", s, e, gfn_x(start_gfn),
>> + map->d);
>> + }
> This is an infinite loop AFAICT. Why do you need the for loop?
>
>> +
>> + return 0;
>> +}
>> +
>> static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>> {
>> struct vpci_header *header = &pdev->vpci->header;
>> @@ -423,7 +441,25 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
>> if ( !map_pending )
>> pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
>> else
>> + {
>> + struct map_data data = {
>> + .d = pdev->domain,
>> + .map = cmd & PCI_COMMAND_MEMORY,
>> + };
>> +
>> + for ( i = 0; i < ARRAY_SIZE(header->bars); i++ )
>> + {
>> + const struct vpci_bar *bar = &header->bars[i];
>> +
>> + if ( rangeset_is_empty(bar->mem) )
>> + continue;
>> +
>> + data.bar = bar;
>> + rc = rangeset_report_ranges(bar->mem, 0, ~0ul, print_range, &data);
> Since this is per-BAR we should also print that information and the
> SBDF of the device, ie:
>
> %pd SBDF: (ROM)BAR%u %map [%lx, %lx] -> ...
>
>> + }
>> +
>> defer_map(pdev->domain, pdev, cmd, rom_only);
>> + }
>>
>> return 0;
>>
>>
>> To me, implementing a single DEBUG print this way is a bit of an overkill.
>> I do understand your concern that "that function can be
>> preempted and called multiple times", but taking a look at the code
>> above I think we can accept that for DEBUG builds.
> It might be better if you print the per BAR positions at the top of
> modify_bars, where each BAR is added to the rangeset? Or do you care
> about reporting the holes also?
First of all, I didn't run this code; it is just to show the complexity,
assuming the approach itself is ok. If it is, then I'll get it working: please
do not review it literally yet.
The original print was used to show only those {un}mappings that
we actually do, no holes etc., so we need to print at the bottom of
init_bars, i.e. when the rangesets are all ready.
Again, IMO, adding such a big piece of DEBUG code instead of
printing a single DEBUG message could be a bit excessive.
I still hear your concerns about *when* it is printed, but I still think we
can allow that.
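For illustration only, here is a minimal, self-contained sketch of what the print_range callback could look like once the stray infinite for(;;) loop is dropped. The types below (map_data, vpci_bar, the hwdom flag) and the PAGE_SHIFT value are simplified stand-ins for Xen's internals, not the real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12 /* assumed 4K pages */

/* Simplified stand-in for Xen's struct vpci_bar. */
struct vpci_bar {
    unsigned long addr;      /* physical BAR address */
    unsigned long guest_reg; /* guest's view of the BAR */
};

/* Simplified stand-in for Xen's struct map_data. */
struct map_data {
    bool map;                      /* true = map, false = unmap */
    bool hwdom;                    /* stand-in for is_hardware_domain() */
    const struct vpci_bar *bar;
};

/* Translate the start of a reported machine-frame range to a guest frame:
   the hardware domain is mapped 1:1, a guest uses its own BAR view. */
static unsigned long range_start_gfn(const struct map_data *map,
                                     unsigned long s)
{
    unsigned long start_gfn = (map->hwdom ? map->bar->addr
                                          : map->bar->guest_reg) >> PAGE_SHIFT;
    unsigned long start_mfn = map->bar->addr >> PAGE_SHIFT;

    return start_gfn + (s - start_mfn);
}

/* Callback for rangeset_report_ranges(): invoked once per range, so no
   loop is needed inside the callback itself. */
static int print_range(unsigned long s, unsigned long e, void *data)
{
    const struct map_data *map = data;

    printf("%smap [%lx, %lx] -> %#lx\n",
           map->map ? "" : "un", s, e, range_start_gfn(map, s));
    return 0;
}
```

This is only a sketch of the callback's control flow; the real version would keep Xen's gfn_t/mfn_t accessors and the gdprintk() call.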
>
> Thanks, Roger.
Thank you,
Oleksandr