
Re: [Xen-devel] [PATCH 4/5] print: introduce a format specifier for pci_sbdf_t



>>> On 10.05.19 at 18:10, <roger.pau@xxxxxxxxxx> wrote:
> The new format specifier is '%pp', and prints a pci_sbdf_t using the
> seg:bus:dev.func format. Replace all SBDF values printed using
> '%04x:%02x:%02x.%u' with the new format specifier.

On the positive side, Linux doesn't use 'p' yet, so we're only at risk
of a future conflict. However, having to pass a 64-bit pointer just
to print a 32-bit entity seems rather wasteful to me. Since we can't
use entirely new format specifiers, did you consider (ab)using one
we rarely use, like %o, suffixed similarly to what we do for %p? The
extension could be restricted to apply only when neither field width
nor precision nor any flags were specified, i.e. only to plain %o (at
least initially).

We'd then have something along the lines of

#define PRI_sbdf "op"
#define PRI_SBDF(v) ((v).sbdf)

and

    printk("%" PRI_sbdf ": ...\n", PRI_SBDF(pdev->sbdf), ...);

> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -392,6 +392,20 @@ static char *print_vcpu(char *str, char *end, const struct vcpu *v)
>      return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
>  }
>  
> +static char *print_pci_addr(char *str, char *end, const pci_sbdf_t *sbdf)
> +{
> +    str = number(str, end, sbdf->seg, 16, 4, -1, ZEROPAD);
> +    if ( str < end )
> +        *str = ':';
> +    str = number(str + 1, end, sbdf->bus, 16, 2, -1, ZEROPAD);
> +    if ( str < end )
> +        *str = ':';
> +    str = number(str + 1, end, sbdf->dev, 16, 2, -1, ZEROPAD);
> +    if ( str < end )
> +        *str = '.';
> +    return number(str + 1, end, sbdf->func, 10, -1, -1, 0);

It shouldn't really matter, but may I suggest using 8 instead of 10
here? The function number can only be 0-7, so the output would be
identical either way, but base 8 matches the field's 3-bit width.
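
I.e. the final call would become:

    return number(str + 1, end, sbdf->func, 8, -1, -1, 0);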

> @@ -519,6 +533,10 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
>      case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
>          ++*fmt_ptr;
>          return print_vcpu(str, end, arg);
> +
> +    case 'p': /* PCI SBDF. */
> +        ++*fmt_ptr;
> +        return print_pci_addr(str, end, arg);
>      }

Please insert at the alphabetically correct place.
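
I.e., assuming 'h' and 's' remain the neighbouring handlers, the
switch would read roughly:

    case 'h': /* ... existing hex-buffer handling ... */

    case 'p': /* PCI SBDF. */
        ++*fmt_ptr;
        return print_pci_addr(str, end, arg);

    case 's': /* ... existing symbol handling ... */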

> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -717,9 +717,8 @@ static u16 __init parse_ivhd_device_special(
>          return 0;
>      }
>  
> -    AMD_IOMMU_DEBUG("IVHD Special: %04x:%02x:%02x.%u variety %#x handle %#x\n",
> -                    seg, PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf),
> -                    special->variety, special->handle);
> +    AMD_IOMMU_DEBUG("IVHD Special: %pp variety %#x handle %#x\n",
> +                    &PCI_SBDF2_T(seg, bdf), special->variety, special->handle);

The inefficiency of the %p-based approach is perhaps best seen with an
example like this: The compiler will have to instantiate an unnamed variable
on the stack to hold the value of the compound literal, just to be able to
take its address.
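
To make this concrete, assuming PCI_SBDF2_T is a compound-literal
wrapper along these (purely illustrative) lines:

    /* Hypothetical definition, for illustration only. */
    #define PCI_SBDF2_T(seg, bdf) \
        ((pci_sbdf_t){ .sbdf = ((uint32_t)(seg) << 16) | (bdf) })

then &PCI_SBDF2_T(seg, bdf) forces the 32-bit value to be materialised
in memory just so its address can be taken, whereas the %o-based
alternative would pass the value itself.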

> @@ -900,14 +891,10 @@ int pci_release_devices(struct domain *d)
>          return ret;
>      }
>      while ( (pdev = pci_get_pdev_by_domain(d, -1, -1, -1)) )
> -    {
> -        bus = pdev->sbdf.bus;
> -        devfn = pdev->sbdf.extfunc;
> -        if ( deassign_device(d, pdev->sbdf.seg, bus, devfn) )
> -            printk("domain %d: deassign device (%04x:%02x:%02x.%u) failed!\n",
> -                   d->domain_id, pdev->sbdf.seg, bus,
> -                   PCI_SLOT(devfn), PCI_FUNC(devfn));
> -    }
> +        if ( deassign_device(d, pdev->sbdf.seg, pdev->sbdf.bus,
> +                             pdev->sbdf.extfunc) )
> +            printk("domain %d: deassign device (%pp) failed!\n",
> +                   d->domain_id, &pdev->sbdf);

Could you switch to %pd here (and elsewhere) at the same time?
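
I.e. roughly:

    printk("%pd: deassign device (%pp) failed!\n", d, &pdev->sbdf);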

Jan


