
Re: [Xen-devel] [v7][RFC][PATCH 05/13] hvmloader/mmio: reconcile guest mmio with reserved device memory



>>> On 24.10.14 at 09:34, <tiejun.chen@xxxxxxxxx> wrote:
> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -37,6 +37,44 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
>  enum virtual_vga virtual_vga = VGA_none;
>  unsigned long igd_opregion_pgbase = 0;
>  
> +unsigned int need_skip_rmrr = 0;

Static (and without initializer)?
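
I.e. presumably just

    static unsigned int need_skip_rmrr;

(statics are zero-initialized anyway).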

> +
> +/*
> + * Check whether there exists mmio hole in the specified memory range.
> + * Returns 1 if exists, else returns 0.
> + */
> +static int check_mmio_hole_confliction(uint64_t start, uint64_t memsize,

I don't think the word "confliction" exists. "conflict" please.

> +                           uint64_t mmio_start, uint64_t mmio_size)
> +{
> +    if ( start + memsize <= mmio_start || start >= mmio_start + mmio_size )
> +        return 0;
> +    else
> +        return 1;

Make this a simple single return statement?
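
I.e. something like (untested)

    return start + memsize > mmio_start && start < mmio_start + mmio_size;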

> +}
> +
> +static int check_reserved_device_memory_map(uint64_t mmio_base,
> +                                            uint64_t mmio_max)
> +{
> +    uint32_t i = 0;

Pointless initializer.

> +    uint64_t rdm_start, rdm_end;
> +    int nr_entries = -1;

And again.

> +
> +    nr_entries = hvm_get_reserved_device_memory_map();

It's completely unclear why this can't be the variable's initializer.
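
I.e. presumably just (untested)

    uint32_t i;
    uint64_t rdm_start, rdm_end;
    int nr_entries = hvm_get_reserved_device_memory_map();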

> +
> +    for ( i = 0; i < nr_entries; i++ )
> +    {
> +        rdm_start = rdm_map[i].start_pfn << PAGE_SHIFT;
> +        rdm_end = rdm_start + (rdm_map[i].nr_pages << PAGE_SHIFT);

I'm pretty certain I pointed out before that you can't simply shift
these fields - you risk losing significant bits.
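
Assuming these fields are narrower than 64 bits inside hvmloader,
something like (untested)

    rdm_start = (uint64_t)rdm_map[i].start_pfn << PAGE_SHIFT;
    rdm_end = rdm_start + ((uint64_t)rdm_map[i].nr_pages << PAGE_SHIFT);

would avoid truncating the shifted values.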

> +        if ( check_mmio_hole_confliction(rdm_start, (rdm_end - rdm_start),

Pointless parentheses.

> +                                         mmio_base, mmio_max - mmio_base) )
> +        {
> +            need_skip_rmrr++;
> +        }
> +    }
> +
> +    return nr_entries;
> +}
> +
>  void pci_setup(void)
>  {
>      uint8_t is_64bar, using_64bar, bar64_relocate = 0;
> @@ -58,7 +96,9 @@ void pci_setup(void)
>          uint32_t bar_reg;
>          uint64_t bar_sz;
>      } *bars = (struct bars *)scratch_start;
> -    unsigned int i, nr_bars = 0;
> +    unsigned int i, j, nr_bars = 0;
> +    int nr_entries = 0;

And another pointless initializer. Plus, as a count of something, this
surely wants to be "unsigned int". Also I guess the variable name
is too generic - nr_rdm_entries perhaps?
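
I.e. perhaps (untested)

    unsigned int i, j, nr_bars = 0, nr_rdm_entries;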

> @@ -363,11 +411,29 @@ void pci_setup(void)
>              bar_data &= ~PCI_BASE_ADDRESS_IO_MASK;
>          }
>  
> + reallocate_mmio:
>          base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
>          bar_data |= (uint32_t)base;
>          bar_data_upper = (uint32_t)(base >> 32);
>          base += bar_sz;
>  
> +        if ( need_skip_rmrr )
> +        {
> +            for ( j = 0; j < nr_entries; j++ )
> +            {
> +                rdm_start = rdm_map[j].start_pfn << PAGE_SHIFT;
> +                rdm_end = rdm_start + (rdm_map[j].nr_pages << PAGE_SHIFT);
> +                if ( check_mmio_hole_confliction(rdm_start,
> +                                                 (rdm_end - rdm_start),
> +                                                 base, bar_sz) )

"base" was already updated by this point.

> +                {
> +                    resource->base = rdm_end;
> +                    need_skip_rmrr--;
> +                    goto reallocate_mmio;

If you ever get here, the earlier determination of whether the MMIO
hole is large enough may get invalidated. I.e. I'm afraid this approach
is still too simplistic. Also, the way you do the retry, the resulting
bar_data and bar_data_upper will likely end up being garbage.
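
To illustrate just the ordering/retry aspect (an untested sketch, using
the renamed helper and nr_rdm_entries suggested above, and still not
addressing the hole-size concern), the conflict check would need to run
before bar_data/bar_data_upper get derived from base, e.g.

        int conflict;

        do {
            base = (resource->base + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
            conflict = 0;
            for ( j = 0; j < nr_rdm_entries; j++ )
            {
                rdm_start = (uint64_t)rdm_map[j].start_pfn << PAGE_SHIFT;
                rdm_end = rdm_start +
                          ((uint64_t)rdm_map[j].nr_pages << PAGE_SHIFT);
                if ( check_mmio_hole_conflict(rdm_start, rdm_end - rdm_start,
                                              base, bar_sz) )
                {
                    resource->base = rdm_end;
                    conflict = 1;
                    break;
                }
            }
        } while ( conflict );

        bar_data |= (uint32_t)base;
        bar_data_upper = (uint32_t)(base >> 32);
        base += bar_sz;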

Did you actually _test_ this code?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

