
Re: [Xen-devel] [v7][RFC][PATCH 05/13] hvmloader/mmio: reconcile guest mmio with reserved device memory



>>> On 27.10.14 at 08:12, <tiejun.chen@xxxxxxxxx> wrote:
> On 2014/10/24 22:42, Jan Beulich wrote:
>>>>> On 24.10.14 at 09:34, <tiejun.chen@xxxxxxxxx> wrote:
>>> --- a/tools/firmware/hvmloader/pci.c
>>> +++ b/tools/firmware/hvmloader/pci.c
>>> @@ -37,6 +37,44 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
>>>   enum virtual_vga virtual_vga = VGA_none;
>>>   unsigned long igd_opregion_pgbase = 0;
>>>
>>> +unsigned int need_skip_rmrr = 0;
>>
>> Static (and without initializer)?
> 
> static unsigned int need_skip_rmrr;

Please stop echoing back what was requested.

>>> +                           uint64_t mmio_start, uint64_t mmio_size)
>>> +{
>>> +    if ( start + memsize <= mmio_start || start >= mmio_start + mmio_size )
>>> +        return 0;
>>> +    else
>>> +        return 1;
>>
>> Make this a simple single return statement?
> 
> static int check_mmio_hole_conflict(uint64_t start, uint64_t memsize,
>                                      uint64_t mmio_start, uint64_t mmio_size)
> {
>      if ( start + memsize <= mmio_start || start >= mmio_start + mmio_size )
>          return 0;
> 
>      return 1;
> }

Is "a simple single return statement" ambiguous in any way? This

static bool check_mmio_hole_conflict(uint64_t start, uint64_t memsize,
                                      uint64_t mmio_start, uint64_t mmio_size)
{
     return start + memsize > mmio_start && start < mmio_start + mmio_size;
}

is how I think this should be.

>>> +}
>>> +
>>> +static int check_reserved_device_memory_map(uint64_t mmio_base,
>>> +                                            uint64_t mmio_max)
>>> +{
>>> +    uint32_t i = 0;
>>
>> Pointless initializer.
>>
>>> +    uint64_t rdm_start, rdm_end;
>>> +    int nr_entries = -1;
>>
>> And again.
>>
>>> +
>>> +    nr_entries = hvm_get_reserved_device_memory_map();
>>
>> It's completely unclear why this can't be the variable's initializer.
> 
>      uint32_t i;
>      uint64_t rdm_start, rdm_end;
>      int nr_rdm_entries = hvm_get_reserved_device_memory_map();

And (see also below) "unsigned int". It's bogus anyway to have the
function return the count by normal means but the actual array via a
global variable. I think you ought to switch to a consistent model.
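E.g. (just a rough sketch, with a hypothetical signature - the point
is merely that the count and the array travel the same way):

    /*
     * Return the number of entries (negative on error) and hand the
     * array back through the caller-provided pointer, so neither
     * piece of state needs to live in a global.
     */
    int hvm_get_reserved_device_memory_map(
        struct xen_reserved_device_memory **map);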

>>> +    for ( i = 0; i < nr_entries; i++ )
>>> +    {
>>> +        rdm_start = rdm_map[i].start_pfn << PAGE_SHIFT;
>>> +        rdm_end = rdm_start + (rdm_map[i].nr_pages << PAGE_SHIFT);
>>
>> I'm pretty certain I pointed out before that you can't simply shift
>> these fields - you risk losing significant bits.
> 
> I tried to go back and look into this, but only found you saying I 
> shouldn't use PAGE_SHIFT and PAGE_SIZE at the same time. If I'm still 
> missing something, could you show me what you expect?

Shifting a 32-bit quantity left still yields a 32-bit quantity, no matter
whether the result is then stored in a 64-bit variable. You need to
up-cast the left side of the shift first.
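I.e. something along these lines (assuming the fields are 32 bits
wide):

    rdm_start = (uint64_t)rdm_map[i].start_pfn << PAGE_SHIFT;
    rdm_end = rdm_start + ((uint64_t)rdm_map[i].nr_pages << PAGE_SHIFT);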

>>> @@ -58,7 +96,9 @@ void pci_setup(void)
>>>           uint32_t bar_reg;
>>>           uint64_t bar_sz;
>>>       } *bars = (struct bars *)scratch_start;
>>> -    unsigned int i, nr_bars = 0;
>>> +    unsigned int i, j, nr_bars = 0;
>>> +    int nr_entries = 0;
>>
>> And another pointless initializer. Plus as a count of something this
> 
> int nr_rdm_entries;
> 
>> surely wants to be "unsigned int". Also I guess the variable name
> 
> nr_rdm_entries should literally be unsigned int, but this value is 
> always set from hvm_get_reserved_device_memory_map(),
> 
> nr_rdm_entries = hvm_get_reserved_device_memory_map()
> 
> and that return value can be negative in some failure cases.

If only you checked for these negative values...
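I.e. something like (a sketch only):

    int rc = hvm_get_reserved_device_memory_map();

    if ( rc < 0 )
        BUG();    /* or skip the RMRR handling altogether */
    nr_rdm_entries = rc;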

> Additionally, there is some original code just following my code:
> 
>          if ( need_skip_rmrr )
>          {
>               ...
>          }
> 
>       base += bar_sz;
> 
>          if ( (base < resource->base) || (base > resource->max) )
>          {
>              printf("pci dev %02x:%x bar %02x size "PRIllx": no space for "
>                     "resource!\n", devfn>>3, devfn&7, bar_reg,
>                     PRIllx_arg(bar_sz));
>              continue;
>          }
> 
> This can guarantee we don't overrun the previous MMIO range.

Resulting in the BAR not getting a value assigned afaict. Certainly
not what we want as a side effect of your changes.

>> and bar_data_upper will likely end up being garbage.
>>
>> Did you actually _test_ this code?
> 
> Actually in my real case those RMRR ranges are always below MMIO.

Below whose MMIO? The host's or the guest's? In the latter case,
just (in order to test your code) increase the range reserved for
MMIO enough to cover the RMRR range.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

