
Re: [Xen-devel] [v3][PATCH 07/16] hvmloader/pci: skip reserved ranges



On 2015/6/18 14:29, Jan Beulich wrote:
On 18.06.15 at 08:17, <tiejun.chen@xxxxxxxxx> wrote:
On 2015/6/17 17:24, Jan Beulich wrote:
On 17.06.15 at 11:18, <tiejun.chen@xxxxxxxxx> wrote:
On 2015/6/17 17:02, Jan Beulich wrote:
On 17.06.15 at 10:26, <tiejun.chen@xxxxxxxxx> wrote:
Something hits me to generate another idea,

#1. Still allocate all devices as before.
#2. Look up all actual BARs to check whether they conflict with RMRRs.

We can skip these BARs, leaving them zeroed; that makes the later lookup easy.

#3. Reallocate these conflicting BARs.
#3.1 Try to reallocate them from the remaining low resources.
#3.2 If the remaining resources aren't enough, allocate them from
high_mem_resource (see the sketch below).
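
In pseudo-C, the idea would look roughly like this (a minimal sketch;
struct range, conflicts_rmrr() and mark_rmrr_conflicts() are hypothetical
names for illustration, not existing hvmloader code):

#include <stdbool.h>
#include <stdint.h>

struct range { uint64_t start, end; };          /* one RMRR, [start, end) */
struct bar   { uint64_t addr, size; bool is_64bit; };

/* Hypothetical helper: does this BAR overlap any reserved range? */
static bool conflicts_rmrr(const struct bar *b,
                           const struct range *rmrr, unsigned int nr)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( b->addr < rmrr[i].end && b->addr + b->size > rmrr[i].start )
            return true;

    return false;
}

/* Runs after step #1 (the normal allocation pass) has finished. */
static void mark_rmrr_conflicts(struct bar *bars, unsigned int nr_bars,
                                const struct range *rmrr,
                                unsigned int nr_rmrr)
{
    unsigned int i;

    /* #2: zero every BAR that landed on a reserved range. */
    for ( i = 0; i < nr_bars; i++ )
        if ( conflicts_rmrr(&bars[i], rmrr, nr_rmrr) )
            bars[i].addr = 0;

    /*
     * #3: a later pass reallocates every BAR with addr == 0 -- #3.1 from
     * the remaining low resources, or #3.2 from high_mem_resource
     * (possible only for 64-bit BARs, as noted below).
     */
}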

That's possible only for 64-bit BARs.

You're right, so it's not proper to adjust mmio_total to include the
conflicting reserved ranges and then move all conflicting BARs to
high_mem_resource, as Kevin suggested previously. So at a high level we
still need to decrease pci_mem_start to populate more RAM to compensate
for them, as I did, right?

You probably should do both: prefer moving things beyond 4Gb, but if
that's not possible, increase the MMIO hole.


I'm trying to figure out a better solution. Perhaps we can allocate
32-bit BARs and 64-bit BARs in sequence; that may help us bypass those
complicated corner cases.

Dealing with 32- and 64-bit BARs separately won't help at all, as

More precisely, I'm proposing to deal with them in sequence, not separately.

there may only be 32-bit ones, or the set of 32-bit ones may
already require you to do re-arrangements. Plus, for compatibility

Yes, but I don't see how those cases are specific to my idea.

reasons (just like physical machines' BIOSes do), avoiding placing
MMIO above 4Gb where possible is still a goal.

Are you sure you've read my idea completely? I don't intend to expand PCI memory above 4GB.

Let me state this simply:

#1. I'm still trying to allocate all 32-bit BARs from [pci_mem_start, pci_mem_end] as before.

#2. But [pci_mem_start, pci_mem_end] might no longer be enough to cover all 32-bit BARs because of RMRRs, right? So I will populate RAM to push the hole downward to cur_pci_mem_start (= pci_mem_start - size of the reserved device memory), then allocate the remaining 32-bit BARs from [cur_pci_mem_start, pci_mem_start] (see the sketch below).

#3. Then I'm still trying to allocate 64-bit BARs from [pci_mem_start, pci_mem_end], and only relocate them above 4GB if that isn't enough. This just follows the original logic.

So is anything here breaking that goal? Overall it's the same as the original.
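
A hedged sketch of step #2 (reserved_size and the relocate_ram callback
are placeholders for whatever mechanism actually moves guest RAM out of
the window; nothing here is existing hvmloader code):

#include <stdint.h>

/* Push the low MMIO hole downward by the size of the reserved device
 * memory, and hand the freed window to the 32-bit BAR allocator. */
static uint32_t extend_hole_downward(uint32_t pci_mem_start,
                                     uint32_t reserved_size,
                                     void (*relocate_ram)(uint32_t, uint32_t))
{
    uint32_t cur_pci_mem_start = pci_mem_start - reserved_size;

    /* Move guest RAM out of [cur_pci_mem_start, pci_mem_start). */
    relocate_ram(cur_pci_mem_start, pci_mem_start);

    /* 32-bit BARs that didn't fit in [pci_mem_start, pci_mem_end) are
     * then allocated from [cur_pci_mem_start, pci_mem_start). */
    return cur_pci_mem_start;
}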


#1. We don't calculate how much memory should be compensated and added
to expand the hole, as we thought previously.

#2. Instead, before allocating BARs, we just check whether reserved device
memory really conflicts with the default region [pci_mem_start,
pci_mem_end].

#2.1 If not, obviously nothing is changed.
#2.2 If yes, we introduce a new local bool, bar32_allocating, which
indicates whether we want to allocate 32-bit BARs and 64-bit BARs separately.

So here we should set it to true, and we also need to set 'bar64_relocate'
so BARs can be relocated to 64-bit space (see the sketch below).
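
Something like this, as a sketch (hedged again; struct range is the
hypothetical type from the first sketch, and the function name is made up
for illustration):

#include <stdbool.h>
#include <stdint.h>

struct range { uint64_t start, end; };   /* as in the first sketch */

/* Run once, before any BAR is allocated. */
static void check_reserved_conflicts(const struct range *rdm,
                                     unsigned int nr,
                                     uint64_t pci_mem_start,
                                     uint64_t pci_mem_end,
                                     bool *bar32_allocating,
                                     bool *bar64_relocate)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
        if ( rdm[i].start < pci_mem_end && rdm[i].end > pci_mem_start )
        {
            /* #2.2: allocate 32-bit and 64-bit BARs separately, and
             * allow relocating BARs above 4GB if low memory runs out. */
            *bar32_allocating = true;
            *bar64_relocate = true;
            return;
        }

    /* #2.1: no conflict -- nothing changes. */
}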


'bar64_relocate' doesn't mean we always allocate them from high memory. Instead, we first try to allocate them from low PCI memory, and only if low memory isn't enough do we relocate BARs to 64-bit space. This is the original mechanism and I'm just reusing it.
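
For reference, the existing per-BAR decision in hvmloader's pci_setup()
looks roughly like this (a paraphrase from memory, wrapped in a standalone
helper for illustration, not a verbatim quote of
tools/firmware/hvmloader/pci.c):

#include <stdbool.h>
#include <stdint.h>

struct resource { uint64_t base, max; };

/* Prefer low PCI memory; only a 64-bit BAR, with relocation enabled and
 * not enough low space left for what remains to be allocated, goes to
 * high memory. */
static struct resource *pick_resource(bool is_64bar, bool bar64_relocate,
                                      uint64_t mmio_total,
                                      struct resource *mem_resource,
                                      struct resource *high_mem_resource)
{
    bool using_64bar = is_64bar && bar64_relocate &&
        (mmio_total > (mem_resource->max - mem_resource->base));

    return using_64bar ? high_mem_resource : mem_resource;
}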

Doesn't look like the right approach to me. As said before, I think

Could you read what I'm saying again? I just feel you haven't understood what I mean. If you still think I'm wrong, let me know.

you should allocate BARs _around_ reserved regions (perhaps

I don't touch the BAR allocation itself directly.

filling non-aligned areas first, again utilizing that BARs are always
a power of 2 in size).

We're populating RAM at *page* granularity before allocating, as before.
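
For comparison, one way to read Jan's suggestion (a hedged sketch, not
anyone's actual patch, and ignoring his "fill non-aligned areas first"
refinement): walk each power-of-2-sized BAR up to the next size-aligned
address that avoids every reserved region.

#include <stdbool.h>
#include <stdint.h>

struct range { uint64_t start, end; };   /* as in the first sketch */

/* 'size' must be a power of 2, which BAR sizes always are. */
static uint64_t place_around_reserved(uint64_t base, uint64_t size,
                                      const struct range *rsvd,
                                      unsigned int nr)
{
    uint64_t addr = (base + size - 1) & ~(size - 1);   /* align up */
    unsigned int i;
    bool moved;

    do {
        moved = false;
        for ( i = 0; i < nr; i++ )
            if ( addr < rsvd[i].end && addr + size > rsvd[i].start )
            {
                /* Bump past the reserved region and re-align. */
                addr = (rsvd[i].end + size - 1) & ~(size - 1);
                moved = true;
            }
    } while ( moved );

    return addr;
}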

Thanks
Tiejun


Jan




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

