
Re: [Xen-devel] [v3][PATCH 07/16] hvmloader/pci: skip reserved ranges



On 2015/6/17 16:05, Jan Beulich wrote:
On 17.06.15 at 09:54, <tiejun.chen@xxxxxxxxx> wrote:
On 2015/6/17 15:19, Jan Beulich wrote:
On 17.06.15 at 09:10, <tiejun.chen@xxxxxxxxx> wrote:
Yeah, this may waste some space in the worst case, but I think this
guarantees our change doesn't impact the original expectation, right?

"Some space" may be multiple Gb (e.g. the frame buffer of a graphics

Sure.

card), which is totally unacceptable.


But then I don't understand your approach. How can we fit all PCI
devices with just "the smallest power-of-2 region enclosing the reserved
device memory"?

For example, say the whole PCI memory region is
[0xa0000000, 0xa2000000], and there are two PCI devices, A and B, each
needing an allocation of 0x1000000. Without considering the RMRR:

A. [0xa0000000,0xa1000000]
B. [0xa1000000,0xa2000000]

But if an RMRR resides at [0xa0f00000, 0xa1f00000], it obviously
imposes its own 0x1000000 alignment, so the PCI memory region is expanded
to [0xa0000000, 0xa3000000], right?

The whole PCI memory region then actually splits into three segments:

#1. [0xa0000000, 0xa0f00000]
#2. [0xa0f00000, 0xa1f00000] -> occupied by the RMRR
#3. [0xa1f00000, 0xa3000000]

So only #3 remains available for allocation, and it can hold just one
device, right?

Right, i.e. this isn't even sufficient - you need [a0000000,a3ffffff]
to fit everything (but of course you can put smaller BARs into the
unused ranges [a0000000,a0efffff] and [a1f00000,a1ffffff]).

Yes, I knew there are holes of this sort that we should use efficiently,
as you said. I also considered this approach previously, but the current
PCI allocation framework doesn't lend itself to implementing it easily:

    /* Assign iomem and ioport resources in descending order of size. */
    for ( i = 0; i < nr_bars; i++ )
    {

I mean, it isn't easy to calculate the required size in advance, and
it's also difficult to find an appropriate PCI BAR to fit into those
"holes", so see below,

That's why I said it's going to be tricky to get all corner cases
right _and_ not use up more space than needed.

ought to work out the smallest power-of-2 region enclosing the reserved
device memory

Okay. I remember the smallest size of a given PCI I/O space is 8 bytes,
and the smallest size of a PCI memory space is 16 bytes. So

/* At least 16 bytes to align a PCI BAR size. */
uint64_t align = 16;

reserved_start = memory_map.map[j].addr;
reserved_size = memory_map.map[j].size;

reserved_start = reserved_start & ~(align - 1);
reserved_size = (reserved_size + align - 1) & ~(align - 1);

Is this correct?

Simply aligning the region doesn't help afaict. You need to fit it
with the other MMIO allocations.

I guess you mean just those MMIO allocations conflicting with an RMRR?
But we don't know their exact addresses until we actually allocate
them, right?

That's the point - you need to allocate them _around_ the reserved
regions.


Another idea just occurred to me:

#1. Still allocate all devices as before.
#2. Look up all actual BARs to check whether they conflict with an RMRR.

We can leave these BARs at zero, which makes them easy to look up later.

#3. Reallocate the conflicting BARs.
#3.1 Try to reallocate them from the remaining resources.
#3.2 If the remaining resources aren't enough, allocate them from high_mem_resource.

I just feel this approach may be easier and better. It might even help
eliminate the preexisting allocation failures, right?

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

