
Re: [Xen-devel] (v2) Design proposal for RMRR fix

On Thu, Jan 15, 2015 at 10:05 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Wed, 2015-01-14 at 18:14 +0000, George Dunlap wrote:
>> On 01/14/2015 03:43 PM, Ian Campbell wrote:
>> > On Wed, 2015-01-14 at 15:39 +0000, George Dunlap wrote:
>> >> Actually, I was just thinking about this -- I'm not really sure why we
>> >> do the PCI MMIO stuff in hvmloader at all.  Is there any reason, other
>> >> than the fact that we need to tell Xen about updates to the physical
>> >> address space?  If not, it seems like doing it in SeaBIOS would make a
>> >> lot more sense, rather than having to maintain duplicate functionality
>> >> in hvmloader.
>> >
>> > I don't remember exactly, but I think it was because something about the
>> > PCI enumeration required reflecting in the ACPI tables, which hvmloader
also provides. Splitting it up was tricky; that was what I initially
tried when adding SeaBIOS support, and it turned into a rat's nest.
>> Blah. :-(
> It *might* have been more complicated because I was also trying to keep
> ROMBIOS+qemu-trad doing something sensible and worrying about code
> duplication, plus the whole seabios thing was pretty new to me at the
> time as well.
> It probably wouldn't be a waste of time for someone to spend, say, half a
> day taking another poke at it (modulo what you said below perhaps making
> it a little moot).

Another option to "solve" xenbug #28 might be actually to just start
by following what appears to be KVM's model -- i.e., rather than
creating the minimal MMIO hole possible and making it larger (as
hvmloader does), just start with a 0.5 or 1 G hole.  Modifying SeaBIOS
to understand Xen's mmio_hole_size parameter shouldn't be *too* hard;
then we could look at adding memory relocation back in (coming up with
something that works for both qemu-kvm and xen) if/when it turns out
to be necessary.


Xen-devel mailing list


