Re: [Xen-devel] [PATCH RFC] p2m: p2m_mmio_direct set RW permissions



>>> On 26.01.15 at 17:57, <elena.ufimtseva@xxxxxxxxxx> wrote:
> On Fri, Jan 23, 2015 at 10:50:23AM +0000, Jan Beulich wrote:
>> >>> On 22.01.15 at 18:34, <elena.ufimtseva@xxxxxxxxxx> wrote:
>> > (XEN)  00000000d56f0000 - 00000000d5fff000 (reserved)
>> 
>> So this is where one of the RMRRs sits in (and also where
>> the faults occur according to the two numbers you sent
>> earlier, which - as others have already said - is an indication
>> of the reported RMRRs being incomplete), ...
>> 
>> > (XEN)  00000000d5fff000 - 00000000d6000000 (usable)
>> > (XEN)  00000000d7000000 - 00000000df200000 (reserved)
>> 
>> ... and this is the exact range of the other one. But the usable
>> entry between them is a sign of the firmware not doing the
>> best job in assigning resources.
>> 
>> I don't, btw, think that blindly mapping all the reserved regions
>> into PVH Dom0's P2M would be (or is, if that's what's happening
>> today) correct - these regions are named reserved for a
>> reason. In the case here it's actually RAM, not MMIO, and
>> Dom0 (as well as Xen) has no business accessing these (for others
>> this may be different, e.g. the LAPIC and IO-APIC ones below,
>> but Xen learns/knows of them by means different from looking
>> at the memory map).
> 
> I understand this. At the same time, I believe PV dom0 does exactly
> this blind mapping. I also tried mapping these regions as read-only,
> and that worked. Could that be an option for these RAM regions?

No - they're reserved, so there shouldn't be _any_ access to them.
The only workaround I see as acceptable would be the already
proposed addition of a command line option for specifying
additional RMRR-like regions.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
