
Re: [Xen-devel] [PATCH v2] x86/p2m-pt: tighten conditions of IOMMU mapping updates



On Fri, Oct 2, 2015 at 10:16 AM, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> On Fri, Oct 02, 2015 at 01:31:37AM -0600, Jan Beulich wrote:
>> >>> George Dunlap <george.dunlap@xxxxxxxxxx> 10/01/15 6:16 PM >>>
>> >On 01/10/15 11:25, Jan Beulich wrote:
>> >> TBD: As already mentioned on the large-page-MMIO-mapping patch, there
>> >>      is an apparent inconsistency with PoD handling: 2M mappings get
>> >>      valid entries created, while 4k mappings don't. It would seem to
>> >>      me that the 4k case needs changing, even if today this may only
>> >>      be a latent bug. Question of course is why we don't rely on
>> >>      p2m_type_to_flags() doing its job properly and instead special
>> >>      case certain P2M types.
>> >
>> >The inconsistency in the conditionals there is a bit strange; but I'm
>> >pretty sure that in the 2MB case it is (at the moment) superfluous,
>> >because at the moment it seems that when setting a page with type
>> >p2m_populate_on_demand, it's always passing in _mfn(0), which is valid.
>> >
>> >(It used to pass a magic MFN, but Tim Deegan switched it to _mfn(0) at
>> >some point without comment.)
>>
>> Perhaps just because the magic MFN didn't always work? Tim?
>> To me it looks wrong to pass anything other than INVALID_MFN
>> there.
>>
>
> I think George and you are talking about another function?  Is there
> anything that prevents this patch from being committed as-is?

No -- I'm just answering Jan's "To Be Done" comment (I assume that's
what TBD means).  He's noticed something strange; but it's been there
for quite a while, and so it works both by inspection and by long
testing (probably a few releases now).  No point fiddling with it
until after the release.
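
For anyone following along, here is a minimal stand-alone sketch (not
actual Xen code; the mfn_t/INVALID_MFN-style names are redefined
locally and the validity check is a simplified stand-in) of why it
matters whether a PoD entry is installed with _mfn(0) or INVALID_MFN:
any conditional of the form "only create a real entry for a valid MFN"
treats the two cases differently.

/*
 * Stand-alone illustration, NOT Xen code.  Assumes a simplified
 * "is this MFN backed by real memory?" check; in Xen the real
 * predicate is more involved.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t mfn; } mfn_t;

#define INVALID_MFN ((mfn_t){ ~0ULL })   /* sentinel: no backing frame */
#define _mfn(x)     ((mfn_t){ (x) })

/* Simplified stand-in: everything except the sentinel counts as valid. */
static bool mfn_valid_example(mfn_t m)
{
    return m.mfn != INVALID_MFN.mfn;
}

int main(void)
{
    /* A PoD entry installed with _mfn(0) looks "valid" ...            */
    printf("PoD via _mfn(0):     valid=%d\n", mfn_valid_example(_mfn(0)));
    /* ... while one installed with INVALID_MFN does not, so the two
     * choices fall on different sides of a validity-based conditional. */
    printf("PoD via INVALID_MFN: valid=%d\n", mfn_valid_example(INVALID_MFN));
    return 0;
}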

 -George


