
Re: [Xen-devel] [PATCH v2] x86/mm/p2m: don't needlessly limit MMIO mapping order to 4k

>>> On 25.10.18 at 16:36, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 25/10/18 15:28, Jan Beulich wrote:
>>>>> On 17.10.18 at 16:24, <paul.durrant@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -2081,14 +2081,11 @@ static unsigned int mmio_order(const struct domain *d,
>>>                                 unsigned long start_fn, unsigned long nr)
>>>  {
>>>      /*
>>> -     * Note that the !iommu_use_hap_pt() here has three effects:
>>> -     * - cover iommu_{,un}map_page() not having an "order" input yet,
>>> -     * - exclude shadow mode (which doesn't support large MMIO mappings),
>>> -     * - exclude PV guests, should execution reach this code for such.
>>> -     * So be careful when altering this.
>>> +     * PV guests or shadow-mode HVM guests must be restricted to 4k
>>> +     * mappings.
>> Since you've already posted a patch to add order parameters to
>> IOMMU map/unmap, I'd prefer the respective part of the comment
>> to go away only when the order value really can be passed through.
>> This then requires permitting non-zero values only when the IOMMUs
>> also allow for the respective page sizes.
>> I am in particular not convinced that the time needed to carry out
>> the hypercall is going to be low enough even for 512 4k pages: You
>> need to take into account flushes, including those potentially
>> needed for ATS. You don't provide any proof that flushes are
>> always delayed and batched, nor do I think this is uniformly the
>> case.
> I haven't had time to pick this back up since v1.
> The long and the short of it is that we allow order 1G loops for regular
> RAM, even in !shared_pt mode.

Do we? CONFIG_DOMU_MAX_ORDER is 9 for both ARM and x86.
That still allows 2M loops, which - as said - worry me because
of the time non-batched TLB flushes take.

> From an "interaction with the IOMMU" point of view, mappings over
> regular RAM are no different to mappings over MMIO, so they should
> behave consistently.

I agree in general.


Xen-devel mailing list


