
Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server



>>> On 07.07.15 at 16:49, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> On 7/7/2015 10:43 PM, Jan Beulich wrote:
>>>>> On 07.07.15 at 16:30, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> I know that George and you have concerns about the differences
>>> between MMIO and guest page tables, but I do not quite understand
>>> why. :)
>>
>> But you read George's very nice description of the differences? I
>> ask because if you did, I don't see why you re-raise the question
>> above.
>>
> 
> Well, yes. I guess you mean this statement:
> "the former is one or two actual ranges of a significant size; the
> latter are (apparently) thousands of ranges of one page each."?
> But I do not understand why this is abusing the io range interface.
> Does the number matter so much? :)

Yes, we specifically set it that low so misbehaving tool stacks
(perhaps de-privileged) can't cause the hypervisor to allocate
undue amounts of memory for tracking these ranges. This
concern, btw, applies just as much to the rb-rangesets.
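
To illustrate the point, here is a minimal, self-contained sketch (not
the actual Xen rangeset code; the cap value and names are purely
illustrative): every registered range costs one tracking node in the
hypervisor, so refusing additions beyond a fixed limit bounds the
memory a misbehaving (possibly de-privileged) tool stack can force the
hypervisor to allocate, however many single-page ranges it tries to
register.

/*
 * Illustrative sketch only - not Xen code. Shows how a hard cap on the
 * number of tracked ranges bounds allocation.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_NR_IO_RANGES 256       /* illustrative cap */

struct io_range {
    unsigned long start, end;
    struct io_range *next;
};

struct io_rangeset {
    struct io_range *head;
    unsigned int nr_ranges;        /* ranges currently tracked */
    unsigned int limit;            /* hard cap on nr_ranges */
};

static int rangeset_add(struct io_rangeset *rs, unsigned long s,
                        unsigned long e)
{
    struct io_range *r;

    if (rs->nr_ranges >= rs->limit)
        return -ENOSPC;            /* cap reached: allocation stays bounded */

    r = malloc(sizeof(*r));
    if (!r)
        return -ENOMEM;

    r->start = s;
    r->end = e;
    r->next = rs->head;
    rs->head = r;
    rs->nr_ranges++;
    return 0;
}

int main(void)
{
    struct io_rangeset rs = { NULL, 0, MAX_NR_IO_RANGES };
    unsigned long gfn;
    int rc = 0;

    /* A caller registering thousands of single-page ranges hits the cap. */
    for (gfn = 0x100000; rc == 0; gfn++)
        rc = rangeset_add(&rs, gfn, gfn);

    printf("stopped after %u ranges (rc=%d)\n", rs.nr_ranges, rc);
    return 0;
}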

Plus the number you bump MAX_NR_IO_RANGES to is - as I
understood it - obtained phenomenologically, i.e. there's no
reason not to assume that some bigger graphics card may
need it to be bumped even further. The current count is
arbitrary too, but it limits a guest only insofar as there
can't be more than that many (possibly huge) MMIO ranges
across the complete set of devices passed through to it.
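
As a back-of-the-envelope illustration (the device and the numbers
below are made up, not taken from the patch): a passed-through device
contributes one range entry per BAR, independent of the BAR's size,
whereas tracking write-protected guest pages individually costs one
entry per page - which is where the thousands come from.

/* Illustrative only: range-entry consumption of the two use cases. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical passed-through graphics card: a 256MB and a 16MB BAR. */
    unsigned long bar_sizes_mb[] = { 256, 16 };
    unsigned int nr_bars = sizeof(bar_sizes_mb) / sizeof(bar_sizes_mb[0]);

    /* MMIO tracking: one range entry per BAR, regardless of size. */
    unsigned int mmio_entries = nr_bars;

    /*
     * Page-granular tracking of, say, 16MB of scattered guest page
     * tables: one entry per 4k page, since the pages aren't contiguous.
     */
    unsigned long tracked_mb = 16;
    unsigned long page_entries = tracked_mb * 1024 / 4;

    printf("MMIO pass-through needs %u range entries\n", mmio_entries);
    printf("per-page tracking needs %lu range entries\n", page_entries);
    return 0;
}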

And finally, the I/O ranges are called I/O ranges because they
are intended to cover I/O memory. RAM clearly isn't I/O memory,
even if it may be accessed directly by devices.

Jan


