
Re: [Xen-devel] [RFC PATCH 3/16]: PVH xen: Add PHYSDEVOP_map_iomem



>>> On 24.01.13 at 03:12, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Wed, 16 Jan 2013 09:45:07 +0000
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 16.01.13 at 00:35, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
>> > On Mon, 14 Jan 2013 11:23:42 +0000 "Jan Beulich"
>> > <JBeulich@xxxxxxxx> wrote:
>> >> >>> On 12.01.13 at 02:32, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >> >>> wrote:
>> >> > In this patch, we define PHYSDEVOP_map_iomem and add support for
>> >> > it. Also, the XEN_DOMCTL_memory_mapping code is moved into a
>> >> > function so it can be shared later for PVH. There is no change in
>> >> > XEN_DOMCTL_memory_mapping functionality.
>> >> 
>> >> Is that to say that a PVH guest will need to issue this for each
>> >> and every MMIO range? Irrespective of being DomU or Dom0? I would
>> >> have expected that this could be transparent...
>> > 
>> > Hmm.. we discussed this at the Xen hackathon last year. The
>> > guest maps the entire range in one shot. Doing it this way keeps
>> > things flexible for the future if EPT size becomes a problem.
>> 
>> But is this the only way to do this? I.e. is there no transparent
>> alternative? 
> 
> Like what? If you can explain a bit more, I can try to prototype it.
> Are you suggesting you don't want the guest to be involved at all in
> the mmio mappings? BTW, at present we map the entire range by walking
> the e820.

If you map the entire range anyway, I see even less reason for
Dom0 to do that - the hypervisor could launch Dom0 with all the
ranges already mapped.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

