
Re: [Xen-devel] [PATCH v4 7/7] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On Tue, 2014-04-01 at 16:26 +0100, Julien Grall wrote:
> On 1 April 2014 16:13, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> > On Tue, 2014-03-25 at 03:02 +0100, Arianna Avanzini wrote:
> >> Currently, the configuration-parsing code concerning the handling of the
> >> iomem parameter only invokes the XEN_DOMCTL_iomem_permission hypercall.
> >> This commit lets the XEN_DOMCTL_memory_mapping hypercall be invoked
> >> after XEN_DOMCTL_iomem_permission when the iomem parameter is parsed
> >> from a domU configuration file, so that the address range can be mapped
> >> to the address space of the domU. The hypercall is invoked only for
> >> domains using an auto-translated physmap.
> >
> > Sorry for not noticing this sooner but I've just been looking at this
> > again and it seems that XEN_DOMCTL_memory_mapping is a superset of
> > XEN_DOMCTL_iomem_permission.
> >
> > AFAICT XEN_DOMCTL_memory_mapping does exactly the same
> > iomem_{permit,deny}_access as XEN_DOMCTL_iomem_permission and then,
> > iff the guest is paging_mode_translate, sets up a p2m mapping for it.
> > (There's also some extra debug logging, let's ignore it.)
> >
> > IOW could the toolstack's existing call to XEN_DOMCTL_iomem_permission
> > not be completely replaced with a call to XEN_DOMCTL_memory_mapping and
> > have exactly the same effect as this patch, without the need for the
> > toolstack to infer the paging mode of the guest?
> >
> > I think the answer is yes, can someone confirm?
> 
> For x86 HVM, AFAIU only QEMU knows the memory layout of the guest.
> So we can't call XEN_DOMCTL_memory_mapping here (at least not to map
> the range in the p2m).

This use case is the iomem= option in the config file, in which qemu isn't
involved. So using this option on x86 HVM guests has basically the same
issues as using it on ARM guests regarding the need to understand what the
hypervisor (including qemu) is doing with the guest address space.
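
For anyone who hasn't used it, in the cfg file the option takes hexadecimal
page frame numbers, roughly like this (the values below are made up):

    # grant access to (and, with this series, map) 0x10 pages of MMIO
    # starting at machine frame 0xfff00
    iomem = [ "fff00,10" ]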

QEMU only uses this hypercall for PCI passthrough, which is a different
option in the guest cfg file.
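
To spell out the overlap I described above, this is roughly how I read the
hypervisor side of the two domctls. It is a simplified sketch from memory,
not the literal code, and map_range_in_p2m()/unmap_range_in_p2m() are
made-up placeholders for the arch-specific p2m helpers:

    #include <xen/sched.h>   /* struct domain, current */
    #include <xen/iocap.h>   /* iomem_*_access() helpers */
    #include <asm/paging.h>  /* paging_mode_translate() */

    /* XEN_DOMCTL_iomem_permission: permission bookkeeping only. */
    static int iomem_permission(struct domain *d, unsigned long mfn,
                                unsigned long nr, bool allow)
    {
        return allow ? iomem_permit_access(d, mfn, mfn + nr - 1)
                     : iomem_deny_access(d, mfn, mfn + nr - 1);
    }

    /*
     * XEN_DOMCTL_memory_mapping: checks the caller's own access, does the
     * same permission bookkeeping, then updates the p2m iff the guest is
     * auto-translated.
     */
    static int memory_mapping(struct domain *d, unsigned long gfn,
                              unsigned long mfn, unsigned long nr, bool add)
    {
        int rc;

        /* The check which iomem_permission lacks (the distinction below). */
        if ( !iomem_access_permitted(current->domain, mfn, mfn + nr - 1) )
            return -EPERM;

        rc = add ? iomem_permit_access(d, mfn, mfn + nr - 1)
                 : iomem_deny_access(d, mfn, mfn + nr - 1);

        if ( !rc && paging_mode_translate(d) )
            rc = add ? map_range_in_p2m(d, gfn, mfn, nr)    /* placeholder */
                     : unmap_range_in_p2m(d, gfn, mfn, nr); /* placeholder */

        return rc;
    }

If that reading is right then replacing the toolstack's existing
XEN_DOMCTL_iomem_permission call with XEN_DOMCTL_memory_mapping really
would be a strict superset, modulo the caller-access check discussed below.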

> > One subtle distinction is that it appears that XEN_DOMCTL_memory_mapping
> > cannot grant access to mfns to which the caller does not itself have
> > access. That seems reasonable though.
> >
> > In fact, the absence of this check in XEN_DOMCTL_iomem_permission could
> > be a security issue -- a domain with permission to build domains could
> > construct a sock-puppet domain and give it access to ports which the
> > builder itself cannot see. Or maybe this is deliberate and isolates the
> > builder domain from needing h/w permissions, in which case is
> > XEN_DOMCTL_memory_mapping wrong? Daniel?
> 
> I think XEN_DOMCTL_memory_mapping is correct (and therefore
> XEN_DOMCTL_iomem_permission wrong). It makes sense with the
> builder domain patch series from Daniel:
> see http://lists.xen.org/archives/html/xen-devel/2014-03/msg03553.html
> 
> > [0] which I am mentioning openly since it is listed in
> > docs/misc/xsm-flask.txt as being an interface where we will handle
> > issues publicly.
> 
> There is no reference to [0] in the mail. I guess you were talking
> about the last paragraph? :)

oops, yes. It was meant to go after "security issue".

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

