
Re: [Xen-devel] [RFC PATCH v2 3/3] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On Thu, 2014-03-13 at 19:37 +0100, Dario Faggioli wrote:
> On gio, 2014-03-13 at 17:32 +0000, Ian Campbell wrote:
> > On Thu, 2014-03-13 at 16:47 +0000, Julien Grall wrote:
> 
> > > > Sure, but I don't think I see any conflict between this and the approach
> > > > Ian proposed above. What am I missing?
> > > 
> > > I believe Ian was assuming that the user knows the layout because this
> > > solution will be used in a very specific case (I think mostly when the
> > > device tree won't describe the hardware).
> > 
> > Right, my assumption was that the kernel was also compiled for the exact
> > hardware layout as part of some sort of embedded/appliance situation...
> > 
> Exactly, that may very well be the case. It may not, in Arianna's case,

Does Arianna's OS support device tree then? I had thought not. If it
doesn't support device tree, then it is necessarily tied to a specific
version of Xen, since in the absence of DT it must hardcode some
addresses.

> but it may well be true for others, or even for her, in the future.
> 
> Therefore, I keep failing to see why we should prevent this from being
> the case.

Prevent what from being the case?

> > > I'm wondering if we can let the kernel make the hypercall itself.
> > > It knows the memory layout of the VM.
> > 
> > This would be somewhat analogous to what happens with an x86 PV guest.
> > It would have to be a physmap call or something since this domctl
> > wouldn't be accessible by the guest.
> > 
> > That makes a lot of sense actually since this domctl seems to have been
> > intended for use by the x86 HVM device model (qemu).
> >
> I thought about that too. The reason this approach was taken is this
> xen-devel discussion:
> http://lists.xen.org/archives/html/xen-devel/2013-06/msg00870.html
> 
> in particular, this message from Julien:
> http://lists.xen.org/archives/html/xen-devel/2013-06/msg00902.html
> 
> Also, Eric and Viktor were asking for/working on something similar, so
> perhaps there would be some value in having this...
> 
> Eric, Viktor, can you comment on why you need this call and how you
> use it, or want to use it?
> Would it be the same for you to have it in the form of a physmap call,
> and invoke it from within the guest kernel?
> 
> In Arianna's case, I think it would be more than fine to implement it
> that way and call it from within the OS. Isn't that the case, Arianna?

It's certainly an option, and it would make a lot of the toolstack-side
issues moot, but I'm not at all sure it is the right answer. In
particular, although it might be easy to bodge a mapping into many OSes,
I can imagine that getting such a thing into something generic like
Linux might be more tricky -- in which case perhaps the toolstack should
be taking care of it, and that does have a certain appeal from the
simplicity of the guest interface side of things.
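
(For concreteness, a guest-side mapping of the sort Julien suggests
might look roughly like the sketch below. It assumes a physmap-style
interface with a map space for device memory; XENMAPSPACE_dev_mmio here
is purely illustrative and not part of the current ABI, and defining
such an interface is exactly the open design question.)

  /* Illustrative sketch only: map one page of device MMIO into the
   * guest physmap from inside the guest kernel. */
  #include <xen/xen.h>
  #include <xen/memory.h>

  static int map_device_page(xen_pfn_t mfn, xen_pfn_t gpfn)
  {
      struct xen_add_to_physmap xatp = {
          .domid = DOMID_SELF,
          .space = XENMAPSPACE_dev_mmio, /* hypothetical, see above */
          .idx   = mfn,  /* machine frame of the device registers */
          .gpfn  = gpfn, /* guest frame chosen by the kernel itself */
      };

      return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
  }

The point in its favour is visible in the sketch: the guest picks gpfn
itself, so neither the hypervisor nor the toolstack needs to know the
guest's memory layout.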

> One thing I don't see right now is, in the in-kernel case, what we
> should do when finding the "iomem=[]" option in a config file.

Even for an x86 HVM guest with iomem there is no call to
xc_domain_memory_mapping (even from qemu); it is called only for PCI
passthrough. I've no idea if/how iomem=[] works for x86 HVM guests.
Based on a cursory glance it looks to me like it wouldn't, and if it did
work it would have the same problems wrt where to map it as we have
today with the ARM guests, except that perhaps on a PC the sorts of
things you would pass through with this can be done 1:1 because they are
legacy PC peripherals etc.
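
For reference, the documented x86 PV syntax for iomem takes hexadecimal
page frame numbers, e.g. (values here are made up for illustration):

  iomem = [ "fff00,2" ]

which grants the guest access to two pages of MMIO starting at machine
address 0xfff00000, while saying nothing about where they should appear
in the guest. On the toolstack side, the call in question, as used today
for x86 PCI passthrough via the libxc wrapper around
XEN_DOMCTL_memory_mapping, would be roughly:

  /* Sketch only: establish a guest-physical -> machine mapping for an
   * MMIO range.  Choosing gfn is the open question; passing gfn == mfn
   * is precisely the 1:1 mapping discussed below. */
  #include <xenctrl.h>

  static int map_iomem_range(xc_interface *xch, uint32_t domid,
                             unsigned long gfn, unsigned long mfn,
                             unsigned long nr_pages)
  {
      return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr_pages,
                                      DPCI_ADD_MAPPING);
  }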

I think we just don't know what the right answer is yet, and I'm
unlikely to be able to say directly what it is. I'm hoping that people
who want to use this low-level functionality can provide a consistent
story for how it should work (a "design" if you like) to which I can say
"yes, that seems sensible" or "hmm, that seems odd because of X". At the
moment X is "the 1:1 mapping seems undesirable to me". There have been
some suggestions for how to fix that; someone with a horse in the race
should have a think about it and provide an iteration on the design
until we are happy with it.

> Also, just trying to recap for Arianna's sake: moving the
> implementation of the DOMCTL into common code (and implementing the
> missing bits to make it work properly, of course) is still something we
> want, right?

*If* we go the route of having the kernel make the mapping then there is
no need, is there?

Ian.

