
Re: [Xen-devel] [RFC PATCH v2 3/3] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On Fri, 2014-03-14 at 09:46 +0000, Ian Campbell wrote:
> On Thu, 2014-03-13 at 19:37 +0100, Dario Faggioli wrote:

> > Exactly, that may very well be the case. It may not, in Arianna's case,
> 
> Does Arianna's OS support device tree then? I had thought not. 
>
It does not. The "it's not the case" referred to the hardcoded
addresses of the I/O registers: for now, she is relying on the 1:1
mapping. But, as I said, that can change, and that's why I was arguing
in favor of your solution of being able to specify a PFN too.

> If it
> doesn't support device tree then it is necessarily tied to a specific
> version of Xen, since in the absence of DT it must hardcode some
> addresses.
> 
That's fine, at least for now, for that OS, as well as, I'm quite sure,
for many embedded OSes, should they want to run on Xen.

> > but it may well be true for others, or even for her, in the future.
> > 
> > Therefore, I keep failing to see why we should prevent this from being
> > the case.
> 
> Prevent what to be the case?
> 
Julien was saying he does not want the extended "iomem=[MFN,NR@PFN]"
syntax. In particular, he does not want the "@PFN" part, which I happen
to like. :-)
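Just to be sure we are all looking at the same thing, here is the difference
between the current syntax and the extended one, in xl config file terms (the
numbers are made up, just for the example; the "@PFN" form is what the patch
would add, it is not something xl accepts today):

  # today: allow the guest access to 2 pages of MMIO starting at MFN 0x3fe00
  iomem = [ "3fe00,2" ]

  # with the "@PFN" extension: also say where in the guest physical address
  # space those pages should show up (here, at guest PFN 0x80000)
  iomem = [ "3fe00,2@80000" ]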

> > In Arianna's case, I think it would be more than fine to implement it
> > that way, and call it from within the OS, isn't this the case, Arianna?
> 
> It's certainly an option, and it would make a lot of the toolstack side
> issues moot but I'm not at all sure it is the right answer. In
> particular although it might be easy to bodge a mapping into many OSes I
> can imagine getting such a thing into something generic like Linux might
> be more tricky -- in which case perhaps the toolstack should be taking
> care of it, and that does have a certain appeal from the simplicity of
> the guest interface side of things.
> 
If the toolstack needs to support iomem, yes.

A way to see it could be that, as of now, we already have embedded OSes
asking for it while, at the same time, it's unlikely that more generic
systems such as Linux would want something similar, if nothing else
because they'll have full DT/ACPI support for this.

The fact that, if what you say below is true, "iomem" does not work at
all and no one from the Linux world has complained so far, seems to me
to confirm this.

> > One thing I don't see right now is, in the in-kernel case, what we
> > should do when finding the "iomem=[]" option in a config file.
> 
> Even for an x86 HVM guest with iomem there is no call to
> xc_domain_memory_mapping (even from qemu); it is called only for PCI
> passthrough. I've no idea if/how iomem=[] works for x86 HVM guests.
> Based on a cursory glance it looks to me like it wouldn't and if it did
> work it would have the same problems wrt where to map it as we have
> today with the ARM guests, except perhaps on a PC the sorts of things
> you would pass through with this can be done 1:1 because they are legacy
> PC peripherals etc.
> 
Aha, interesting! As I said to Julien, I tried to look at how these
iomem regions get to xc_domain_memory_mapping in QEMU, and I too found
nothing (except for the PCI passthrough stuff)... I was assuming I was
missing something... apparently, I wasn't. :-)

I can try (even just out of curiosity!) to dig into the tree and see
what happened to this feature...

BTW, doesn't this make this discussion even more relevant? I mean, if
it's there but it's not working, shouldn't we make it work (for some
definition of "work"), if we decide it's useful, or kill it, if not?

> I think we just don't know what the right answer is yet and I'm unlikely
> to be able to say directly what the right answer is. I'm hoping that
> people who want to use this low level functionality can provide a
> consistent story for how it should work (a "design" if you like) to
> which I can say "yes, that seems sensible" or "hmm, that seems odd
> because of X". At the moment X is "the 1:1 mapping seems undesirable to
> me". There have been some suggestions for how to fix that, someone with
> a horse in the race should have a think about it and provide an
> iteration on the design until we are happy with it.
> 
Again, I'll look more into the history of this feature.

In the meantime, the man page entry says:

"Allow guest to access specific hardware I/O memory pages.
B<IOMEM_START> is a physical page number. B<NUM_PAGES> is the number of
pages beginning with B<START_PAGE> to allow access. Both values must be
given in hexadecimal.

It is recommended to use this option only for trusted VMs under
administrator control."

For one, the "Allow guest to access" there leaves a lot of room for
interpretation, I think. It doesn't say anything about mapping, so one
can read it as 'this only grants you mapping permission; it's up to you
to map'. However, it does say "access", so one can also read it as 'if I
can access it, it's mapped already'.

Wherever this discussion lands, I think that, if we keep this option,
we should make the documentation less ambiguous.

That being said, allow me to play, in the rest of this e-mail, the role
of someone who expects the mapping to actually be established just by
specifying "iomem=[]" in the config file, which is (correct me, guys, if
I'm wrong) what Eric, Viktor and Arianna understood when reading the
entry above.

So:
 1. it says nothing about where the mapping ends up in the guest's
    memory,
 2. it looks quite clear (to me, at least) that this is a raw/low-level
    feature, to be used under controlled conditions (which an embedded
    product often provides)

To me, a legitimate use case is this: I want to run version X of my
non-DT-capable OS on version Z of Xen, on release K of board B. In such
a configuration, the GPIO controller is at MFN 0x000abcd, and I want
only VM V to have direct access to it (board B does not have an IOMMU).
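In xl config terms, VM V would then have something like this (the page
count is made up, just for the example):

  # give VM V direct access to the GPIO controller at MFN 0xabcd, 1:1
  iomem = [ "abcd,1" ]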

I would also assume that one is in full control of the guest address
space, so it is OK to hardcode the addresses of the registers somewhere.
Of course, that may become an issue when putting Xen underneath, as
Xen's meddling with that address space can cause trouble.

Arianna, can you confirm that this is pretty much the case for Erika,
or correct whatever I got wrong?

I certainly don't claim to have the right answer but, in the described
scenario, either:
 1) the combination of "iomem=[MFN,NR@PFN]" (defaulting to 1:1 if
    "@PFN" is missing) and e820_host, or
 2) calling (the equivalent of) XEN_DOMCTL_memory_map from the guest
    kernel

would be a good solution, to the point that I think we could even support
both. The main point being that, I think, even in the worst case, any
erroneous usage of either would "just" destroy the guest, and that's
acceptable.

Actually, if going for 1), I think (when both pieces are there) things
should be pretty safe. Furthermore, implementing 1) seems to me the only
way of having the "iomem=[]" parameter cause both permission granting
and mapping. The downside is that the (libxl side of the) implementation
does indeed look cumbersome.
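Just to give an idea of what I have in mind for 1) on the libxl side,
here is a rough sketch (not the actual patch: the function name is mine,
error handling is elided, and the 'gfn' field of the iomem ranges is
assumed to be the new bit coming from "@PFN", defaulting to the MFN
itself, i.e., 1:1, when "@PFN" is missing):

  #include <stdint.h>
  #include <xenctrl.h>
  #include <libxl.h>

  /* For each iomem region: grant the domain access to the machine frames,
   * then map them at the requested guest frame (or 1:1). */
  static int map_iomem_regions(xc_interface *xch, uint32_t domid,
                               const libxl_domain_build_info *info)
  {
      int i;

      for (i = 0; i < info->num_iomem; i++) {
          uint64_t mfn = info->iomem[i].start;
          uint64_t nr  = info->iomem[i].number;
          uint64_t gfn = info->iomem[i].gfn;  /* assumed new field */

          if (xc_domain_iomem_permission(xch, domid, mfn, nr, 1))
              return -1;
          if (xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                       DPCI_ADD_MAPPING))
              return -1;
      }
      return 0;
  }

The real thing would of course have to live in the proper spot of the
domain creation path, and use the libxl error/logging machinery, which
is where I expect most of the cumbersomeness to come from.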

If going for _only_ 2), then "iomem=[]" would just be there to ensure
that the future mapping operation succeeds, i.e., for granting mapping
rights, as it does right now. It would be up to the guest kernel to make
sure the MFNs it is trying to map are consistent with what was specified
in "iomem=[]". Given the use case we're talking about, I don't think
this is an unreasonable request, as long as we make the iomem man page
entry state this more clearly.
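For reference, the "as it does right now" above boils down, as far as I
can tell, to only the permission part in libxl, i.e., something like
this (same headers and caveats as the sketch above; the function name is
again mine):

  /* What "iomem=[]" buys you today: access permission only, no mapping
   * is established on the guest's behalf. */
  static int grant_iomem_access(xc_interface *xch, uint32_t domid,
                                const libxl_domain_build_info *info)
  {
      int i, rc;

      for (i = 0; i < info->num_iomem; i++) {
          rc = xc_domain_iomem_permission(xch, domid,
                                          info->iomem[i].start,
                                          info->iomem[i].number, 1);
          if (rc < 0)
              return rc;
      }
      return 0;
  }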

Again, Arianna, can you confirm that both solutions are possible for you?

I certainly agree that the thread could benefit from some opinions from
people actually wanting to use this. In addition to Arianna, I have
pinged Eric and Viktor repeatedly... let's see if they have time to let
us all know something more about their own needs and requirements wrt
this.

> > Also, just trying to recap, for Arianna's sake, moving the
> > implementation of the DOMCTL in common code (and implementing the
> > missing bits to make it work properly, of course) is still something we
> > want, right?
> 
> *If* we go the route of having the kernel make the mapping then there is
> no need, is there?
> 
Yeah, well, me not being sure is the reason I asked... And Julien said
he thinks we still want it... :-P

As I was saying above, I think there is room for both, but I don't mind
picking just one. However, if we want to fix iomem=[] and go as far as
having it do the mapping, then I think we all agree we need the DOMCTL.

So, it looks like the discussion boils down to something like:
 - do we need the DOMCTL for purposes other than iomem=[] ?
 - if not, what do we want to do with iomem=[] ?

Sorry to have brought you all deep down into this can of worms! :-/

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

