Re: [Xen-devel] Design doc of adding ACPI support for arm64 on Xen - version 4
On 2015/8/20 21:46, Roger Pau Monné wrote:
> On 20/08/15 at 14:29, Shannon Zhao wrote:
>> On 2015/8/20 19:28, Roger Pau Monné wrote:
>>> On 20/08/15 at 13:22, Shannon Zhao wrote:
>>>> Hi Roger,
>>>>
>>>> On 2015/8/20 16:20, Roger Pau Monné wrote:
>>>>> On 20/08/15 at 5:07, Shannon Zhao wrote:
>>>>>> On 2015/8/19 23:02, Roger Pau Monné wrote:
>>>>>>> On 19/08/15 at 14:13, Shannon Zhao wrote:
>>>>>>>> XENMAPSPACE "XENMAPSPACE_dev_mmio". The usage of this hypercall's
>>>>>>>> parameters:
>>>>>>>> - domid: DOMID_SELF.
>>>>>>>> - space: XENMAPSPACE_dev_mmio.
>>>>>>>> - gpfns: guest physical addresses where the mapping should appear.
>>>>>>>
>>>>>>> This is not complete, you have forgotten to add the idxs field.
>>>>>>
>>>>>> Sorry, I didn't use the idx for the MMIO region mapping. What's the
>>>>>> idx useful for here?
>>>>>
>>>>> I've already posted this in the previous version, and you agreed on the
>>>>> interface and the usage of the fields, please see:
>>>>>
>>>>> http://marc.info/?l=xen-devel&m=143986236212359
>>>>>
>>>>> The idxs field is explicitly mentioned there with its usage.
>>>>
>>>> Yeah, I said I would add the description of the hypercall parameters.
>>>> It seems that we are talking about a different parameter.
>>>> To map the MMIO region, I reuse the struct xen_add_to_physmap and there
>>>
>>> You should also take into account xen_add_to_physmap_batch (or are you
>>> planning to issue a hypercall for every single MMIO page that you want
>>> to map?), but anyway the idx(s) field is there in both structs.
>>
>> Yeah, the current approach is to issue a hypercall for every single MMIO
>> page. But if we want to batch-map MMIO pages, I think it needs the size
>> parameter, and what is idxs useful for? As we map the MMIO pages 1:1, it
>> seems unnecessary to check "idxs[i] == gpfns[i]", right?
>
> This is what I've been trying to say: why do we need to enforce 1:1
> mappings in such a way? Is there some kind of technical limitation in ARM
> second stage translation that prevents doing non 1:1 mappings for MMIO
> regions?
>
> If for the initial design you need to enforce 1:1 for some reason (which
> I'm interested in knowing), why don't you just check idxs[i] ==
> gpfns[i]? This way we can always add support for non 1:1 mappings later
> if needed.
>
> And yes, the "size" parameter in xen_add_to_physmap_batch indicates the
> number of 4KB pages present in both the idxs and the gpfns arrays.
>
>>>> is idx, not idxs. Every time Dom0 maps one page, it's mapped 1:1 (the
>>>> guest physical address is the same as the real physical hardware
>>>> address), so it only needs to tell the hypervisor the gpfn.
>>>
>>> IMHO, I'm not sure why we should restrict this to 1:1 (although I admit
>>> this is going to be the common case). Didn't we agree that we are going
>>> to allow non 1:1 mapping of MMIO regions?
>>>
>>> If you want you can check in the hypercall handler that idxs[i] ==
>>> gpfns[i], and return -EOPNOTSUPP if they don't match, but I still don't
>>> see why this should be restricted to 1:1 mappings.
>>
>> Dom0 gets the device MMIO information from the DT or the ACPI DSDT
>> table. For ACPI, we don't (or can't) modify anything in the DSDT table,
>> so the MMIO regions Dom0 gets are actually the real physical hardware
>> MMIO regions, with the same start addresses and sizes.
>
> I understand that 1:1 mappings are always going to be used with the
> current approach in Linux, but I see no reason to enforce this inside of
> Xen. It's not going to add more complexity to the hypercall handler, and
> is something that we might want to use in the future.
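
A minimal sketch of what the batch call could look like from the caller's
side, assuming the struct xen_add_to_physmap_batch layout from Xen's public
memory.h and the XENMAPSPACE_dev_mmio space proposed in the design doc; the
helper name, the fixed frame count and the hypercall wrapper are illustrative
only, not part of the actual series:

#include <xen/xen.h>     /* DOMID_SELF, xen_pfn_t, set_xen_guest_handle */
#include <xen/memory.h>  /* XENMEM_add_to_physmap_batch */

#define NR_FRAMES 16     /* arbitrary example size */

/*
 * Sketch only: map NR_FRAMES contiguous 4KB MMIO frames starting at
 * mfn_start into the calling domain's physmap 1:1 (gpfn == mfn) with a
 * single batch hypercall.  HYPERVISOR_memory_op() stands for whatever
 * hypercall wrapper the guest OS provides.
 */
static int map_dev_mmio_1to1(xen_ulong_t mfn_start)
{
    static xen_ulong_t idxs[NR_FRAMES];
    static xen_pfn_t gpfns[NR_FRAMES];
    static int errs[NR_FRAMES];
    unsigned int i;
    struct xen_add_to_physmap_batch xatpb = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_dev_mmio,  /* proposed new map space */
        .size  = NR_FRAMES,
    };

    for ( i = 0; i < NR_FRAMES; i++ )
    {
        idxs[i]  = mfn_start + i;  /* machine frame of the device page */
        gpfns[i] = mfn_start + i;  /* 1:1: guest frame == machine frame */
    }

    set_xen_guest_handle(xatpb.idxs, idxs);
    set_xen_guest_handle(xatpb.gpfns, gpfns);
    set_xen_guest_handle(xatpb.errs, errs);

    /* On return, errs[] holds a per-frame error code. */
    return HYPERVISOR_memory_op(XENMEM_add_to_physmap_batch, &xatpb);
}

For a non 1:1 mapping, the caller would simply fill gpfns[] with whatever
guest frame numbers it wants instead of reusing the machine frame numbers.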
OK, I will change this so it can handle non 1:1 mappings, and let the
guest decide what kind of mapping it wants to use.

--
Shannon
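
To illustrate the outcome agreed above, a rough sketch of how the hypervisor
side of XENMAPSPACE_dev_mmio could handle the batch without caring whether
the mapping is 1:1; map_dev_mmio_page() is a made-up stand-in for a stage-2
p2m helper, not code from the actual patch series:

/*
 * Sketch only, assuming Xen-internal types (struct domain) and a
 * hypothetical helper map_dev_mmio_page(d, gfn, mfn) that inserts one
 * stage-2 mapping with device memory attributes.
 */
static int dev_mmio_add_to_physmap(struct domain *d,
                                   const xen_ulong_t *idxs,  /* mfns */
                                   const xen_pfn_t *gpfns,   /* gfns */
                                   unsigned int size)
{
    unsigned int i;
    int rc;

    for ( i = 0; i < size; i++ )
    {
        /*
         * If 1:1 had to be enforced for an initial version, the check
         * discussed above would go here:
         *     if ( idxs[i] != gpfns[i] )
         *         return -EOPNOTSUPP;
         * The mapping itself works the same either way.
         */
        rc = map_dev_mmio_page(d, gpfns[i], idxs[i]);
        if ( rc )
            return rc;
    }

    return 0;
}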