Re: [Xen-devel] Why does xc_map_foreign_range() refuse to map pfns below 1M from a domU
On Wed, 4 Dec 2013 18:16:11 +0100 Tomasz Wroblewski wrote:
> > [...]
> >
> > Not compile tested:
> >
> > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > index ce563be..98efb65 100644
> > --- a/arch/x86/xen/mmu.c
> > +++ b/arch/x86/xen/mmu.c
> > @@ -409,7 +409,8 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
> > if (mfn & IDENTITY_FRAME_BIT) {
> > mfn &= ~IDENTITY_FRAME_BIT;
> > flags |= _PAGE_IOMAP;
> > - }
> > + } else
> > + flags &= _PAGE_IOMAP;
> > }
> > val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
> > }
> > @@ -441,7 +442,7 @@ static pteval_t xen_pte_val(pte_t pte)
> > pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
> > }
> > #endif
> > - if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
> > + if (pteval & _PAGE_IOMAP) /* Set by xen_make_pte for 1:1 PFNs. */
> > return pteval;
> >
> > return pte_mfn_to_pfn(pteval);
> > @@ -498,17 +499,14 @@ static pte_t xen_make_pte(pteval_t pte)
> > #endif
> > /*
> > * Unprivileged domains are allowed to do IOMAPpings for
> > - * PCI passthrough, but not map ISA space. The ISA
> > - * mappings are just dummy local mappings to keep other
> > - * parts of the kernel happy.
> > + * PCI passthrough. _PAGE_SPECIAL is set when user-space uses
> > + * IOCTL_PRIVCMD_MMAP and gives us the MFNs. The _PAGE_IOMAP
> > + * is supplied to us by xen_set_fixmap.
> > */
> > - if (unlikely(pte & _PAGE_IOMAP) &&
> > - (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
> > + if (unlikely(pte & _PAGE_SPECIAL | _PAGE_IOMAP))
> > pte = iomap_pte(pte);
>
> I think this won't work because _PAGE_SPECIAL is not yet set at this
> point (inside xen_make_pte). It is only set after xen_make_pte. This is
> why my patch contained this extra, rather nasty, hunk, which sets
> _PAGE_SPECIAL a bit earlier:
>
> +static inline pte_t foreign_special_pfn_pte(unsigned long page_nr,
> + pgprot_t pgprot)
> +{
> + return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
> + massage_pgprot(pgprot) | _PAGE_SPECIAL);
> +}
> +
> +
> static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
> unsigned long addr, void *data)
> {
> struct remap_data *rmd = data;
> - pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
> + pte_t pte = foreign_special_pfn_pte(rmd->mfn++, rmd->prot);
>
> rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
> rmd->mmu_update->val = pte_val_ma(pte);
>
>
> I've basically made a new function, foreign_special_pfn_pte, which is an
> unrolled pte_mkspecial with the small difference that it sets the
> _PAGE_SPECIAL bit before calling __pte, not after (because __pte calls
> into xen_make_pte). Maybe the cleanest way of fixing this would be just
> to have a separate path for this which doesn't use xen_make_pte at all?
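For context, here is a minimal sketch of the evaluation order Tomasz is
describing, assuming the stock x86 definitions of pfn_pte(), __pte() and
pte_mkspecial() of that era; the helper name below is made up for
illustration and this is not a proposed patch:

    static pte_t remap_pte_order_sketch(unsigned long mfn, pgprot_t prot)
    {
            /* pte_mkspecial(pfn_pte(mfn, prot)) evaluates inside-out. */

            /* 1. pfn_pte() builds the raw pte value... */
            pteval_t val = ((phys_addr_t)mfn << PAGE_SHIFT) |
                           massage_pgprot(prot);

            /* 2. ...and hands it straight to __pte(), i.e. the paravirt
             *    make_pte hook, which on Xen is xen_make_pte(). At this
             *    point _PAGE_SPECIAL is still clear, so xen_make_pte()
             *    cannot key off it. */
            pte_t pte = __pte(val);

            /* 3. Only now does pte_mkspecial() OR _PAGE_SPECIAL into the
             *    pte that xen_make_pte() has already produced. */
            return pte_mkspecial(pte);
    }

Tomasz's foreign_special_pfn_pte simply folds _PAGE_SPECIAL into the raw
value before step 2, so xen_make_pte() can see the flag.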
This thread has stalled for some time now. Can this last patch from
Tomasz be considered for inclusion and maybe -stable, even if it's not
the _cleanest_ fix?
Thanks,
--
Mihai Donțu
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel