
Re: [Xen-devel] [PATCH v4 7/9] xen/arm: Implement virtual-linear page table for guest p2m mapping in live migration



> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Sent: Thursday, October 17, 2013 7:06 PM
> To: Jaeyong Yoo
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v4 7/9] xen/arm: Implement virtual-linear
> page table for guest p2m mapping in live migration
> 
> On Thu, 2013-10-17 at 17:58 +0900, Jaeyong Yoo wrote:
> > > > @@ -1382,6 +1382,84 @@ int is_iomem_page(unsigned long mfn)
> > > >          return 1;
> > > >      return 0;
> > > >  }
> > > > +
> > > > +/* flush the vlpt area */
> > > > +static void flush_vlpt(struct domain *d)
> > > > +{
> > > > +    int flush_size;
> > > > +    flush_size = (d->arch.dirty.second_lvl_end -
> > > > +                  d->arch.dirty.second_lvl_start) << SECOND_SHIFT;
> > > > +    /* flushing the 3rd level mapping */
> > > > +    flush_xen_data_tlb_range_va(VIRT_LIN_P2M_START,
> > > > +                                flush_size);
> > >
> > > Shouldn't the base here be VIRT_LIN_P2M_START +
> > > (d->arch.dirty.second_lvl_start << SECOND_SHIFT) or something like
> > > that?
> >
> > Yes, right. It will also decrease the flush overhead.
> >
> > >
> > > flush_xen_data_tlb_range_va just turns into a loop over the
> > > addresses, so you might find you may as well do the flushes as you
> > > update the ptes in the below, perhaps with an optimisation to pull
> > > the barriers outside that loop.
> >
> > You mean the barriers in write_pte? OK. It looks better.
> 
> I meant the ones in flush_xen_data_tlb_range_va, but perhaps the ones in
> write_pte too. I suppose in both cases a __foo variant without the
> barriers could be made so we can do
>       dsb
>       for each page
>               __write_pte
>               __flush_xen_data....
>       dsb
>       isb
> (or whatever the right barriers are!)
> 

Oh, you meant putting both the write_pte and the flush into one loop.
I got it! There's no reason to have two loops.
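
To make that concrete, here is a rough sketch of the merged loop I have in
mind. __write_pte() and __flush_xen_data_tlb_range_va() below stand for
hypothetical barrier-less variants of the existing helpers, and the base uses
the corrected VIRT_LIN_P2M_START + (second_lvl_start << SECOND_SHIFT) from
your earlier comment; the names and exact barriers are illustrative only:

    static void update_and_flush_vlpt(struct domain *d, lpae_t *xen_second,
                                      const lpae_t *new_ptes, int nr)
    {
        vaddr_t va = VIRT_LIN_P2M_START +
            ((vaddr_t)d->arch.dirty.second_lvl_start << SECOND_SHIFT);
        int i;

        dsb();          /* order any prior page table writes, once */
        for ( i = 0; i < nr; ++i )
        {
            /* barrier-less PTE write (hypothetical helper) */
            __write_pte(&xen_second[d->arch.dirty.second_lvl_start + i],
                        new_ptes[i]);
            /* barrier-less flush of the 2MB of VLPT this entry maps
             * (hypothetical helper) */
            __flush_xen_data_tlb_range_va(va + ((vaddr_t)i << SECOND_SHIFT),
                                          1UL << SECOND_SHIFT);
        }
        dsb();          /* complete the PTE writes and TLB maintenance */
        isb();
    }

That way the range is walked once and the barriers are paid once per call
rather than once per page.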

> > > > +/* setting up the xen page table for vlpt mapping for domain d */
> > > > +void prepare_vlpt(struct domain *d)
> > > > +{
> > > > +    int xen_second_linear_base;
> > > > +    int gp2m_start_index, gp2m_end_index;
> > > > +    struct p2m_domain *p2m = &d->arch.p2m;
> > > > +    struct page_info *second_lvl_page;
> > > > +    vaddr_t gma_start = 0;
> > > > +    vaddr_t gma_end = 0;
> > > > +    lpae_t *first;
> > > > +    int i, j;
> > > > +
> > > > +    xen_second_linear_base =
> > > > +        second_linear_offset(VIRT_LIN_P2M_START);
> > > > +    get_gma_start_end(d, &gma_start, &gma_end);
> > > > +
> > > > +    gp2m_start_index = gma_start >> FIRST_SHIFT;
> > > > +    gp2m_end_index = (gma_end >> FIRST_SHIFT) + 1;
> > > > +
> > > > +    second_lvl_page = alloc_domheap_page(NULL, 0);
> > >
> > > The p2m first is two concatenated pages with a total of 1024 entries
> > > (which is needed to give the full 40 bit IPA space). I have a
> > > feeling this means you need two pages here?
> > >
> > > Or maybe we shouldn't be supporting the full 40-bit address space on
> > > 32-bit?
> >
> > For generality, I think it is better to support it. But, honestly, I'm
> > not sure how many people would use a 40-bit ARM guest.
> 
> All it takes is one peripheral at a high address, even if you aren't using
> a large amount of RAM etc.
> 
> I've got a 64-bit system on my desk which has stuff up in the 50 bit range.
> It seems less likely to happen on 32-bit though...

I think I'd better go for generality.
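
Concretely, I'm thinking of something like this in prepare_vlpt() (sketch
only; the second_lvl[] array in the dirty-tracking state is illustrative and
not what the current patch has):

    /* The guest p2m first level is two concatenated pages (1024 entries,
     * covering the full 40-bit IPA space), so back the VLPT with two
     * second-level pages instead of one.  Sketch only. */
    second_lvl_page = alloc_domheap_pages(NULL, 1, 0);  /* order 1: 2 pages */
    BUG_ON(second_lvl_page == NULL);  /* real code wants a proper error path */

    for ( i = 0; i < 2; ++i )
    {
        d->arch.dirty.second_lvl[i] =
            map_domain_page(page_to_mfn(second_lvl_page) + i);
        clear_page(d->arch.dirty.second_lvl[i]);
    }

The rest of the setup would then pick the right second-level page based on
the first-level index it is filling in.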

Jaeyong

