
Re: [Xen-devel] [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount safe to assign



> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: 06 February 2020 10:04
> To: Durrant, Paul <pdurrant@xxxxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxxxxx>;
> Ian Jackson <ian.jackson@xxxxxxxxxxxxx>; Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei
> Liu <wl@xxxxxxx>; Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>; Roger
> Pau Monné <roger.pau@xxxxxxxxxx>
> Subject: Re: [PATCH v9 3/4] mm: make pages allocated with MEMF_no_refcount
> safe to assign
> 
> Hi,
> 
> I am sorry to jump in so late in the conversation.
> 
> On 03/02/2020 10:56, Paul Durrant wrote:
> >
> > -        if ( unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> > +    if ( !(memflags & MEMF_no_refcount) &&
> > +         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
> >               get_knownalive_domain(d);
> > -    }
> >
> >       for ( i = 0; i < (1 << order); i++ )
> >       {
> >           ASSERT(page_get_owner(&pg[i]) == NULL);
> > -        ASSERT(!pg[i].count_info);
> >           page_set_owner(&pg[i], d);
> >           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
> > -        pg[i].count_info = PGC_allocated | 1;
> > +        pg[i].count_info =
> > +            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> 
> This is technically incorrect because we blindly assume the state of the
> page is inuse (which is thankfully equal to 0).

Assuming the page is inuse seems reasonable at this point.
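
(Purely for illustration, not something this patch attempts: if we did want
to drop that assumption, the assignment could preserve the page-state bits
as well as PGC_extra, along the lines of the untested sketch below.
PGC_state is the x86 mask; Arm has no page-state bits here, so this is
x86-flavoured only.)

    /*
     * Untested sketch: preserve the page-state bits (e.g. a pending
     * offline) instead of implicitly forcing PGC_state_inuse (== 0).
     */
    pg[i].count_info = (pg[i].count_info & (PGC_extra | PGC_state)) |
                       PGC_allocated | 1;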

> 
> See the discussion [1]. This is already an existing bug in the code base
> and I will be taking care of it.

Fair enough; it's a very long-standing bug.

> However...
> 
> >           page_list_add_tail(&pg[i], &d->page_list);
> >       }
> >
> > @@ -2315,11 +2338,6 @@ struct page_info *alloc_domheap_pages(
> >
> >       if ( memflags & MEMF_no_owner )
> >           memflags |= MEMF_no_refcount;
> > -    else if ( (memflags & MEMF_no_refcount) && d )
> > -    {
> > -        ASSERT(!(memflags & MEMF_no_refcount));
> > -        return NULL;
> > -    }
> >
> >       if ( !dma_bitsize )
> >           memflags &= ~MEMF_no_dma;
> > @@ -2332,11 +2350,23 @@ struct page_info *alloc_domheap_pages(
> >                                     memflags, d)) == NULL)) )
> >            return NULL;
> >
> > -    if ( d && !(memflags & MEMF_no_owner) &&
> > -         assign_pages(d, pg, order, memflags) )
> > +    if ( d && !(memflags & MEMF_no_owner) )
> >       {
> > -        free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> > -        return NULL;
> > +        if ( memflags & MEMF_no_refcount )
> > +        {
> > +            unsigned long i;
> > +
> > +            for ( i = 0; i < (1ul << order); i++ )
> > +            {
> > +                ASSERT(!pg[i].count_info);
> > +                pg[i].count_info = PGC_extra;
> 
> ... this perpetuates the wrongness of the code above and is not safe
> against offlining.
> 
> We could argue this is an already existing bug; however, I am a bit
> uneasy about adding more abuse to the code. Jan, what do you think?
> 

I'd consider this a straightforward patch clash. If this patch goes in after
yours then it will need to be modified accordingly, or vice versa.
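
(Again for illustration only, and modulo whatever shape Julien's fix takes:
the PGC_extra marking could avoid clobbering any state bits already set by
OR-ing the flag in rather than overwriting count_info, e.g. the untested
sketch below, again relying on the x86 PGC_state mask.)

    for ( i = 0; i < (1ul << order); i++ )
    {
        /*
         * Untested sketch: nothing but the page-state bits should be set
         * here, so OR in PGC_extra rather than overwriting count_info and
         * losing e.g. an offlining state.
         */
        ASSERT(!(pg[i].count_info & ~PGC_state));
        pg[i].count_info |= PGC_extra;
    }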

  Paul

> > +            }
> > +        }
> > +        if ( assign_pages(d, pg, order, memflags) )
> > +        {
> > +            free_heap_pages(pg, order, memflags & MEMF_no_scrub);
> > +            return NULL;
> > +        }
> >       }
> 
> Cheers,
> 
> [1] https://lore.kernel.org/xen-devel/20200204133357.32101-1-julien@xxxxxxx/
> 
> --
> Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

