Re: [PATCH v2 2/6] x86/mem-paging: correct p2m_mem_paging_prep()'s error handling
On 15.05.2020 16:40, Andrew Cooper wrote:
> On 23/04/2020 09:37, Jan Beulich wrote:
>> @@ -1816,9 +1816,19 @@ int p2m_mem_paging_prep(struct domain *d
>>          goto out;
>>      /* Get a free page */
>>      ret = -ENOMEM;
>> -    page = alloc_domheap_page(p2m->domain, 0);
>> +    page = alloc_domheap_page(d, 0);
>>      if ( unlikely(page == NULL) )
>>          goto out;
>> +    if ( unlikely(!get_page(page, d)) )
>> +    {
>> +        /*
>> +         * The domain can't possibly know about this page yet, so failure
>> +         * here is a clear indication of something fishy going on.
>> +         */
>
> This needs a gprintk(XENLOG_ERR, ...) of some form.  (which also reminds
> me that I *still* need to finish my patch forcing everyone to provide
> something to qualify the domain crash, so release builds of Xen get some
> hint as to what went on or why.)

First of all - I've committed the patch earlier today, with Roger's R-b,
and considering that it had been pending for quite some time. As to the
specific aspect:

>> +        domain_crash(d);

This already leaves a file/line combination as a (minimal) hint. I can
make a patch to add a gprintk() as you ask for, but I'm not sure it's
worth it for this almost dead code.

>> @@ -1843,13 +1852,24 @@ int p2m_mem_paging_prep(struct domain *d
>>      ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>>                          paging_mode_log_dirty(d) ? p2m_ram_logdirty
>>                                                   : p2m_ram_rw, a);
>> -    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>> +    if ( !ret )
>> +    {
>> +        set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>>
>> -    if ( !page_extant )
>> -        atomic_dec(&d->paged_pages);
>> +        if ( !page_extant )
>> +            atomic_dec(&d->paged_pages);
>> +    }
>>
>>  out:
>>      gfn_unlock(p2m, gfn, 0);
>> +
>> +    if ( page )
>> +    {
>> +        if ( ret )
>> +            put_page_alloc_ref(page);
>> +        put_page(page);
>
> This is a very long way from clear enough to follow, and buggy if anyone
> inserts a new goto out path.

What alternatives do you see?

Jan
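For readers trying to follow the reference-counting scheme debated above, here is a
minimal stand-alone C model of the pattern. It is a sketch, not the actual Xen code:
the extra reference taken with get_page() right after allocation lets every exit path
funnel through a single cleanup block, where the allocation reference is dropped only
when the mapping attempt failed and the extra reference is dropped unconditionally.
A plain fprintf() marks roughly where the gprintk(XENLOG_ERR, ...) requested above
would go. All types and helper names below (struct page, alloc_page(), get_ref(),
put_alloc_ref(), put_ref(), prep()) are invented stand-ins for illustration only.

/*
 * Stand-alone model of the cleanup pattern discussed above.  The types and
 * helpers are invented for illustration only: they mimic the shape of
 * alloc_domheap_page() / get_page() / put_page_alloc_ref() / put_page(),
 * but are not the real Xen interfaces.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct page { int refcount; };

static struct page *alloc_page(void)          /* ~ alloc_domheap_page() */
{
    struct page *pg = calloc(1, sizeof(*pg));

    if ( pg )
        pg->refcount = 1;                     /* allocation reference */
    return pg;
}

static int get_ref(struct page *pg)           /* ~ get_page() */
{
    pg->refcount++;
    return 1;
}

static void put_alloc_ref(struct page *pg)    /* ~ put_page_alloc_ref() */
{
    pg->refcount--;
}

static void put_ref(struct page *pg)          /* ~ put_page() */
{
    if ( --pg->refcount == 0 )
        free(pg);
}

static int prep(int fail_mapping)
{
    struct page *page = NULL;
    int ret = -ENOMEM;

    page = alloc_page();
    if ( page == NULL )
        goto out;

    /*
     * Take an extra reference right away, so the shared cleanup below can
     * always drop one, no matter which path reached it.
     */
    if ( !get_ref(page) )
    {
        /* Roughly where the suggested gprintk(XENLOG_ERR, ...) would go. */
        fprintf(stderr, "unexpected failure to obtain a page reference\n");
        ret = -EIO;
        goto out;
    }

    /* Stands in for the p2m_set_entry() part, which may fail. */
    ret = fail_mapping ? -EIO : 0;

 out:
    if ( page )
    {
        /*
         * On failure the page was never handed over, so the allocation
         * reference is dropped as well and put_ref() then frees the page.
         * On success the remaining reference models the one the domain
         * now owns.
         */
        if ( ret )
            put_alloc_ref(page);
        put_ref(page);
    }
    return ret;
}

int main(void)
{
    printf("success path -> %d\n", prep(0));
    printf("failure path -> %d\n", prep(1));
    return 0;
}

Even in this simplified shape the fragility is visible: the cleanup block assumes
that every path reaching "out:" with page != NULL already holds the extra reference,
so an error exit inserted between allocation and get_ref() would unbalance the
counts, which is presumably the kind of hazard Andrew's remark refers to.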