
Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 18 Jul 2019 13:12:23 +0000
  • Accept-language: en-US
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 18 Jul 2019 13:13:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released

On 18.07.2019 14:55, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>        p2m_type_t smfn_type, cmfn_type;
>>>        struct two_gfns tg;
>>>        struct rmap_iterator ri;
>>> +    unsigned long put_count = 0;
>>>
>>>        get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>>>                     cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>            goto err_out;
>>>        }
>>>
>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
>>> -    if ( !get_page(cpage, dom_cow) )
>>> -    {
>>> -        ret = -EOVERFLOW;
>>> -        mem_sharing_page_unlock(secondpg);
>>> -        mem_sharing_page_unlock(firstpg);
>>> -        goto err_out;
>>> -    }
>>> -
>>>        /* Merge the lists together */
>>>        rmap_seed_iterator(cpage, &ri);
>>>        while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>             * Don't change the type of rmap for the client page. */
>>>            rmap_del(gfn, cpage, 0);
>>>            rmap_add(gfn, spage);
>>> -        put_page_and_type(cpage);
>>> +        put_count++;
>>>            d = get_domain_by_id(gfn->domain);
>>>            BUG_ON(!d);
>>>            BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>>>            put_domain(d);
>>>        }
>>>        ASSERT(list_empty(&cpage->sharing->gfns));
>>> +    BUG_ON(!put_count);
>>>
>>>        /* Clear the rest of the shared state */
>>>        page_sharing_dispose(cpage);
>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>
>>>        /* Free the client page */
>>>        put_page_alloc_ref(cpage);
>>> -    put_page(cpage);
>>> +
>>> +    while ( put_count-- )
>>> +        put_page_and_type(cpage);
>>>
>>>        /* We managed to free a domain page. */
>>>        atomic_dec(&nr_shared_mfns);
>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>        {
>>>            if ( !last_gfn )
>>>                mem_sharing_gfn_destroy(page, d, gfn_info);
>>> -        put_page_and_type(page);
>>> +
>>>            mem_sharing_page_unlock(page);
>>> +
>>>            if ( last_gfn )
>>> -        {
>>> -            if ( !get_page(page, dom_cow) )
>>> -            {
>>> -                put_gfn(d, gfn);
>>> -                domain_crash(d);
>>> -                return -EOVERFLOW;
>>> -            }
>>>                put_page_alloc_ref(page);
>>> -            put_page(page);
>>> -        }
>>> +
>>> +        put_page_and_type(page);
>>>            put_gfn(d, gfn);
>>>
>>>            return 0;
>>
>> ... this (main, as I guess by the title) part of the change? I think
>> you want to explain what was wrong here and/or why the new arrangement
>> is better. (I'm sorry, I guess it was this way on prior versions
>> already, but apparently I didn't notice.)
> 
> It's what the patch message says - calling put_page_and_type() before
> mem_sharing_page_unlock() can cause a deadlock. Since we are now
> holding a reference to the page till the end, there is no need for the
> extra get_page/put_page logic when we are dealing with the last_gfn.

The title says "reorder" without any "why".

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
