
Re: [Xen-devel] Question about xenpage_list


  • To: Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 28 Aug 2019 22:11:32 +0100
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Wed, 28 Aug 2019 21:11:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 28/08/2019 18:35, Tamas K Lengyel wrote:
> On Wed, Aug 28, 2019 at 11:16 AM Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 28/08/2019 18:07, Tamas K Lengyel wrote:
>>> On Wed, Aug 28, 2019 at 10:55 AM Andrew Cooper
>>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 28/08/2019 17:25, Tamas K Lengyel wrote:
>>>>> On Wed, Aug 28, 2019 at 9:54 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>> On 28.08.2019 17:51, Tamas K Lengyel wrote:
>>>>>>> On Wed, Aug 28, 2019 at 9:35 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>>> On 28.08.2019 17:28, Tamas K Lengyel wrote:
>>>>>>>>> Hi all,
>>>>>>>>> I'm trying to track down how a call in common/grant_table.c to
>>>>>>>>> share_xen_page_with_guest will actually populate that page into the
>>>>>>>>> guest's physmap.
>>>> share_xen_page_with_guest() is perhaps poorly named.  It makes the page
>>>> eligible to be inserted into the guest's p2m.
>>>>
>>>> It is internal accounting, so that the permission checks in a subsequent
>>>> add_to_physmap() call will pass.
>>>>
>>>> Perhaps it should be named "allow_guest_access_to_frame()" or similar.
>>>>
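[For readers following along: the accounting in question looks roughly like the sketch below. This is illustrative pseudocode modelled on the Xen tree, not the exact source; flag and field names such as PGT_writable_page and xenpage_list may differ between Xen releases.]

```c
/* Illustrative sketch of share_xen_page_with_guest(): pure book-keeping,
 * so that a later add_to_physmap() permission check passes.  Nothing is
 * inserted into the p2m here. */
void share_xen_page_with_guest(struct page_info *page, struct domain *d,
                               enum XENSHARE_flags flags)
{
    spin_lock(&d->page_alloc_lock);

    /* Record the access the guest will be allowed when it maps the frame. */
    page->u.inuse.type_info =
        (flags == SHARE_ro ? PGT_none : PGT_writable_page);
    page->u.inuse.type_info |= PGT_validated | 1;

    page_set_owner(page, d);      /* the guest now "owns" the frame */
    smp_wmb();
    page->count_info |= PGC_xen_heap | PGC_allocated | 1;

    /* Track the frame on the domain's list of shared xenheap pages. */
    page_list_add_tail(page, &d->xenpage_list);

    spin_unlock(&d->page_alloc_lock);
}
```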
>>>>>>>>>  Immediately after the call the page doesn't seem to
>>>>>>>>> be present in the physmap, as share_xen_page_with_guest will just add
>>>>>>>>> the page to the domain's xenpage_list linked-list:
>>>>>>>>>
>>>>>>>>>         unsigned long mfn;
>>>>>>>>>         unsigned long gfn;
>>>>>>>>>
>>>>>>>>>         share_xen_page_with_guest(virt_to_page(gt->shared_raw[i]), d, SHARE_rw);
>>>>>>>>>
>>>>>>>>>         mfn = virt_to_mfn(gt->shared_raw[i]);
>>>>>>>>>         gfn = mfn_to_gmfn(d, mfn);
>>>>>>>>>
>>>>>>>>>         gdprintk(XENLOG_INFO, "Sharing %lx -> %lx with domain %u\n",
>>>>>>>>> gfn, mfn, d->domain_id);
>>>>>>>>>
>>>>>>>>> This results in the following:
>>>>>>>>>
>>>>>>>>> (XEN) grant_table.c:1820:d0v0 Sharing ffffffffffffffff -> 42c71e with 
>>>>>>>>> domain 1
>>>>>>>>>
>>>>>>>>> AFAICT the page only gets populated into the physmap once the domain
>>>>>>>>> gets unpaused. If I let the domain run and then pause it I can see
>>>>>>>>> that the page is in the guest's physmap (by looping through all the
>>>>>>>>> entries in its EPT):
>>>>>>>>>
>>>>>>>>> (XEN) mem_sharing.c:1636:d0v0 0xf2000 -> 0x42c71e is a grant mapping
>>>>>>>>> shared with the guest
>>>>>>>> This should be the result of the domain having made a respective
>>>>>>>> XENMAPSPACE_grant_table request, shouldn't it?
>>>>>>>>
>>>>>>> Do you mean the guest itself or the toolstack?
>>>>>> The guest itself - how would the tool stack know where to put the
>>>>>> frame(s)?
>>>>> I don't think that makes sense. How would a guest itself know that it
>>>>> needs to map that frame? When you restore the VM from a savefile, it
>>>>> is already running and no firmware is going to run in it to initialize
>>>>> such GFNs.
>>>>>
>>>>> As for the toolstack, I see calls to xc_dom_gnttab_seed from the
>>>>> toolstack during domain creation (be it a new domain or one being
>>>>> restored from a save file) which does issue a XENMEM_add_to_physmap
>>>>> with the space being specified as XENMAPSPACE_grant_table. Looks like
>>>>> it gathers the GFN via xc_core_arch_get_scratch_gpfn. So it looks like
>>>>> that's how it's done.
>>>> On domain creation, the toolstack needs to write the store/console grant
>>>> entry.
>>>>
>>>> If XENMEM_acquire_resource is available and usable (needs newish Xen and
>>>> dom0 kernel), then that method is preferred.  This lets the toolstack
>>>> map the grant table frame directly, without inserting it into the guest's
>>>> p2m first.
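[The preferred path can be sketched as below, assuming Xen >= 4.11 with libxenforeignmemory's resource-mapping support; treat this as an illustrative sketch rather than a drop-in implementation.]

```c
/* Sketch: map frame 0 of a domain's grant table directly via
 * XENMEM_acquire_resource, never touching the guest's p2m. */
#include <sys/mman.h>
#include <xenforeignmemory.h>

static void *map_gnttab_frame(domid_t domid)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    void *addr = NULL;

    if ( !fmem )
        return NULL;

    /* One frame of the grant table resource, starting at frame 0. */
    if ( !xenforeignmemory_map_resource(fmem, domid,
                                        XENMEM_resource_grant_table,
                                        0 /* id */, 0 /* frame */,
                                        1 /* nr_frames */,
                                        &addr, PROT_READ | PROT_WRITE, 0) )
        return NULL;

    return addr;  /* toolstack can now write grant entries in place */
}
```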
>>>>
>>>> The fallback path is to pick a free pfn, insert it into the guest
>>>> physmap, foreign map it, write the entries, unmap and remove from the
>>>> guest physmap.  This has various poor properties like shattering
>>>> superpages for the guest, and a general inability to function correctly
>>>> once the guest has started executing and has a balloon driver in place.
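[The fallback path described above, in C-style pseudocode using libxenctrl-flavoured calls; exact wrapper names and the removal step vary across Xen releases, so this is a sketch of the sequence, not exact toolstack source.]

```c
/* Pseudocode: seed the grant table the old way.
 * 1. insert grant frame 0 at a scratch gfn, 2. foreign-map it,
 * 3. write entries, 4. unmap and remove the scratch gfn again. */
int seed_gnttab_fallback(xc_interface *xch, uint32_t domid,
                         xen_pfn_t scratch_gfn)
{
    grant_entry_v1_t *gnt;

    if ( xc_domain_add_to_physmap(xch, domid, XENMAPSPACE_grant_table,
                                  0 /* idx */, scratch_gfn) )
        return -1;

    gnt = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                               PROT_READ | PROT_WRITE, scratch_gfn);
    if ( !gnt )
        return -1;

    /* ... write the store/console grant entries here ... */

    munmap(gnt, XC_PAGE_SIZE);

    /* Finally remove the scratch gfn from the guest physmap again
     * (a XENMEM_remove_from_physmap operation). */
    return 0;
}
```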
>>>>
>>>> At a later point, once the guest starts executing, a grant-table aware
>>>> part of the kernel ought to map the grant table at the kernel's preferred
>>>> location and keep it there permanently.
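[The guest-kernel side of this looks roughly as follows, modelled on what Linux's drivers/xen/grant-table.c does on boot and resume; a sketch, with the wrapper name map_gnttab_frames being hypothetical.]

```c
/* Sketch: guest kernel asks Xen to place each grant table frame at a
 * gfn of the kernel's choosing, and keeps those mappings permanently. */
static int map_gnttab_frames(xen_pfn_t *gfns, unsigned int nr_frames)
{
    unsigned int i;

    for ( i = 0; i < nr_frames; i++ )
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_grant_table,
            .idx   = i,          /* which grant table frame */
            .gpfn  = gfns[i],    /* where the kernel wants it */
        };

        if ( HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp) )
            return -1;
    }

    return 0;
}
```

This is also why the mapping does not survive a logical restore behind the guest's back: only the kernel knows it must redo these calls on resume.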
>>>>
>>> OK, makes sense, but when the guest is being restored from a savefile,
>>> how does it know that it needs to do that mapping again? That frame is
>>> being re-created during restoration, so when the guest starts to
>>> execute again it would just have a hole where that page used to be.
>> This is where we get to the problems of Xen's "migration" not being
>> transparent.  Currently it is the requirement of the guest kernel to
>> remap the grant table on resume.
>>
>> This is a reasonable requirement for PV guests.  Because PV guest
>> kernels maintain their own P2M, it is impossible to migrate transparently.
>>
>> This should never have made it into the HVM ABI, but it did and we're a
>> decade too late, and only just starting to pick up the pieces.
>>
>> I presume you're doing some paging work here, and are logically
>> restoring a guest without its knowledge?
>>
> Correct, I'm creating a VM by populating its physmap with mem_shared
> entries from another domain that's paused by looping through all pages
> and memsharing them. Pages that are not sharable I manually allocate
> new pages for and copy them over (or simply plug the GFN in with
> INVALID_MFN if the type is such that it allows that). Currently this
> works fine when the domain I'm populating from was just restored from
> a savefile, including launching the toolstack for the new domain and
> interacting with its VNC/network/etc. But I'm running into trouble
> when the domain I'm copying from was unpaused before this sharing
> takes place. Evidently unpausing the domain introduces discrepancies
> in its memory space compared to when it's just been restored, and this
> grant mapping is one page that pops up as being mapped in but it's
> unsharable since it's a PGC_xen_heap page. The new domain does have
> the page allocated already but it doesn't have a GFN yet, so I can't
> copy its content over since I have no way to ensure that MFN will be
> used for the same GFN. There are other pages that also pop up as now
> not being shared, but it seems like running a domain_soft_reset on the
> domain makes them go away (resetting event channels in particular). So
> anyway, I'm having a hard time figuring out what changes are made to
> the domain after it is unpaused, as I need to revert them all to make
> the domain be in the same state it would be when restored from the
> savefile.

It's not safe to blindly copy any xenheap page, irrespective of whether
it is present in the p2m or not.

For example, grants which are mapped in the paused domain need to appear
as unmapped in the cloned domain, or Xen is going to object to the state
it finds the new grant table in.

It sounds as if you want an explicitly "duplicate xenheap pages" step
which isn't a straightforward memcpy.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

