
Re: [Xen-devel] [RFC PATCH 04/16] x86/xen: hypercall support for xenhost_t


  • To: Juergen Gross <jgross@xxxxxxxx>, Ankur Arora <ankur.a.arora@xxxxxxxxxx>, <linux-kernel@xxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 14 Jun 2019 09:00:41 +0100
  • Cc: pbonzini@xxxxxxxxxx, boris.ostrovsky@xxxxxxxxxx, sstabellini@xxxxxxxxxx, joao.m.martins@xxxxxxxxxx, konrad.wilk@xxxxxxxxxx
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14/06/2019 08:35, Juergen Gross wrote:
> On 14.06.19 09:20, Ankur Arora wrote:
>> On 2019-06-12 2:15 p.m., Andrew Cooper wrote:
>>> On 09/05/2019 18:25, Ankur Arora wrote:
>>>> Allow for different hypercall implementations for different xenhost
>>>> types.
>>>> Nested xenhost, which has two underlying xenhosts, can use both
>>>> simultaneously.
>>>>
>>>> The hypercall macros (HYPERVISOR_*) implicitly use the default
>>>> xenhost. A new set of macros (hypervisor_*) takes a xenhost_t * as
>>>> a parameter and dispatches the hypercall via that xenhost.
>>>>
>>>> TODO:
>>>>    - Multicalls for now assume the default xenhost
>>>>    - xen_hypercall_* symbols are only generated for the default
>>>> xenhost.
>>>>
>>>> Signed-off-by: Ankur Arora <ankur.a.arora@xxxxxxxxxx>
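
For reference, my reading of the interface being proposed, as a sketch
only: the structure layout is illustrative, not lifted from the patch,
and a plain function pointer stands in for the per-xenhost hypercall
page.

    /* Sketch: each xenhost carries its own hypercall entry point. */
    #define __HYPERVISOR_xen_version 17     /* from Xen's public/xen.h */

    typedef struct xenhost {
        /* stand-in for the per-xenhost hypercall page */
        long (*hypercall)(unsigned int op, unsigned long a1,
                          unsigned long a2);
    } xenhost_t;

    extern xenhost_t *xh_default;           /* the "default" xenhost */

    /* New interface: the caller names the xenhost explicitly. */
    static inline long hypervisor_xen_version(xenhost_t *xh,
                                              int cmd, void *arg)
    {
        return xh->hypercall(__HYPERVISOR_xen_version,
                             (unsigned long)cmd, (unsigned long)arg);
    }

    /* Existing interface: implicitly the default xenhost. */
    #define HYPERVISOR_xen_version(cmd, arg) \
        hypervisor_xen_version(xh_default, cmd, arg)

A nested setup would then instantiate two xenhost_t objects whose
hypercalls trap to different hypervisors, which is what the discussion
below is about.
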
>>>
>>> Again, what is the hypervisor nesting and/or guest layout here?
>> Two hypervisors, L0 and L1, and the guest is a child of the L1
>> hypervisor but could have PV devices attached to both L0 and L1
>> hypervisors.
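
So the layout under discussion is:

    L0 Xen                      (outermost hypervisor)
     `- L1 Xen                  (nested hypervisor)
         `- guest               (PV devices from both L0 and L1)
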
>>
>>>
>>> I can't think of any case where a single piece of software can
>>> legitimately have two hypercall pages, because if it has one working
>>> one, it is by definition a guest, and therefore not privileged
>>> enough to
>>> use the outer one.
>> Depending on which hypercall page is used, the hypercall would
>> (eventually) land in the corresponding hypervisor.
>>
>> Juergen elsewhere pointed out that proxying hypercalls is a better
>> approach, so I'm not really considering this any more. But given this
>> layout, and assuming the hypercall pages could be encoded differently,
>> would it still not work?
>
> Hypercalls might work, but it is a bad idea and a violation of layering
> to let an L1 guest issue hypercalls to the L0 hypervisor, as those
> hypercalls could influence other L1 guests and even the L1 hypervisor.
>
> Hmm, thinking more about it, I even doubt those hypercalls could work in
> all cases: when issued from an L1 PV guest, the hypercalls would appear
> to the L0 hypervisor to come from user mode, and that is not allowed.

That is exactly the point I was trying to make.

If L2 is an HVM guest, then both of its hypercall pages will use
VMCALL/VMMCALL, so both end up making hypercalls to L1 rather than having
one go to L0.

If L2 is a PV guest, then one hypercall page will use SYSCALL/INT 82,
which goes to L1, and one will use VMCALL/VMMCALL, which goes to L0; but
L0 will see that hypercall arriving from ring 1 (32-bit PV) or ring 3
(64-bit PV) and reject it.
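
To make that concrete: a hypercall page is an array of 32-byte stubs, one
per hypercall, and the only difference between the flavours is the
trapping instruction inside each stub. A minimal sketch (the instruction
encodings are real; the layout is simplified from what Xen actually
writes into the page):

    /* Sketch: fill in one hypercall stub.  'trap' selects where the
     * hypercall lands: syscall/int $0x82 -> L1, vmcall/vmmcall -> L0. */
    #include <stdint.h>
    #include <string.h>

    static const uint8_t syscall_insn[] = { 0x0f, 0x05 };       /* syscall */
    static const uint8_t vmcall_insn[]  = { 0x0f, 0x01, 0xc1 }; /* vmcall  */
    static const uint8_t vmmcall_insn[] = { 0x0f, 0x01, 0xd9 }; /* vmmcall */

    static void write_stub(uint8_t *stub, uint32_t nr,
                           const uint8_t *trap, size_t len)
    {
        stub[0] = 0xb8;                    /* mov $nr, %eax */
        memcpy(stub + 1, &nr, sizeof(nr));
        memcpy(stub + 5, trap, len);       /* the trapping instruction */
        stub[5 + len] = 0xc3;              /* ret */
    }

Whichever instruction ends up in the stub, the privilege problem stands:
a 64-bit PV kernel executes these stubs in ring 3 (ring 1 for 32-bit), so
the VMCALL/VMMCALL variant reaches L0 from an unprivileged context and is
rejected.
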

However you nest the system, every guest has only a single instance of
"supervisor software", and therefore only a single context from which the
next hypervisor up will tolerate hypercalls.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

