
Re: hypercalls with 64-bit results


  • To: Jan Beulich <jbeulich@xxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Wed, 16 Jun 2021 19:15:53 +0100
  • Cc: Anthony Perard <anthony.perard@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 16 Jun 2021 18:16:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16/06/2021 17:04, Jan Beulich wrote:
> All,
>
> several years back, do_memory_op() in libxc was changed to have a
> "long" return type, because some of the sub-ops return potentially
> large values as the hypercall return value (i.e. not in an argument
> structure field). As far as I can tell, however, this change didn't
> have the intended effect, which apparently manifests in the two
> remaining ovmf failures currently seen in the staging osstest flights.
> Anthony tells me that, as of not very long ago, ovmf puts the shared
> info page at a really high address, thus making the guest's p2m very
> large. Its size gets returned by XENMEM_maximum_gpfn as the hypercall
> return value.
>
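
To make the failure mode concrete, here is a minimal, self-contained
sketch of the truncation, assuming an LP64 build.  fake_hypercall() and
fake_ioctl() are made-up stand-ins for the real libxc/privcmd plumbing,
not actual interfaces:

    /* Illustration only: a 64-bit hypercall result (such as the GFN
     * XENMEM_maximum_gpfn hands back) loses its upper bits once it
     * has passed through an int-returning ioctl() layer. */
    #include <stdio.h>

    static long fake_hypercall(void)   /* stand-in for the hypercall */
    {
        return 0x100000123L;           /* needs more than 32 bits */
    }

    static int fake_ioctl(void)        /* userspace ioctl() returns int */
    {
        return fake_hypercall();       /* silently truncated to int */
    }

    int main(void)
    {
        long max_gpfn = fake_ioctl();  /* widening again is too late */

        printf("real result: %#lx, seen by libxc: %#lx\n",
               fake_hypercall(), max_gpfn);
        return 0;
    }

This prints 0x100000123 vs 0x123; a result with bit 31 set would in
addition come back negative and be mistaken for an error by callers.
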
> Since hypercalls from the tool stack are based on ioctl(), and since
> ioctl() has a return type of "int", I'm afraid there's no way we can
> deal with this by adjusting function return types in the libraries.
> Instead we appear to need either a new privcmd ioctl or new XENMEM_*
> subops (for those cases where potentially large values get returned).
>
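
If new XENMEM_* subops end up being the route taken, the shape would
presumably be along the lines of the sketch below, with the wide result
travelling in the argument structure (which privcmd copies through
untouched) rather than in the hypercall return value.  The structure
and field names are purely illustrative, not a proposed ABI:

    #include <stdint.h>

    /* Hypothetical replacement subop layout: the 64-bit result is an
     * OUT field of the argument structure, and the hypercall return
     * value only carries a small status code. */
    struct xen_maximum_gpfn_v2 {
        /* IN */
        uint16_t domid;       /* domid_t in Xen is a uint16_t */
        uint16_t pad[3];
        /* OUT */
        uint64_t max_gpfn;    /* full 64-bit GFN, no truncation risk */
    };

    /*
     * Hypervisor side, roughly:
     *     a.max_gpfn = domain_get_maximum_gpfn(d);
     *     if ( copy_to_guest(arg, &a, 1) )
     *         return -EFAULT;
     *     return 0;                        // always fits in an int
     */
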
> Until we manage to deal with this, I wonder whether we should suggest
> to the ovmf folks that they undo that change. In any event, I'm not
> really convinced this aggressive enlarging of the p2m is a good idea.
> There are a number of cases in the hypervisor where we try to reduce
> GFN ranges based on this upper bound, and in particular there is a
> loop in the mem-sharing code going all the way up to that limit. The
> EPT P2M dumping code also has such a loop.
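
To put numbers on that, a small standalone calculation (the frame
counts are illustrative, not measurements from the failing flights):

    /* Loops such as the mem-sharing audit or the EPT p2m dump walk
     * every GFN up to the reported maximum, so the work scales with
     * max_gpfn rather than with how much memory the guest has. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t max_gpfn_4g   = (4ULL << 30) / 4096 - 1; /* ~4GiB guest */
        uint64_t max_gpfn_high = 0x100000123ULL;          /* high mapping */

        printf("GFNs to walk, 4GiB guest:   %llu\n",
               (unsigned long long)(max_gpfn_4g + 1));
        printf("GFNs to walk, enlarged p2m: %llu\n",
               (unsigned long long)(max_gpfn_high + 1));
        return 0;
    }
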

There are multiple things in here which are disappointing, but I think
they've mostly been known already.

But I do agree that this is very much another nail in the coffin of the
ioctl ABI.

For ABIv2, there are many changes needed, and this ioctl ABI was never
going to survive, for other reasons too.  Obviously, we can't wait for
ABIv2 to fix this immediate issue.

However, I think it might be reasonable for full support of VMs larger
than 8T(?) (the point at which, with 4k pages, a GFN no longer fits in
a positive 32-bit int) to wait until ABIv2.

For now, I'd agree with trying to undo the change in OVMF.

~Andrew